Tips & tricks for installing and running IBM products

Download the Brave browser and earn BAT tokens

Tom Bosmans  27 April 2019 21:54:04
The Brave browser promises better privacy, and to pay you directly for the ads you're looking at.
The payout is in BAT (Basic Attention Tokens).

I think this looks promising.

Download the browser here :

ISAM with VirtualBox and Vagrant for development

Tom Bosmans  15 March 2019 13:36:28

How to run ISAM on Virtual Box (and how to run it using Vagrant)

The goal of this post is to get IBM Security Access Manager running on VirtualBox, on my local machine.  This will allow me to test the Ansible playbooks I'm preparing locally, before committing them to the Git repository.
As a small addition, I use Vagrant to quickly set up a new, clean instance.  Vagrant does not bring a whole lot of value in this case, because ISAM is a locked-down appliance and there is not much Vagrant can actually do.

Download the ISAM 9.0.6 ISO file

Get the ISAM 9.0.6 (or whatever the most recent version is) from Passport Advantage.

Setup Virtual Box

Under File/Preferences/Network, create a new NAT network.
You can just accept the defaults.

Create new Virtual Machine

Create a new virtual machine.  

- Configure 2 NICs, both with the Intel E1000 adapter.
Connect the first NIC to the NAT Network you prepared.
Connect the second NIC to the Host-Only Network.

- Configure storage: create a new SCSI controller, with the LSI Logic adapter.
Create a new disk with a size of at least 8 GB (100 GB is recommended)

- Connect the CD/DVD to the ISO file you downloaded earlier with ISAM on it

- Assign 2048 MB of memory

- Disable Audio

Advanced configuration for the Virtual Machine

To run the ISAM appliance on VirtualBox, we need to trick it into thinking it's running on VMware.

Open a command prompt and navigate to the Virtual Box installation folder.
Run the "list vms" command to get a list of your virtual machines.

C:\Program Files\Oracle\VirtualBox> VBoxManage.exe list vms

"isam905" {1c548e84-f7cd-4283-a36a-71843757d1af}

Run the following command to "fake" VMware.  Use the output of the previous command for the VM name.
This is the "magic" command that makes everything work:

VBoxManage setextradata "isam905" "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVendor"  "VMware Virtual Platform"

These commands configure port forwarding for initial configuration:

VBoxManage modifyvm "isam905" --natpf1 "guestssh,tcp,,2222,,22"

VBoxManage modifyvm "isam905" --natpf1 "lmi,tcp,,4443,,443"

Configure ISAM

Start the VM and configure it.
After the initial boot from the DVD and installation of the image, you must disconnect the DVD pointing to the ISO file.

Then reboot the machine; it should boot to the console.

Access the LMI

You can now access the LMI on

The console is available by ssh on port 2222:

ssh -p 2222 admin@

Next steps

I can now use Ansible to configure the ISAM VM, specifically:
- perform the first steps, activation, etc.
- configure the second interface on the Host-Only adapter, so ISAM is accessible on a "normal" address as well (without port forwarding).
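As a sketch, a minimal Ansible inventory for this setup could look like the following.  The group and variable names here are assumptions (the real ones depend on the Ansible roles you use); the port matches the "lmi" NAT rule configured earlier:

```shell
# Write a minimal inventory file for the port-forwarded appliance.
# "isam_appliances" and "lmi_port" are hypothetical names; 4443 is the
# host port forwarded to the LMI in the VBoxManage natpf1 rule above.
cat > inventory.ini <<'EOF'
[isam_appliances]
isam905 ansible_host=127.0.0.1 lmi_port=4443
EOF
```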

Another thing I'm working on is using Vagrant to rapidly start up a temporary ISAM appliance.


Using Atom as text editor - let’s say I’m not convinced

Tom Bosmans  27 February 2019 13:56:54
I've been working with Atom for a few days now, because it has integrated Git/GitHub support and is available cross-platform, but boy ... I hate it.

It does not do any of the basic stuff I expect an editor to do:

- why is searching so hard?
- when I open a file again in the tree, it OPENS THE FILE AGAIN instead of going to the tab where the file is already open.  Other (free) editors at least give a warning if you try to do that.
- because of that (?), it overwrites my changes from time to time, seemingly at random
- sloooow

Also, the integration with Git/GitHub seems random at best.  I regularly need to restart the editor because I can't access the Git/GitHub functions.

I hate it ... I even prefer vi.

There are things that I like as well, but I have not found anything yet that would make me recommend Atom over anything else ...

Logout everywhere for OIDC/OAuth2 on ISAM

Tom Bosmans  22 January 2019 12:00:40

Single sign on

We have an environment where multiple websites are configured to use OIDC authentication (authorization code flow) to an IBM ISAM acting as the Idp (Identity Provider).
All these websites expect different scopes in their tokens (e.g. access tokens and ID tokens).

Of course, the user can also use multiple devices (browsers) to access the sites.

The IdP can thus hand out a number of different tokens for a single user.

We also created a custom "Remember Me" function that relies on an OAuth access token stored in a cookie (more on that some other time).

So a user effectively has single sign-on between all the websites now: every time a new access token is requested (e.g. with a different scope), the user is redirected to the IdP.  But because of the "Remember Me" cookie, the user is logged on to the IdP automatically.  The IdP then hands out the new access token.

The challenge now is to implement a logout function that not only invalidates the current access token in use for that particular website, but also all other active tokens (access tokens, refresh tokens, ...).


We want to terminate ALL active tokens: from all different applications (client_id), but also from all logged-in devices - logout everywhere.

To remove all tokens for a particular (logged-in) user, there are methods in the API that would do what we need.  Note that this removes the OAuth tokens that exist for the user across ALL devices the user is using.

WebSeal's Single-signoff-uri

Our initial idea was that we could use WebSEAL's single-signoff-uri parameter.  This is intended as a mechanism to log the user out of any backend applications when the WebSEAL session is terminated.
There are four different mechanisms that can terminate a WebSEAL session:

User request by accessing pkmslogout.
Session timeout.
EAI session termination command.
Session terminate command from the pdadmin tool.

When the WebSEAL session terminates, it sends (GET) requests to the URLs you configure here, including cookies and headers from the incoming request.
By default (after you've configured the OIDC and API Connect features), there's already an entry for OAuth:

single-signoff-uri = /mga/sps/oauth/oauth20/logout

So a single-signoff-uri entry that points to our custom infomap would be triggered when the WebSEAL session terminates ...
The problem with this configuration in our scenario is that the logout mechanism would then be triggered every time you are logged out of the IdP.  We only want to trigger it when the user clicks "Logout"!

Logout everywhere

Infomap setup

The infomap can obviously also be called directly by the enduser.

By using the API endpoint, we can furthermore do the calls in an Ajax call (JSON returned).  Note that this requires a CORS configuration in most cases.

So, assuming my ISAM that acts as the IdP is on , the call to the infomap could look like this.  Note the difference in the URI (apiauthsvc vs. authsvc).
API Call (return JSON)

For the API call to succeed, you must add an Accept header to your GET call with the value "application/json" (see here, for instance : )
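As an illustration (not taken from this post), such an API call could be made with curl.  The hostname and the mechanism name "fedlogout" are placeholders you would replace with your own:

```shell
# Hypothetical API call to the infomap; idp.example.com and the PolicyId
# suffix "fedlogout" are assumptions. -k skips TLS verification for a
# self-signed test certificate.
curl -k -H "Accept: application/json" \
  "https://idp.example.com/mga/sps/apiauthsvc?PolicyId=urn:ibm:security:authentication:asf:fedlogout"
```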

You need to have the necessary template files in place; you need


The HTML file is returned for the browser call, the JSON file when you use the API call.

I've used this excellent blog entry from my colleague Shane Weeden as an example and a source for code:

Infomap code

Now this code does two things: it calls the deleteAllTokensForUser function to remove all tokens for the logged-in user, and then it performs a logout by using a REST call to the pdadmin utility, to run the command

server task terminate all_sessions

This code depends on the existence of a Server Connection named "ISAM Runtime REST", which should point to your LMI, with the userid and password for an admin user.
The details to run the pdadmin command are hardcoded in this example:
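For reference, the hardcoded request body is plain JSON.  A sketch of what it expands to, with a hypothetical WebSEAL instance name "default-webseald-isam" and user "testuser" filled in (both are placeholders, not values from this post):

```shell
# Build the pdadmin request body that the infomap posts to /isam/pdadmin.
# The instance name, password placeholder and username are illustrative.
cat > pdadmin-body.json <<'EOF'
{"admin_id": "sec_master",
 "admin_pwd": "<sec_master_password>",
 "commands": "server task default-webseald-isam terminate all_sessions testuser"}
EOF
# Sanity-check that the body is valid JSON.
python3 -m json.tool pdadmin-body.json
```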

What is missing from this code is the possibility to pass a redirect URL.  This would make sense in the "browser" version, less so for the API version of the infomap.

The infomap code:


// Utility to get the username from the Infomap context
function getInfomapUsername() {
    // get username from already authenticated user
    var result = context.get(Scope.REQUEST,
            "urn:ibm:security:asf:request:token:attribute", "username");
    IDMappingExtUtils.traceString("[FEDLOGOUT] : username from existing token: " + result);

    // if not there, try getting it from the session (e.g. UsernamePassword module)
    if (result == null) {
        result = context.get(Scope.SESSION,
                "urn:ibm:security:asf:response:token:attributes", "username");
        IDMappingExtUtils.traceString("[FEDLOGOUT] : username from session: " + result);
    }
    return result;
}

// Utility to html encode a string
function htmlEncode(str) {
    return String(str).replace(/&/g, '&amp;').replace(/</g, '&lt;')
                      .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

// Perform pdadmin session deletion
function deleteAllSessions(username) {
    IDMappingExtUtils.traceString("[FEDLOGOUT] Entering deleteAllSessions()");
    // Get the Web Server Connection details for the ISAM Runtime
    var servername = "ISAM Runtime REST";
    var ws1 = ServerConnectionFactory.getWebConnectionByName(servername);
    if (ws1 == null) {
        IDMappingExtUtils.traceString("[FEDLOGOUT] Could not find the server data for " + servername);
        return false;
    }
    var restURL = ws1.getUrl() + "/isam/pdadmin";
    var adminUser = ws1.getUser();
    var adminPwd = ws1.getPasswd();
    IDMappingExtUtils.traceString("[FEDLOGOUT] url : " + restURL + " adminUser : " + adminUser);
    var headers = new Headers();
    headers.addHeader("Content-Type", "application/json");
    headers.addHeader("Accept", "application/json");
    var respbody = '{"admin_id":"sec_master", "admin_pwd":"<sec_master_password>", "commands":"server task <websealinstance> terminate all_sessions ' + username + '"}';
    var hr = HttpClient.httpPost(restURL, headers, respbody, null, adminUser, adminPwd, null, null);
    if (hr != null) {
        var rc = hr.getCode();
        IDMappingExtUtils.traceString("[FEDLOGOUT] got a response code: " + rc);
        var body = hr.getBody();
        if (rc == 200) {
            if (body != null) {
                IDMappingExtUtils.traceString("[FEDLOGOUT] got a response body: " + body);
            } else {
                IDMappingExtUtils.traceString("[FEDLOGOUT] body of response from pdadmin is null?");
            }
        } else {
            IDMappingExtUtils.traceString("[FEDLOGOUT] HTTP response code from pdadmin is " + rc);
        }
    } else {
        IDMappingExtUtils.traceString("[FEDLOGOUT] HTTP post to pdadmin failed");
    }
    return true;
}

// Infomap that logs the user out everywhere, and returns a page indicating
// if you are authenticated and who you are.
var username = getInfomapUsername();
// We must have a logged in session, otherwise we cannot logout ....
if (username != null) {
    var tokens_for_user = OAuthMappingExtUtils.getAllTokensForUser(username);
    var tokens = [];
    for (var i = 0; i < tokens_for_user.length; i++) {
        var a = tokens_for_user[i];
        tokens.push(a.getClientId() + "---" + a.getId() + "---" + a.getType());
        IDMappingExtUtils.traceString("[FEDLOGOUT] : Active Token ID " + a.getId()
                + ", ClientID " + a.getClientId() + " getType " + a.getType());
    }
    // Remove all tokens for the user, across all clients and devices
    OAuthMappingExtUtils.deleteAllTokensForUser(username);
    IDMappingExtUtils.traceString("[FEDLOGOUT] : Deleted all tokens for : " + username);

    // Now invalidate the session on the idp
    IDMappingExtUtils.traceString("[FEDLOGOUT] : Trying to logout : " + username);
    deleteAllSessions(username);

    // Now return the page ... the page should actually never be shown ...
    // logout.json should exist as well
    macros.put("@AUTHENTICATED@", '' + (username != null));
    macros.put("@USERNAME@", (username != null ? htmlEncode(username) : ""));
    macros.put("@TOKENS@", tokens.join(";"));
} else {
    IDMappingExtUtils.traceString("[FEDLOGOUT] : Anonymous session; no logout performed");
}
// we never actually perform a login with this infomap
success.setValue(false);

OAuth and OpenID Connect provider configuration for reverse proxy instances - reuse acl option

Tom Bosmans  10 October 2018 10:12:04
I have multiple reverse proxy instances configured on an appliance, and recently added a new one.

I performed the "OAuth and OpenID Connect Provider configuration", and did not select the options "Reuse ACL" or "Reuse Certificates".

After that, I noticed that my OpenID authentication no longer worked correctly on the other instances.
The reason was that the ACLs for the objects in /mga/sps/oauth/oauth20/ had disappeared.

So if you have already configured other instances on your appliance for "OAuth and OpenID Connect", always enable "Reuse ACL"!

What actually happens is easy to follow in the autocfg__oauth.log file in the Reverse Proxy log files:

If "Reuse ACL" is not checked, the configuration will first detach the ACLs from all objects, delete the ACLs and then add them again, but only for the reverse proxy where you run the configuration ...
So you lose all configuration that uses the isam_oauth_* ACLs in the other instances.

Moral of the story: always enable "Reuse ACL" when running the "OAuth and OpenID Connect Provider configuration".

Add a header X-LConn-UserId to all requests in Connections

Tom Bosmans  8 August 2018 11:51:30
By adding this generic property to LotusConnections-config.xml, all requests will contain a header X-LConn-UserId that identifies the logged-in user.

Depending on your configuration, this most likely is the email address of the logged in user.

<!-- To display email of logged in user in IHS: -->
<genericProperty name="">true</genericProperty>

You can then add this header value to the log configuration in Apache/IHS, so you have logs that include the user.  This is pretty helpful for tracing problems.
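For example, in IHS/Apache httpd this could be a custom LogFormat.  This fragment is a sketch (the name "lconn_combined" and the log path are assumptions); the format string is the standard common log format with the header appended:

```apache
# Append the X-LConn-UserId request header to the access log.
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{X-LConn-UserId}i\"" lconn_combined
CustomLog logs/access_log lconn_combined
```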

Please note that this is not officially supported in any way!

IBM Cloud Private installation - Filebeat problem (CentOS7)

Tom Bosmans  13 July 2018 13:17:00
After installing IBM Cloud Private, I noticed that I did not see any log information in the ICP UI.

While checking the logs, I saw that Filebeat did not start correctly (or rather, completely failed to start).

(on the master node:)
[root@icpboot ~]# journalctl -xelf

Jul 13 11:54:17 hyperkube[1825]: E0713 11:54:17.168699    1825 kuberuntime_manager.go:733] container start failed: RunContainerError: failed to start container "ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.

Jul 13 11:54:17 hyperkube[1825]: E0713 11:54:17.168734    1825 pod_workers.go:186] Error syncing pod 2406dc66-85e4-11e8-8135-000c299e5111 ("logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"), skipping: failed to "StartContainer" for "filebeat" with RunContainerError: "failed to start container \"ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18\": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount."

Jul 13 11:54:28 hyperkube[1825]: I0713 11:54:28.083562    1825 kuberuntime_manager.go:513] Container {Name:filebeat Image:ibmcom/filebeat:5.5.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:NODE_HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/usr/share/filebeat/filebeat.yml SubPath:filebeat.yml MountPropagation:} {Name:data ReadOnly:false MountPath:/usr/share/filebeat/data SubPath: MountPropagation:} {Name:container-log ReadOnly:true MountPath:/var/log/containers SubPath: MountPropagation:} {Name:pod-log ReadOnly:true MountPath:/var/log/pods SubPath: MountPropagation:} {Name:docker-log ReadOnly:true MountPath:/var/lib/docker/containers/ SubPath: MountPropagation:} {Name:default-token-kbdxx ReadOnly:true MountPath:/var/run/secrets/ SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.

Jul 13 11:54:28 hyperkube[1825]: I0713 11:54:28.083787    1825 kuberuntime_manager.go:757] checking backoff for container "filebeat" in pod "logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"

This means that in the IBM Cloud Private UI, I don't see any logs.

Digging a bit further, I saw that the logging-elk-filebeat-ds indeed was not started.

[root@icpboot ~]# kubectl get ds --namespace=kube-system
NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE

auth-apikeys                         1         1         1         1            1           role=master     20h

auth-idp                             1         1         1         1            1           role=master     20h

auth-pap                             1         1         1         1            1           role=master     20h

auth-pdp                             1         1         1         1            1           role=master     20h

calico-node                          3         3         3         3            3                    20h

catalog-ui                           1         1         1         1            1           role=master     20h

icp-management-ingress               1         1         1         1            1           role=master     20h

kube-dns                             1         1         1         1            1           master=true     20h

logging-elk-filebeat-ds              3         3         2         3            0                    20h

metering-reader                      3         3         2         3            2                    20h

monitoring-prometheus-nodeexporter   3         3         3         3            3                    20h

nginx-ingress-controller             1         1         1         1            1           proxy=true      20h

platform-api                         1         1         1         1            1           master=true     20h

platform-deploy                      1         1         1         1            1           master=true     20h

platform-ui                          1         1         1         1            1           master=true     20h

rescheduler                          1         1         1         1            1           master=true     20h

service-catalog-apiserver            1         1         1         1            1           role=master     20h

unified-router                       1         1         1         1            1           master=true     20h

Now the problem is of course right there in the log file, but I did not know how to fix it:

Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.

On each node, execute these commands:

findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

mount --make-shared /var/lib/docker/containers

The result looks something like this:

[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

TARGET                     PROPAGATION

/var/lib/docker/containers private

[root@icpworker1 ~]# mount --make-shared /var/lib/docker/containers

[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

TARGET                     PROPAGATION

/var/lib/docker/containers shared

After that, the logging-elk-filebeat DaemonSet is available:

[root@icpboot ~]# kubectl get ds --namespace=kube-system

NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE


logging-elk-filebeat-ds              3         3         2         3            2                    20h


I don't know if this is a bug, or if it is caused by me trying to run ICP on CentOS 7 (which is not a supported platform) ...

Synology TFTP server for PXE Boot

Tom Bosmans  9 July 2018 10:54:08
Something I've been meaning to do for a while now is to set up my Synology NAS as a PXE boot server.

I want to be able to easily install new Operating systems on any new hardware I get, but more importantly, to easily install multiple Virtual Machines on my primary workstation without too much hassle.
This will involve setting up the Synology as a TFTP server, supplying the correct PXE files (images and configuration), and also configuring my DHCP server.

The official documentation from Synology is woefully inadequate for getting PXE up and running; it is missing a number of vital steps.

Luckily, there are other sources on the internet that fill in the gaps.

Configure TFTP on Synology

Prepare a shared folder/volume.  In my case, I have a shared volume named "shared", where I created a folder "PXEBOOT".

Go to Main Menu > Control Panel > File Services and select the TFTP tab.
Tick Enable TFTP service.
Image:Synology TFTP server for PXE Boot

Enter the folder you prepared earlier.

Now you need to add the folder structure for TFTP to be able to show a boot menu, and prepare the images.

Check out this excellent guide, which contains a link to a zip file with a configuration that contains CentOS and Ubuntu images.

(it actually uses this Github repository : )

The GitHub repository is not quite up to date, but it's easy to add newer images; I've added Ubuntu 18.04 and CentOS 7.5.  It is configured to use the netinstall (HTTP), so you do need an internet connection.

Unzip it, and put it on your shared folder, on your Synology, so it looks like this:
Image:Synology TFTP server for PXE Boot

Verify TFTP

I'm using Red Hat 7.5, and I wanted to quickly test TFTP.  Unfortunately, the tftp client is not part of my configured repositories, so I just downloaded a client from .


tftp <ip address of synology>

tftp> verbose

Verbose mode on.

tftp> get pxelinux.0

getting from <ip address of synology>:pxelinux.0 to pxelinux.0 [netascii]

Received 26579 bytes in 0.2 seconds [1063767 bit/s]


This indicates that pxelinux.0, whose location needs to be configured in the DHCP server, is in the root of the TFTP server and is accessible by everyone.

Configure the Ubiquiti EdgeRouter's DHCP

A very complete guide to doing this can be found here.  You need to use the Ubiquiti CLI to do it.

I've configured the following (result of show service dhcp-server )

shared-network-name LAN2 {
    authoritative disable
    subnet {
        bootfile-name /pxelinux.0
        bootfile-server <ip address of synology>
        subnet-parameters "filename &quot;/pxelinux.0&quot;;"
    }
}
use-dnsmasq disable

Note that you must use the &quot; syntax!

The following commands were used :


edit service dhcp-server shared-network-name LAN2 subnet

set subnet-parameters "filename &quot;/pxelinux.0&quot;;"

set bootfile-name /pxelinux.0

set bootfile-server <ip address of synology>



show service dhcp-server

Issuing a new "set" command does not overwrite a value; instead, it adds a new line.  You need to remove the entries that are not correct (if you end up with multiple lines):

show service dhcp-server


edit service dhcp-server shared-network-name LAN2 subnet

delete subnet-parameters "filename &quot;/shared/PXEBOOT/pxelinux.0&quot;;"



show service dhcp-server

If you have multiple or wrong lines, you will see PXE errors on the boot screen.

VMWare workstation

Lastly, I need to configure VMware Workstation.
Two important things here:
- I added a bridged network adapter, to obtain a DHCP address from my home network.  This adapter will receive the PXE boot instructions.
- I increased the memory from 1024 MB to 2048 MB, because the CentOS 7.5 installer complained about "no space left on device" on the /tmp drive during installation (which effectively means: in memory).
Image:Synology TFTP server for PXE Boot

When booting, I now get the configured menu options from my PXE boot server ...
Image:Synology TFTP server for PXE Boot

Then step through the installation options as you would in a normal manual installation.  Of course it's also possible to prepare automated installations, but that is another topic.

Letsencrypt certificates for my own test servers

Tom Bosmans  26 June 2018 14:02:15
Yes, it's a bit over the top to use Let's Encrypt certificates for test systems, where a self-signed certificate would serve a similar purpose.  Furthermore, a Let's Encrypt certificate has a short lifetime and needs to be replaced every 3 months.

But since Let's Encrypt brought us wildcard certificates fairly recently (March 2018), there is an advantage here.  You only need this single certificate, and you can use it on all your systems.  Of course, in most cases you don't want to use wildcard certificates, but for my case (non-production test systems), this is perfectly valid.

You also must use the DNS challenge (instead of the more traditional HTTP challenge that Let's Encrypt uses for verification).  The mechanism that is used is similar to other verification mechanisms like DKIM for SMTP (see DKIM deployed on my mail servers).

My use case here is a test environment running ISAM - IBM Security Access Manager.  Not having to trust the signer each time I access a page with a self-signed certificate is a huge plus when demoing a solution :-)

1. Prerequisites
  • You need a recent version of certbot (with support for the DNS challenge and for ACMEv2); I'm using certbot 0.24.0
  • This certbot needs to run on a system with internet access (outbound only; it needs to connect to the Let's Encrypt systems)
  • You also need a public DNS domain, because Let's Encrypt uses DNS for the verification.  The only thing that needs to be in the domain records is a TXT record, by the way.  You don't need to configure anything else.
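Putting the prerequisites together, the full command could look like this sketch.  Here example.com stands in for your own domain, and the --server URL is the public ACMEv2 endpoint (wildcard certificates are only available on ACMEv2):

```shell
# Request a wildcard certificate interactively via the DNS-01 challenge.
# Replace example.com with your own domain.
certbot certonly --manual \
  --server https://acme-v02.api.letsencrypt.org/directory \
  --preferred-challenges dns \
  -d '*.example.com'
```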

2. DNS Preparation

I ordered a DNS domain from my preferred DNS provider.
I could get a .eu domain for something like 3 euro for the first year.

There is nothing to configure for now; the configuration is done during the certbot run.

3. Certbot

Run certbot with the option --preferred-challenges dns , and define your domain as *.. (mine is * .
You can also use certbot-auto, and you could do this with a single command line, but I used this method:

[root@system ~]# certbot certonly --manual --server --preferred-challenges dns
Saving debug log to /var/log/letsencrypt/letsencrypt.log

Plugins selected: Authenticator manual, Installer None

Starting new HTTPS connection (1):

Please enter in your domain name(s) (comma and/or space separated)  (Enter 'c'

to cancel): *

Obtaining a new certificate

Performing the following challenges:

dns-01 challenge for


NOTE: The IP of this machine will be publicly logged as having requested this

certificate. If you're running certbot in manual mode on a machine that is not

your server, please ensure you're okay with that.

Are you OK with your IP being logged?


(Y)es/(N)o: Y


Please deploy a DNS TXT record under the name with the following value:


Before continuing, verify the record is deployed.


Press Enter to Continue

So now you need to go to your DNS provider, and create a TXT DNS record for _acme-challenge. , in my case,

_acme-challenge 28800 IN TXT 9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw

In the interface of my DNS provider, it looks like this :  I need to create a new subdomain, named .

Image:Letsencrypt certificates for my own test servers
In the next step, I can then enter the value that certbot provided in a TXT field.

Now, once you have saved your DNS entry, DO NOT continue immediately.

Give it at least one minute, so you're certain the DNS entry is available; or even better, verify that your nameserver is up to date by performing a DNS lookup, for instance using dig.

In my case, I can use this command to query the nameserver of my provider.  Do this on another system than the one where your certbot command is running, or open a new session.

dig -t txt +short
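Spelled out with placeholders (example.com for the domain and ns1.example.net for the provider's nameserver - both are assumptions), the check would be:

```shell
# Query the provider's nameserver directly for the ACME challenge record.
dig -t txt +short _acme-challenge.example.com @ns1.example.net
```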


It needs to return the value of the TXT record.  As long as it doesn't, DO NOT continue in the certbot session, because it will fail and you will need to start over.

But if it does return the key, continue.  

Waiting for verification...

Cleaning up challenges


- Congratulations! Your certificate and chain have been saved at:


Your key file has been saved at:


Your cert will expire on 2018-09-24. To obtain a new or tweaked

version of this certificate in the future, simply run certbot

again. To non-interactively renew *all* of your certificates, run

"certbot renew"

- If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt:
Donating to EFF:           

Now the chain and certificate files are in the standard Let's Encrypt locations (/etc/letsencrypt/live/./
Since this is a wildcard certificate, you likely want to copy it elsewhere and distribute it across your systems.

4. Let's Encrypt keys and ISAM

IBM Security Access Manager expects PKCS#12 certificates, so we first need to use openssl to convert the Let's Encrypt certificates to a .p12 file.
I'm using ISAM 9.0.5, as an OVA.

openssl pkcs12 -export -out \

-inkey /etc/letsencrypt/live/ \

-in /etc/letsencrypt/live/ \

-certfile /etc/letsencrypt/live/

Enter Export Password:

Verifying - Enter Export Password:

Use a strong password to protect your key!
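As a self-contained illustration of the same conversion, the following uses a throwaway self-signed key and certificate instead of the real Let's Encrypt files (all file names and the password are placeholders):

```shell
# Generate a throwaway key and self-signed certificate to stand in for the
# Let's Encrypt privkey.pem / cert.pem files.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo.example.com" -keyout demo-key.pem -out demo-cert.pem
# Convert the PEM key and certificate into a PKCS#12 bundle, as ISAM expects.
openssl pkcs12 -export -passout pass:changeit \
  -inkey demo-key.pem -in demo-cert.pem -out demo.p12
```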

Now get the certificate to a system from which you can upload it to ISAM.

In the LMI, I want it in 2 places:
- the management certificate
- the default certificate for the reverse proxies

hosts file on ISAM

Add the IP addresses for the interfaces you want to use to the hosts file on ISAM.  We could use DNS as well (since we have the public DNS domain), but since this is internal, I'm not going to do that and will use simple hosts files instead.
Image:Letsencrypt certificates for my own test servers

Also, I use the following hosts file on my local machine to access my environment:

management certificate

Go to Manage System Settings/System Settings/Management SSL Certificate

Image:Letsencrypt certificates for my own test servers
The LMI will be restarted after this.

pdsrv keydb

Edit the pdsrv key database: go to Personal certificates and select "Import".
Image:Letsencrypt certificates for my own test servers
Then select the "Let's Encrypt" certificate, click "Edit" and set it as the default certificate.

Image:Letsencrypt certificates for my own test servers

The DST Root CA X3 is missing from the IBM-provided key databases (this is actually a bug in my opinion), so you basically need to add it to the Signer Certificates in all key databases.  You can download it from the website in the link, or you can export it from any modern browser (for example Firefox, below).

Image:Letsencrypt certificates for my own test servers

Note that the reverse proxy can't handle the missing root CA, while the LMI does not seem to require it.  In any case, any server you want to protect with ISAM that uses TLS/SSL and a Let's Encrypt certificate also requires you to add this DST Root CA X3.
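The effect of a missing signer certificate can be demonstrated with openssl alone: verification of a certificate fails unless its issuing CA is trusted. The toy CA and hostname below are throwaway test material, not the real DST Root CA X3:

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d); cd "$workdir"
# Make a toy root CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Toy Root CA" -keyout ca.key -out ca.pem
# Make a leaf certificate signed by that CA
openssl req -newkey rsa:2048 -nodes -subj "/CN=server.example.com" \
  -keyout leaf.key -out leaf.csr
openssl x509 -req -in leaf.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -days 1 -out leaf.pem
# With the signer present, verification succeeds...
openssl verify -CAfile ca.pem leaf.pem
# ...without it, it fails -- the same situation as the missing DST root CA
openssl verify leaf.pem || echo "verification failed without the signer"
```

This is exactly what the reverse proxy runs into when the root CA is absent from its key database.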

Note that HTTP/2 results in an ERR_SPDY_INADEQUATE_TRANSPORT_SECURITY error at this point!

You need to restart the reverse proxies after saving and deploying this.

5. End result

I can now access the LMI on this url :

Image:Letsencrypt certificates for my own test servers

... and the reverse proxy (using Chrome this time round)
Image:Letsencrypt certificates for my own test servers

Everything is green, so everything is OK (at least, OK enough for my test environment).

Additional information

ISAM automation

To automate all these manual actions, I really should use automation tooling like Ansible.
Fortunately, there is a publicly available repository with Ansible roles and playbooks for ISAM.  It would be relatively straightforward to automate the management of the certificates here (generate a new one, use openssl to convert it, upload it to ISAM for the reverse proxies and for the management interface).

My zonefile, for your information

This zonefile is obviously specific to my DNS provider and my situation, but it may still serve as an example of what you need to make this work.
It's the _acme-challenge entry that does the trick.


@ 28800 IN SOA 2018062619 10800 3600 604800 28800
@ 28800 IN NS
@ 28800 IN NS
@ 28800 IN NS
_acme-challenge 28800 IN TXT 9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw

WebSphere liberty docker on Synology NAS

Tom Bosmans  21 June 2018 16:44:55
I've got a Synology DS415+ at home, and have Docker running on it.  I needed a quick way to install a WebSphere Liberty server, and since the Synology NAS supports Docker containers, why not...  It's very easy to get up and running; you just need a few extra configuration settings.

Please note that I'm not sure whether this works on every Synology, though.  I think you need a Synology with an Intel CPU (mine is an Intel Atom C2538).


Install the Docker package on your Synology NAS using the Package Center.

Docker interface

Start Docker once it's installed.  In the Registry, you can search for "liberty".  Use the "Download" button to download the image.

The Synology uses Docker Hub, and it's this version you want to download:

There's more information there, for instance on how to handle your certificates and key databases.

Image:WebSphere liberty docker on Synology NAS

Once the download of the image is complete, select Liberty and click "Launch".  This creates an actual container from the image.

Image:WebSphere liberty docker on Synology NAS

You can then configure the container.  In particular, the volumes and the ports need to be configured.
Since changes inside the container do not survive recreation, you need volumes to persist data between restarts.

Image:WebSphere liberty docker on Synology NAS

These 3 volumes are needed for the following paths:

/opt/ibm/wlp/output/ (or, more precisely, the path in the WLP_OUTPUT_DIR variable)

/logs (or, more precisely, the path in the LOG_DIR variable)

/config

The documentation states you only need /logs and /config, but I found that the first path is also necessary.

You can also choose to do this later, by using the "Edit" button.
This is my volume configuration:

Image:WebSphere liberty docker on Synology NAS

The ports, by default, are set to Automatic.  This means they change after every restart, which is not very handy.
I chose ports 19080 and 19443 for the HTTP and HTTPS ports respectively.

Image:WebSphere liberty docker on Synology NAS

The environment variables can be used to give the Liberty container the correct startup options.  A very useful one is the Java options used to start the Liberty JVM.
By default, the JVM starts in UTC, and there's no "global" way to configure your Docker containers to start in the correct timezone by default.

So add -Duser.timezone=Europe/Brussels (or your timezone of choice) to the IBM_JAVA_OPTIONS environment variable:

IBM_JAVA_OPTIONS    -XX:+UseContainerSupport -Duser.timezone=Europe/Brussels

Image:WebSphere liberty docker on Synology NAS
This concludes the configuration of the Docker container.
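For reference, the same container could also be created from a shell with docker run instead of the Synology UI. The image tag, container name and host-side /volume1/... paths below are my assumptions for this setup; the command is only echoed so it can be inspected before running it for real:

```shell
#!/bin/sh
# Sketch of the docker run equivalent of the UI settings above.
# Image tag, container name and /volume1/... paths are assumptions;
# ports and IBM_JAVA_OPTIONS match the choices described in the text.
IMAGE=websphere-liberty:latest
cmd="docker run -d --name websphere-liberty1 \
  -p 19080:9080 -p 19443:9443 \
  -v /volume1/docker/liberty/config:/config \
  -v /volume1/docker/liberty/logs:/logs \
  -v /volume1/docker/liberty/output:/opt/ibm/wlp/output \
  -e IBM_JAVA_OPTIONS='-XX:+UseContainerSupport -Duser.timezone=Europe/Brussels' \
  $IMAGE"
echo "$cmd"
```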

Configure Liberty server (server.xml)

To get a meaningful Liberty server, you probably want to deploy your own configuration.
Using File Station in Synology, I have the following folder structure (containing the volume configuration of the container).

Image:WebSphere liberty docker on Synology NAS

The magic happens in the config directory.  As with a "normal" Liberty installation, you have a server.xml file here (empty by default).
There's also an "apps" directory that contains your EAR files.

In my case, I've used a simple configuration that you can download here : server.xml

Image:WebSphere liberty docker on Synology NAS

This configuration contains a basic user registry and an LTPA configuration, and has 2 applications installed: the adminCenter and defaultApplication.ear (Snoop).

The LTPA keys are generated automatically when you first start the container.  Note that for LTPA SSO to work, you must configure your Liberty Server to run in the correct timezone (see previous topic) !

There are some specific steps to take before everything works:

SSL configuration

When you start the Docker image, a default key configuration is generated.  You can of course use your own key database, but I chose the quick and easy solution.

Open the keystore.xml file in config/configDropins/defaults.  Use the password of the defaultKeyStore in the keyStore element in your own server.xml.

<keyStore id="defaultKeyStore" password="<replace with your keystore.xml password>" />
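The generated password can also be pulled out of keystore.xml with a one-line sed. The file created below stands in for the container's /config/configDropins/defaults/keystore.xml (the password value is made up):

```shell
#!/bin/sh
set -e
# Sample file standing in for /config/configDropins/defaults/keystore.xml
workdir=$(mktemp -d)
cat > "$workdir/keystore.xml" <<'EOF'
<server>
    <keyStore id="defaultKeyStore" password="s3cret" />
</server>
EOF
# Extract the value of the password attribute
sed -n 's/.*password="\([^"]*\)".*/\1/p' "$workdir/keystore.xml"
```

Inside the container's terminal, pointing the same sed at the real keystore.xml prints the password to paste into your server.xml.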


There are multiple ways to install the adminCenter; this is the method I followed:

With the websphere-liberty container selected, click "Details".
Switch to the "Terminal" tab.
Click on "Create" to create a new bash terminal session.

Image:WebSphere liberty docker on Synology NAS

Use the following commands to install the adminCenter :

root@websphere-liberty1:/# cd /opt/ibm/wlp/bin                                                                          
root@websphere-liberty1:/opt/ibm/wlp/bin# ./installUtility install adminCenter-1.0      

After restarting the Docker container, the adminCenter is available on the following URL: https://:19443/adminCenter .
Image:WebSphere liberty docker on Synology NAS

You need to log in as the admin user (if using the server.xml provided here, the password is Passw0rd).

Image:WebSphere liberty docker on Synology NAS

More information on the adminCenter application can be found here:

Default Application

WebSphere Application Server comes out of the box with a DefaultApplication (aka Snoop), which is handy to check whether your server is working correctly.  Unfortunately, no DefaultApplication.ear comes with Liberty.
This version of DefaultApplication.ear works with Liberty.

So download this file and upload it to your Synology, into the "apps" directory.  Your Liberty server will install it automatically (or restart the Docker container, so the server.xml also becomes active).

The Snoop servlet is then available on https://:19443/snoop .  You do need to log in (if you use the server.xml provided here).

Image:WebSphere liberty docker on Synology NAS

Log files

The log that's shown in the "Details" page is not very useful.
Image:WebSphere liberty docker on Synology NAS
Fortunately, you can use File Station on the Synology to access the "log" directory, which contains the standard messages.log (and the other log files, like the FFDC logs, if you're interested in those).