Tips & tricks for installing and running IBM products

Virtual host junctions and federation on ISAM

Tom Bosmans  9 August 2019 10:15:40
I recently struggled with an OIDC federation setup on IBM Security Access Manager.

Instead of using standard, transparent path junctions, I was now using Virtual Host Junctions.
And I could not get the federation to work.

After a while, the penny dropped, and I realised that the /mga junction is not available in the Virtual Host Junction. Which I should have realised from the beginning, of course...

So there are two options.

match-vhj-first and session sharing



The "match-vhj-first" parameter is set to yes by default, which means that you cannot access the /mga junction (and hence cannot kick off the federation). Set this parameter to "no", and the /mga junction becomes available in the Virtual Host Junction.
For federation, you may get away with just that change, as long as you set up separate federations per virtual host.
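A minimal webseald.conf sketch of this first option (the stanza and parameter appear later in this article; the value is the only change):

```
[junction]
# with "no", WebSEAL matches standard junctions such as /mga before
# the virtual host junctions, so /mga stays reachable
match-vhj-first = no
```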

See this older article by Philip Nye on the related topic of Context Based Access control:

https://www.ibm.com/developerworks/mobile/library/se-accessmanager/index.html


Single federation per Reverse Proxy instance and session sharing


For the OIDC federation, I opted for a slightly different approach, though.  There is no real need to change the default "match-vhj-first" setting.  What is important in setting up federations is that the hostnames used are constant throughout the OIDC federation flow.  By setting up a "master" federation partner, I can kick off federation on all virtual host junctions on the same reverse proxy - no need to change anything in the federation setup when adding a new virtual host junction.

Webseal configuration


In the webseald.conf configuration file, I set the following parameters:


[junction]
match-vhj-first = yes

[session-cookie-domains]
domain = mydomain.tld

[session]
shared-domain-cookie = yes
tcp-session-cookie-name = PD-H-SESSION-ID-rp1
ssl-session-cookie-name = PD-S-SESSION-ID-rp1


This enables sharing of the WebSEAL session between the junctions (including virtual host junctions) within a reverse proxy (WebSEAL).
This is not necessary if you use the DSC (Distributed Session Cache), and it does not work if you don't have a single "cookie domain".

Federations


Key here is to have a single OIDC federation set up against the "base" WebSEAL reverse proxy, e.g. the IP address or hostname you used to set up the reverse proxy.

The URLs used in the federation configuration (redirectUriPrefix, providerId) need to point to that base WebSEAL reverse proxy.
Make sure not to mix the hostnames here, because then the federation will fail.

If you enabled local-response-redirect, you can then use the following kickoff URL to start the Federation.

[local-response-redirect]
local-response-redirect-uri = https://mywebseal.mydomain.tld/mga/sps/oidc/rp//kickoff/
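The double slash in the URI above is where the federation name and partner name belong. As a sketch with hypothetical names ("myfed" and "mypartner" are placeholders, not from the original setup):

```
[local-response-redirect]
# pattern: /mga/sps/oidc/rp/<federation name>/kickoff/<partner name>
local-response-redirect-uri = https://mywebseal.mydomain.tld/mga/sps/oidc/rp/myfed/kickoff/mypartner
```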

After a successful federation, thanks to the session sharing, you are also logged into your Virtual Host Junction.


The only correct way to setup the ISAM RTE with basic users for the UserLookupHelper

Tom Bosmans  2 August 2019 09:03:41
I've been struggling a bit to get the UserLookupHelper to work correctly in a custom authentication mechanism I am building for Username/Password authentication.  I am using the ISAM "all-in-one" deployment pattern that is becoming quite popular these days.  This basically consists of identical ISAM appliances that have all functions (reverse proxy, AAC, federation) and operate independently.  The environment provides high availability by adding identical appliances (typically using automation).  The identical appliances do share some resources, notably the High Volume Database and the federated LDAP repositories.
This pattern also relies on using Basic users in Federated repositories.

The problem I came across is that in this setup, the UserLookupHelper is very difficult to get working correctly using the ISAM RTE.
Recently a technote was published (
https://www-01.ibm.com/support/docview.wss?uid=ibm10884160 ), but it only describes what is causing the problem, not the detailed steps to resolve or avoid it.

I tried various workarounds. One that is suggested a lot is to use an LDAP connection to (one of the) federated repositories.  While that may work for you, it is NOT what I want - I want to use ALL of the configured federated directories.

In the end, I found that a simple recommendation in setting up ISAM can avoid all these problems, so I'll start with that.

Recommendation


In the "all-in-one" deployment pattern, with Basic users in remote LDAP repositories, there is only one correct ISAM Runtime server setup: Local Policy Server/Remote LDAP Server (LMI / SSL).

When initially configuring the Runtime environment, select a local Policy server and a remote LDAP server pointing to your embedded ldap (instead of local Policy Server/local LDAP server).
Configure the connection to your local embedded LDAP using:
- hostname: your appliance's LMI hostname
- port: 636
- enable ssl

Select a keystore that contains the CA certificates of ALL the federated repositories you want to connect to and also the CA for your embedded ldap.

The embedded LDAP does not listen on port 636 on the localhost/loopback/127.0.0.1 interface !


You need to make two small modifications afterwards to ldap.conf:

[ldap]

host =

port = 389
-> port = 636

ssl-port = 636


And under [bind-credentials], you must add the bind-dn and bind-pwd for a user that can read your embedded LDAP (e.g. the root user), like this:

[bind-credentials]

bind-dn = cn=root,secAuthority=Default

bind-pwd = your-root-password-i-hope-it-is-not-passw0rd



See the technote for the explanation.

This setup will enable you to use :
- UserName / Password AAC Authentication mechanism using the ISAM RTE
- the UserLookupHelper connecting to ISAM RTE (and all the configured federated repositories with basic users)
- the SCIM prerequisites

Errors & what does not work


UserLookupHelper does not work


So your UserLookupHelper does not work in your setup with Basic Users - it cannot contact the ISAM RTE, or it cannot contact the federated repositories you defined.

You MUST use the ISAM RTE way of initialisation; that is the only way that will work for Federated Repositories with Basic users!
You MUST set the bind-dn and bind-pwd under [bind-credentials] in ldap.conf.

The following snippet shows how to correctly initialize the UserLookupHelper, in this case for a username submitted in a form.

var username = context.get(Scope.REQUEST, "urn:ibm:security:asf:request:parameter", "username");

var userLookupHelper = new UserLookupHelper();

// init() with no argument (equivalent to init(false)) uses the ISAM RTE.
// This is the only way Basic Users in Federated repositories will work:
// - bind-dn and bind-pwd must be configured in ldap.conf
// - the keystore used in ldap.conf must contain all certificates (federated repositories and RTE)
// - 636 must be set as the non-SSL port in ldap.conf
// See https://www-01.ibm.com/support/docview.wss?uid=ibm10884160
userLookupHelper.init();

reportmsg("Init ... [" + username + "]", false);

if (userLookupHelper.isReady()) {

  IDMappingExtUtils.traceString("User Lookup Helper is ready for ... [" + username + "]");

  var user = userLookupHelper.getUser(username);

  ...



But this likely does not work for you.

Enable tracing


When you start working on your InfoMap with the UserLookupHelper, you probably want to enable tracing.
Go to "Secure Access Control", "Global Settings", "Runtime Parameters", and open the "Runtime Tracing" tab.

Add this trace specification:

com.tivoli.pd.rgy.*=ALL:com.tivoli.am.fim.trustserver.sts.utilities.IDMappingExtUtils=ALL



You can then view the trace.log in "Monitor / Application Logs" (https://lmi/isam/application_logs),
under access_control/runtime/trace.log.

Errors indicating the problem


If you use Basic users in LDAP directories that require SSL, and you have set up your ISAM with local Policy Server/local LDAP, you will likely see errors like this:
The runtime trace shows that the connection uses :389:readwrite:5.

Other errors may look like this:

[7/31/19 11:48:41:962 CEST] 000000bd id=00000000 com.ibm.security.access.user.UserLookupHelper                I getUser com.tivoli.pd.rgy.exception.DomainNotFoundRgyException: HPDAA0266E   The Security Access Manager domain Default does not exist.


or another typical one:

com.tivoli.pd.rgy.exception.ServerDownRgyException: HPDAA0278E   None of the configured LDAP servers of the appropriate type for the operation can be contacted.



Manually configuring ISAM


So how do I fix this?
The root cause information in the technote is correct, but it does not fully explain how to fix the problem.
Changing the port is only part of the solution, because the embedded LDAP server listens on port 636 of the management interface of the appliance by default, not on localhost. The administrator can choose a port other than the default by modifying the advanced tuning parameter wga.rte.embedded.ldap.ssl.port. The advanced tuning parameters are accessed through Manage System Settings > Advanced Tuning Parameters. After you modify this advanced tuning parameter, you must restart the Security Access Manager runtime environment for the change to take effect.

Update ldap.conf


Go to "Secure Web Settings/Manage/Runtime Component" and navigate to "Manage/Configuration Files/ldap.conf".
Set the host to the IP address or hostname of your LMI; it cannot be set to 127.0.0.1.
Set the port to the same value as ssl-port (by default 636; to change it, see the previous paragraph).

[ldap]

host = 192.168.18.250

port = 636

ssl-port = 636


ssl-keyfile = yourkeystore.kdb



The keystore you set here must contain all the CA certificates for the federated LDAP repositories.  It's likely there is a value there already if you configured Federated Directories before.
In the default setup, you can export the server certificate from the keystore embedded_ldap_keys and import it as a CA certificate in "yourkeystore.kdb".

Update the bind-credentials with the username and password of a user that can access the embedded LDAP (e.g. cn=root):

[bind-credentials]

bind-dn = cn=root,secAuthority=Default

bind-pwd = your-root-password-i-hope-it-is-not-passw0rd



The ISAM runtime will restart after these changes.

Update pd.conf


Go to "Secure Web Settings/Manage/Runtime Component" and navigate to "Manage/Configuration Files/pd.conf"

Edit the pdrte stanza.


[pdrte]

user-reg-type = ldap

user-reg-server =

user-reg-host = 192.168.18.250

user-reg-hostport = 636



Update user-reg-server with your LMI's hostname.  Note that you may have to add a host record for your LMI!
Update user-reg-host with your LMI's public IP address (in this example, 192.168.18.250).
Update user-reg-hostport to match the SSL port (default 636).

In the ssl stanza, I enabled the TLS 1.1 and 1.2 versions.


[ssl]

tls-v10-enable = no

tls-v11-enable = yes

tls-v12-enable = yes



Update ivmgrd.conf



Go to "Secure Web Settings/Manage/Runtime Component" and navigate to "Manage/Configuration Files/ivmgrd.conf"

Enable ssl, and assign the keystore you used earlier.

ssl-enabled = true

ssl-keyfile = yourkeystore.kdb




Update your reverse proxies


The reverse proxies need to be instructed to connect over SSL.

So for every reverse proxy, open the "Secure Web Settings"/"Reverse proxy" page, select the reverse proxies one by one and open the configuration file (webseald.conf) via "Manage/Configuration/Edit Configuration File".

Find the [ldap] stanza and enable SSL:

[ldap]

ssl-enabled = yes

ssl-keyfile = yourkeystore.kdb

ssl-keyfile-dn =




The keystore you select here should contain the CA certificates for your embedded LDAP and the federated repositories.
Leave ssl-keyfile-dn empty, unless you're using mutual TLS.

Restart the reverse proxy after these changes, and repeat for all your reverse proxies.

If the reverse proxy fails to start, look in the msg__webseald-.log file (via Monitor/Reverse proxy log files) for errors indicating connection problems, for instance these:

2019-08-01-14:43:48.521+02:00I----- 0x16B480C9 webseald ERROR rgy ira ira_handle.c 1489 0x7f95100a6840 -- HPDRG0201E   Error code 0x51 was received from the LDAP server. Error text: "Can't contact LDAP server".

206       2019-08-01-14:43:48.525+02:00I----- 0x1354A0C0 webseald WARNING ivc general azn_maint.cpp 1136 0x7f94f1b8a700 -- HPDCO0192W   LDAP server 192.168.18.250:636 has failed.



Possible causes are missing SSL certificates, or errors in IP addresses and hostnames in your configuration.

Look for messages like this:
javax.net.ssl.SSLHandshakeException: com.ibm.jsse2.util.h: No trusted certificate found
javax.naming.CommunicationException: simple bind failed: 192.168.18.250:636 [Root exception is javax.net.ssl.SSLHandshakeException: com.ibm.jsse2.util.h: No trusted certificate

Ansible configuration


I've published an Ansible playbook that performs all these tasks for you: https://github.com/Bozzie4/isam_sample_playbooks
It requires that you have set up Ansible to work with ISAM, with the roles and the modules found here:
- https://github.com/IBM-Security/ibmsecurity
- https://github.com/IBM-Security/isam-ansible-roles

Congratulations, now you should have a working environment


You should now have a working environment:
- you can use a standard pkmslogin to webseal with a basic user
- the UserLookupHelper works correctly (init(false) or init()) and can locate basic users
- the Username/Password mechanism can be configured
- the SCIM setup can use the ISAM RTE

A remark on Point of Contact configuration


This setup is applicable if you have your ISAM appliance set to "Access Manager Credential" or "Access Manager Username and extended attributes".  It is not applicable for the External Users model (since you probably won't have basic users configured).

A remark on LMI configuration


I noticed that if I set the LMI to only use TLSv1.2, the UserLookupHelper no longer works (or rather, the ISAM RTE connection fails on an LMI REST call).  So I re-enabled all TLS protocols for the LMI to resolve this.
I still need to get to the bottom of this, though.




Alternative solution that will only work if your federated repositories do not use SSL


If your federated repositories do not use SSL (which will never happen in real life, I hope), you can get away with a simpler change.  However, I do not recommend this.

So keep the runtime configured with local Policy Server and local LDAP.

Open ldap.conf, and remove the ssl-keyfile parameter completely.  This will disable all SSL usage for your embedded LDAP and for the federated repositories.

Restart everything.

Download the Brave browser and earn BAT tokens

Tom Bosmans  27 April 2019 21:54:04
The Brave browser promises better privacy, and to directly pay you for the ads you're looking at.
The payout is in BAT (Basic Attention Tokens).

I think this looks promising .

Download the browser here :



ISAM with VirtualBox and Vagrant for development

Tom Bosmans  15 March 2019 13:36:28

How to run ISAM on VirtualBox (and how to run it using Vagrant)



The goal of this post is to get IBM Security Access Manager running on VirtualBox ( https://www.virtualbox.org/ ) on my local machine.  This will allow me to test the Ansible playbooks I'm preparing locally before committing them to the Git repository.
As a small addition, I use Vagrant to quickly set up a new clean instance.  Vagrant does not really bring a whole lot of value in this case, because ISAM is a locked-down appliance and Vagrant can't really do a lot.

Download the ISAM 9.0.6 ISO file


Get the ISAM 9.0.6 (or whatever the most recent version is) from Passport Advantage.

Setup Virtual Box



Under File/Preferences/Network , create a new NAT network.
You can just accept the defaults.

Create new Virtual Machine



Create a new virtual machine.

- Configure two NICs, both with the Intel E1000 adapter.
Connect the first NIC to the NAT network you prepared.
Connect the second NIC to the Host-Only network.

- Configure storage: create a new SCSI controller with the LSI Logic adapter.
Create a new disk with a size of at least 8 GB (100 GB is recommended).

- Connect the CD/DVD drive to the ISAM ISO file you downloaded earlier.

- Assign 2048 MB of memory.

- Disable audio.

Advanced configuration for the Virtual Machine



To run the ISAM appliance on VirtualBox, we need to trick it into thinking it's running on VMware.

Open a command prompt and navigate to the Virtual Box installation folder.
Run the "list vms" command to get a list of your virtual machines.

C:\Program Files\Oracle\VirtualBox> VBoxManage.exe list vms

"isam905" {1c548e84-f7cd-4283-a36a-71843757d1af}


Run the following command to "fake" VMware.  Use the output of the previous command for the VM info.
This is the "magic" command that makes everything work:

VBoxManage setextradata "isam905" "VBoxInternal/Devices/pcbios/0/Config/DmiBIOSVendor"  "VMware Virtual Platform"


These commands configure port forwarding for initial configuration:

VBoxManage modifyvm "isam905" --natpf1 "guestssh,tcp,,2222,,22"

VBoxManage modifyvm "isam905" --natpf1 "lmi,tcp,,4443,,443"
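With the NAT rules in place, you can start the appliance headless from the same prompt (VM name as in the earlier output; the --type headless switch is optional):

```
VBoxManage startvm "isam905" --type headless
```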



Configure ISAM


Start the VM and configure it.
After initially booting from the DVD and installing the image, you must disconnect the DVD pointing to the ISO file.

Then reboot the machine, and it should boot to the console.


Access the LMI


You can now access the LMI on https://127.0.0.1:4443/core/login

The console is available by ssh on port 2222:

ssh -p 2222 admin@127.0.0.1



Next steps


I can now use Ansible to configure the ISAM VM, specifically:
- perform the first steps, activation, etc.
- configure the second interface on the Host-Only adapter, so ISAM is accessible on a "normal" address as well (without port forwarding).

Another thing I'm working on is using Vagrant to rapidly start up a temporary ISAM appliance.


Sources


https://www.vagrantup.com/docs/virtualbox/boxes.html
https://idmdepot.com/How_To/Running_ISIM_or_ISIG_VA_on_VirtualBox.html
https://www.virtualbox.org/

Using Atom as text editor - let’s say I’m not convinced

Tom Bosmans  27 February 2019 13:56:54
I've been working with Atom (https://www.atom.io) for a few days, because it has integrated Git/GitHub support and is available cross-platform, but boy ... I hate it.

It does not do any of the basic stuff I expect an editor to do:

- why is searching so hard?
- when I open a file again in the tree, it OPENS THE FILE AGAIN instead of going to the tab where the file is already open.  Other (free) editors at least give a warning if you try to do that.
- because of that (?), it overwrites my changes from time to time, seemingly at random
- sloooow

Also, the integration with Git/GitHub seems random at best.  I regularly need to restart the editor because I can't access the Git/GitHub functions.

I hate it ...  I even prefer vi .

There are things that I like as well, but I have not found anything yet that would make me recommend Atom over anything else ....

Logout everywhere for OIDC/OAuth2 on ISAM

Tom Bosmans  22 January 2019 12:00:40

Single sign on


We have an environment where multiple websites are configured to use OIDC authentication (authorization code flow) against an IBM ISAM acting as the IdP (Identity Provider).
All these websites expect different scopes in their tokens (e.g. access tokens and ID tokens).

Of course, the user can also use multiple devices (browsers) to access the sites.

The IdP can thus hand out a number of different tokens for a single user.

We also created a custom "Remember Me" function that relies on an OAuth access token stored in a cookie (more on that some other time).

So a user effectively has single sign-on between all the websites now: every time a new access token is requested (e.g. with a different scope), the user is redirected to the IdP.  But because of the "Remember Me" cookie, the user is logged on automatically to the IdP, which then hands out the new access token.

The challenge now is to implement a logout function that not only invalidates the current access token in use for that particular website, but also all other active tokens (access tokens, refresh tokens, ...).

Infomap


We want to terminate ALL active tokens - from all different applications (client_id), but also from all logged-in devices: logout everywhere.

To remove all tokens for a particular (logged-in) user, there is a method in the API:

OAuthMappingExtUtils.deleteAllTokensForUser(username);


That would do what we need.  Note that this removes the OAuth tokens that exist for the user across ALL devices the user is using.

WebSeal's Single-signoff-uri



Our initial idea was that we could use WebSEAL's single-signoff-uri parameter.  This is intended as a mechanism to log the user out of any backend applications when the WebSEAL session is terminated.
There are four different mechanisms that can terminate a WebSEAL session:

User request by accessing pkmslogout.
Session timeout.
EAI session termination command.
Session terminate command from the pdadmin tool.

When the WebSEAL session terminates, it sends (GET) requests to the URLs you configure here, including cookies and headers from the incoming request.
By default (after you've configured the OIDC and API Connect features), there's already an entry for OAuth:

single-signoff-uri = /mga/sps/oauth/oauth20/logout

So adding a single-signoff-uri that points to our custom infomap would make it trigger when the WebSEAL session terminates...
The problem with this configuration in our scenario is that the logout mechanism would then trigger every time you are logged out of the IdP.  We only want to trigger it when the user clicks "Logout"!

https://www.ibm.com/support/knowledgecenter/SSPREK_9.0.6/com.ibm.isam.doc/wrp_config/concept/con_single_signoff_overvw.html



Logout everywhere



Infomap setup


The infomap can obviously also be called directly by the end user.

By using the API endpoint, we can furthermore do the calls in an Ajax call (JSON returned).  Note that this requires a CORS configuration in most cases.

So assuming my ISAM that acts as the IdP is on https://idp.tombosmans.eu , the call to the infomap could look like this.  Note the difference in the URI (apiauthsvc vs. authsvc).
API call (returns JSON): https://idp.tombosmans.eu/mga/sps/apiauthsvc?PolicyId=urn:ibm:security:authentication:asf:federatedlogout
Browser: https://idp.tombosmans.eu/mga/sps/authsvc?PolicyId=urn:ibm:security:authentication:asf:federatedlogout
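The two calls differ only in one path segment. As a small illustration in plain Python (the helper name is my own, not an ISAM API), building both URL variants from a hostname and policy id:

```python
# Hypothetical helper (not part of ISAM): build the browser and API
# variants of an ISAM authentication-service URL for a given policy.
def authsvc_urls(host, policy_id):
    base = "https://" + host + "/mga/sps"
    query = "?PolicyId=" + policy_id
    return {
        # apiauthsvc returns JSON (remember the Accept: application/json header)
        "api": base + "/apiauthsvc" + query,
        # authsvc returns the HTML template
        "browser": base + "/authsvc" + query,
    }

print(authsvc_urls("idp.tombosmans.eu",
                   "urn:ibm:security:authentication:asf:federatedlogout")["api"])
```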







For the API call to succeed, you must add an Accept header to your GET call with the value "application/json" (see here, for instance: https://philipnye.com/2015/10/02/isam-lmi-rest-api-http-405-method-not-allowed-error/ ).

You need to have the necessary template files in place; you need

/authsvc/authenticator/federatedlogout/logout.html
/authsvc/authenticator/federatedlogout/logout.json


The HTML file is returned for the browser call, the JSON file when you use the API call.
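The content of these templates is up to you; the infomap code fills the @AUTHENTICATED@, @USERNAME@ and @TOKENS@ macros. A hypothetical minimal logout.json could look like this:

```
{
  "authenticated": "@AUTHENTICATED@",
  "username": "@USERNAME@",
  "tokens": "@TOKENS@"
}
```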

I've used this excellent blog entry from my colleague Shane Weeden as an example and a source for code: https://www.ibm.com/blogs/sweeden/implementing-isam-credential-viewer-infomap/

Infomap code


Now this code does two things: it calls the deleteAllTokensForUser function to remove all tokens for the logged-in user, and then it performs a logout by using a REST call to the pdadmin utility, to run the command

server task terminate all_sessions


This code depends on a Server Connection named "ISAM Runtime REST", which should point to your LMI, with the userid and password of an admin user.
The details to run the pdadmin command are hardcoded in this example.

What is missing from this code is the possibility to pass a redirect URL.  This would make sense in the "Browser" version, less so for the API version of the infomap.

The infomap code :


importClass(Packages.com.tivoli.am.fim.trustserver.sts.utilities.IDMappingExtUtils);
importClass(Packages.com.tivoli.am.fim.trustserver.sts.utilities.OAuthMappingExtUtils);
importClass(Packages.com.tivoli.am.fim.base64.BASE64Utility);
importClass(Packages.com.ibm.security.access.server_connections.ServerConnectionFactory);
importClass(Packages.com.ibm.security.access.httpclient.HttpClient);
importClass(Packages.com.ibm.security.access.httpclient.HttpResponse);
importClass(Packages.com.ibm.security.access.httpclient.Headers);

/**
* Utility to get the username from the the Infomap context
*/
function getInfomapUsername() {
            // get username from already authenticated user
            var result = context.get(Scope.REQUEST,
                                              "urn:ibm:security:asf:request:token:attribute", "username");
            IDMappingExtUtils.traceString("[FEDLOGOUT] : username from existing token: " + result);

            // if not there, try getting from session (e.g. UsernamePassword module)
            if (result == null) {
                             result = context.get(Scope.SESSION,
                                                               "urn:ibm:security:asf:response:token:attributes", "username");
                             IDMappingExtUtils.traceString("[FEDLOGOUT] : username from session: " + result);

            }
            return result;
}

/**
* Utility to html encode a string
*/
function htmlEncode(str) {
    return String(str).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}
/*
*    perform pdadmin session deletion
*/
function deleteAllSessions(username) {
            IDMappingExtUtils.traceString("[FEDLOGOUT] Entering deleteAllSessions()");
                             
            // Get the Web Server Connection Details for ISAM Runtime
            servername = "ISAM Runtime REST";
            var ws1 = ServerConnectionFactory.getWebConnectionByName(servername);
            if (ws1 == null) {
                             IDMappingExtUtils.traceString("[FEDLOGOUT] Could not find the server data for " + servername);
                             next = "abort";
                             return;
            }
           
            var restURL = ws1.getUrl()+"/isam/pdadmin";
            var adminUser = ws1.getUser();
            var adminPwd = ws1.getPasswd();
            IDMappingExtUtils.traceString("[FEDLOGOUT] url : "+restURL+" adminUser  " + adminUser + " password: "+ adminPwd );
            var headers = new Headers();
            headers.addHeader("Content-Type", "application/json");
            headers.addHeader("Accept", "application/json");
           
var respbody = '{"admin_id":"sec_master", "admin_pwd":"<sec_master_password>", "commands":"server task <websealinstance> terminate all_sessions ' + username + '"}';
            var hr = HttpClient.httpPost(restURL, headers, respbody, null, adminUser, adminPwd, null, null);
            if(hr != null) {
                             var rc = hr.getCode();
                             IDMappingExtUtils.traceString("[FEDLOGOUT] got a response code: " + rc);
                             var body = hr.getBody();
                             if (rc == 200) {
                                              if (body != null) {
                                                              IDMappingExtUtils.traceString("[FEDLOGOUT] got a response body: " + body);
                                              } else {
                                                               IDMappingExtUtils.traceString("[FEDLOGOUT] body of response from pdadmin is null?");
                                              }
                             } else {
                                              IDMappingExtUtils.traceString("[FEDLOGOUT] HTTP response code from pdadmin is " + rc);
                             }
            } else {
                             IDMappingExtUtils.traceString("[FEDLOGOUT] HTTP post to pdadmin failed");
            }
            return true;                
}

// infomap that returns a page indicating if you are authenticated and who you are
var username = getInfomapUsername();
// We must have a logged-in session, otherwise we cannot log out.
// This is a bit annoying, and I'm not sure how to avoid it.
if ( username != null ) {
var tokens_for_user = OAuthMappingExtUtils.getAllTokensForUser(username);
            var tokens = [];
            for (i = 0; i < tokens_for_user.length; i++) {
                var a = tokens_for_user[i];
                tokens.push(a.getClientId() + "---" + a.getId() + "---" + a.getType());
                IDMappingExtUtils.traceString("[FEDLOGOUT] : Active Token ID " + a.getId() + ", ClientID " + a.getClientId() + " getType " + a.getType());
            }
           
            OAuthMappingExtUtils.deleteAllTokensForUser(username);
            IDMappingExtUtils.traceString("[FEDLOGOUT] : Deleted all tokens for : " + username );
           
            /*
            * Now invalidate the session for the idp
            */
            IDMappingExtUtils.traceString("[FEDLOGOUT] : Trying to logout : " + username );
            deleteAllSessions(username);
           
            /*
            * Now return the page ... the page should actually never be shown ...  logout.json should exist as well
            */
            page.setValue("/authsvc/authenticator/federatedlogout/logout.html");
            macros.put("@AUTHENTICATED@", ''+(username != null));
            macros.put("@USERNAME@", (username != null ? htmlEncode(username) : ""));
            macros.put("@TOKENS@", tokens.join(";"));
            // we never actually perform a login with this infomap
            success.setValue(false);
} else {
            IDMappingExtUtils.traceString("[FEDLOGOUT] : Anonymous session; no logout performed");
            page.setValue("/authsvc/authenticator/federatedlogout/needloginfirst.html");
            success.setValue(false);
}


OAuth and OpenID Connect provider configuration for reverse proxy instances - reuse acl option

Tom Bosmans  10 October 2018 10:12:04
I have multiple reverse proxy instances configured on an appliance, and recently added a new one.

I performed the "Oauth and OpenID Connect Provider configuration", and did not select the "Reuse ACL" or "Reuse Certificates" options.

After that, I noticed that my OpenID authentication no longer worked correctly on the other instances.
The reason was that the ACLs for the objects in /mga/sps/oauth/oauth20/ disappeared.


So if you have already configured other instances on your appliance for "Oauth and OpenID Connect", always enable "Reuse ACL"!


What actually happens is easy to follow in the autocfg__oauth.log file in the Reverse Proxy log files:

If "Reuse ACL" is not checked, the configuration will first detach the ACLs from all objects, delete the ACLs and then add them again, but only for the reverse proxy where you run the configuration.
So you lose all configuration that uses the isam_oauth_* ACLs in the other instances.

Moral of the story: always enable "Reuse ACL" when running the "Oauth and OpenID Connect Provider configuration".
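To check which protected objects still have one of these ACLs attached, pdadmin's acl commands can help; a sketch, assuming one of the default isam_oauth_* ACL names:

```
pdadmin sec_master> acl find isam_oauth_unauth
```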

Add a header X-LConn-UserId to all requests in Connections

Tom Bosmans  8 August 2018 11:51:30
By adding this generic property to LotusConnections-config.xml, all requests will contain a header X-LConn-UserId that contains the logged-in user.

Depending on your configuration, this most likely is the email address of the logged in user.


<!-- To display email of logged in user in IHS: -->
<genericProperty name="com.ibm.lconn.core.web.request.HttpRequestFilter.AddRemoteUser">true</genericProperty>



You can then add this header value to the log configuration in Apache/IHS, so your logs include the user. This is pretty helpful for tracing problems.
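For example, in httpd.conf you could extend the access log like this (a sketch, not my exact format string; %{X-LConn-UserId}i is the standard Apache syntax for logging a request header):

```
# Append the Connections user header to the usual combined format.
# %{X-LConn-UserId}i logs the value of the X-LConn-UserId request header.
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{X-LConn-UserId}i\"" combined_user
CustomLog logs/access_log combined_user
```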

Please note that this is not officially supported in any way!

IBM Cloud Private installation - Filebeat problem (CentOS7)

Tom Bosmans  13 July 2018 13:17:00
After installation of IBM Cloud Private 2.1.0.3, I noticed I did not see any log information in the ICP UI.

While checking the logs, I saw that filebeat did not start correctly (or rather, completely failed to start).


(on the master node:)
[root@icpboot ~]# journalctl -xelf



Jul 13 11:54:17 icpboot.tombosmans.eu hyperkube[1825]: E0713 11:54:17.168699    1825 kuberuntime_manager.go:733] container start failed: RunContainerError: failed to start container "ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.

Jul 13 11:54:17 icpboot.tombosmans.eu hyperkube[1825]: E0713 11:54:17.168734    1825 pod_workers.go:186] Error syncing pod 2406dc66-85e4-11e8-8135-000c299e5111 ("logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"), skipping: failed to "StartContainer" for "filebeat" with RunContainerError: "failed to start container \"ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18\": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount."

Jul 13 11:54:28 icpboot.tombosmans.eu hyperkube[1825]: I0713 11:54:28.083562    1825 kuberuntime_manager.go:513] Container {Name:filebeat Image:ibmcom/filebeat:5.5.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:NODE_HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/usr/share/filebeat/filebeat.yml SubPath:filebeat.yml MountPropagation:} {Name:data ReadOnly:false MountPath:/usr/share/filebeat/data SubPath: MountPropagation:} {Name:container-log ReadOnly:true MountPath:/var/log/containers SubPath: MountPropagation:} {Name:pod-log ReadOnly:true MountPath:/var/log/pods SubPath: MountPropagation:} {Name:docker-log ReadOnly:true MountPath:/var/lib/docker/containers/ SubPath: MountPropagation:} {Name:default-token-kbdxx ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.

Jul 13 11:54:28 icpboot.tombosmans.eu hyperkube[1825]: I0713 11:54:28.083787    1825 kuberuntime_manager.go:757] checking backoff for container "filebeat" in pod "logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"




This means that I don't see any logs in the IBM Cloud Private UI.

Digging a bit further, I saw that the logging-elk-filebeat-ds indeed was not started.

[root@icpboot ~]# kubectl get ds --namespace=kube-system
NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE

auth-apikeys                         1         1         1         1            1           role=master     20h

auth-idp                             1         1         1         1            1           role=master     20h

auth-pap                             1         1         1         1            1           role=master     20h

auth-pdp                             1         1         1         1            1           role=master     20h

calico-node                          3         3         3         3            3                    20h

catalog-ui                           1         1         1         1            1           role=master     20h

icp-management-ingress               1         1         1         1            1           role=master     20h

kube-dns                             1         1         1         1            1           master=true     20h

logging-elk-filebeat-ds              3         3         2         3            0                    20h

metering-reader                      3         3         2         3            2                    20h

monitoring-prometheus-nodeexporter   3         3         3         3            3                    20h

nginx-ingress-controller             1         1         1         1            1           proxy=true      20h

platform-api                         1         1         1         1            1           master=true     20h

platform-deploy                      1         1         1         1            1           master=true     20h

platform-ui                          1         1         1         1            1           master=true     20h

rescheduler                          1         1         1         1            1           master=true     20h

service-catalog-apiserver            1         1         1         1            1           role=master     20h

unified-router                       1         1         1         1            1           master=true     20h


The cause is of course right there in the log file, but the fix was not obvious:

Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.



On each node, execute these commands:

findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

mount --make-shared /var/lib/docker/containers



The result looks something like this:


[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

TARGET                     PROPAGATION

/var/lib/docker/containers private

[root@icpworker1 ~]# mount --make-shared /var/lib/docker/containers

[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

TARGET                     PROPAGATION

/var/lib/docker/containers shared



After that, the logging-elk-filebeat DaemonSet is available:

[root@icpboot ~]# kubectl get ds --namespace=kube-system

NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE

...

logging-elk-filebeat-ds              3         3         2         3            2                    20h

..

I don't know if this is a bug, or if it is caused by me trying to run ICP on CentOS 7 (which is not a supported platform).
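One caveat: mount --make-shared changes the propagation flag at runtime only, so it does not survive a reboot. One way to make it stick is a small systemd unit on each node. This is a sketch under my own assumptions (unit name and path are hypothetical), not something from the ICP documentation:

```
# /etc/systemd/system/docker-containers-shared.service (hypothetical name)
[Unit]
Description=Make /var/lib/docker/containers a shared mount
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/mount --make-shared /var/lib/docker/containers

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable docker-containers-shared so the mount is re-marked shared on every boot.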

Synology TFTP server for PXE Boot

Tom Bosmans  9 July 2018 10:54:08
Something I've been meaning to do for a while now is to set up my Synology NAS as a PXE boot server.

I want to be able to easily install new operating systems on any new hardware I get, but more importantly, to easily install multiple virtual machines on my primary workstation without too much hassle.
This will involve setting up the Synology as a TFTP server, supplying the correct PXE files (images and configuration), and configuring my DHCP server.

The official documentation from Synology is woefully inadequate to get PXE up and running; it is missing a number of vital steps.
https://www.synology.com/en-us/knowledgebase/DSM/tutorial/General/How_to_implement_PXE_with_Synology_NAS

Luckily, there are other sources on the internet that fill in the gaps.

Configure TFTP on Synology


Prepare a shared folder/volume. In my case, I have a shared volume named "shared", where I created a folder "PXEBOOT".

Go to Main Menu > Control Panel > File Services and select the TFTP tab.
Tick Enable TFTP service.
Image:Synology TFTP server for PXE Boot

Enter the folder you prepared earlier.

Now you need to add the folder structure for TFTP to be able to show a boot menu, and prepare the images.

Check out this excellent guide, which contains a link to a zip file with a configuration that contains CentOS and Ubuntu images.

https://synology.wordpress.com/2017/10/05/boot-from-any-iso-on-your-network-using-pxe/


(it actually uses this GitHub repository: https://github.com/paulmaunders/TFTP-PXE-Boot-Server )

The GitHub repository is not quite up to date, but it's easy to add newer images; I've added Ubuntu 18.04 and CentOS 7.5. It is configured to use the HTTP netinstall, so you do need an internet connection.

Unzip it and put it in your shared folder on your Synology, so it looks like this:
Image:Synology TFTP server for PXE Boot
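Adding a newer image boils down to dropping the kernel and initrd under the TFTP root and adding a stanza to pxelinux.cfg/default. A sketch for a CentOS 7.5 HTTP netinstall entry (the paths and mirror URL here are examples, adjust them to your own layout):

```
LABEL centos75
  MENU LABEL CentOS 7.5 x86_64 (netinstall)
  KERNEL images/centos7.5/vmlinuz
  APPEND initrd=images/centos7.5/initrd.img inst.repo=http://mirror.centos.org/centos/7/os/x86_64/
```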


Verify TFTP


I'm using Red Hat 7.5 and wanted to quickly test TFTP. Unfortunately, the tftp client is not part of my configured repositories, so I just downloaded a client from http://rpmfind.net .


tftp

(to) <ip address of synology>

tftp> verbose

Verbose mode on.

tftp> get pxelinux.0

getting from <ip address of synology>:pxelinux.0 to pxelinux.0 [netascii]

Received 26579 bytes in 0.2 seconds [1063767 bit/s]

tftp>



This indicates that pxelinux.0, whose location needs to be configured in the DHCP server, is in the root of the TFTP server and is accessible by everyone.
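As an alternative quick check without installing a tftp client, curl can usually fetch files over TFTP as well (assuming your curl build includes TFTP support):

```
# download pxelinux.0 from the NAS over TFTP and write it locally
curl -o pxelinux.0 tftp://<ip address of synology>/pxelinux.0
```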

Configure the Ubiquiti EdgeRouter's DHCP


A very complete guide to do this can be found here. You need to use the Ubiquiti CLI to do it.
https://blog.laslabs.com/2013/05/pxe-booting-with-ubiquiti-edgerouter/

I've configured the following (the result of show service dhcp-server):

shared-network-name LAN2 {

    authoritative disable

    subnet 192.168.1.0/24 {

        bootfile-name /pxelinux.0

        bootfile-server <ip address of synology>

        default-router 192.168.1.0

        dns-server 192.168.1.1

        dns-server 8.8.8.8

        domain-name gwbasics.be

        ....

        subnet-parameters "filename &quot;/pxelinux.0&quot;;"

    }

}

use-dnsmasq disable



Note that you must use the &quot; syntax!

The following commands were used:

configure

edit service dhcp-server shared-network-name LAN2 subnet 192.168.1.0/24

set subnet-parameters "filename &quot;/pxelinux.0&quot;;"

set bootfile-name /pxelinux.0

set bootfile-server <ip address of synology>

commit

save

show service dhcp-server



Issuing a new "set" command does not overwrite a value; instead it adds a new line. You need to remove the entries that are not correct (if you end up with multiple lines):

show service dhcp-server

configure

edit service dhcp-server shared-network-name LAN2 subnet 192.168.1.0/24

delete subnet-parameters "filename &quot;/shared/PXEBOOT/pxelinux.0&quot;;"

commit

save

show service dhcp-server



If you have duplicate or wrong lines, you will see PXE errors on the boot screen.

VMWare workstation


Lastly, I need to configure VMware Workstation.
Two important things here:
- I added a bridged network adapter, to obtain a DHCP address from my home network. This adapter will receive the PXE boot instructions.
- I increased the memory size from 1024 MB to 2048 MB, because the CentOS 7.5 installer complained about "no space left on device" on the /tmp drive during installation (which effectively means, in memory).
Image:Synology TFTP server for PXE Boot

When booting, I now get the configured menu options from my PXE boot server:
Image:Synology TFTP server for PXE Boot

Then step through the installation options as you would for a normal manual installation. Of course, it's also possible to prepare automated installations, but that is another topic.