Tips & tricks for installing and running ICS products

DKIM deployed on my mail servers

Tom Bosmans  16 June 2017 10:40:42
After moving my server to a new physical box (and new IP Address), some of the more difficult large mail systems started rejecting mail from my domains.
Google was OK with my mails, although not ecstatic, but Yahoo and especially Microsoft apparently considered my systems dangerous.

I googled around, found a lot of crap information, but resolved the issue and improved my mail setup in the end.  It turned out that I should be using TLS (for secure SMTP) and DKIM (DomainKeys Identified Mail - http://dkim.org/).


The bad stuff


- There are a lot of links advising you to use Return Path (among others here: https://blog.returnpath.com/google-is-failing-your-perfectly-good-dkim-key-and-why-thats-a-good-thing/)
Don't invest time here.  It's a service for spammers, I would say (they call it "email marketing").  You need to register and will likely never get a response anyway.
- Domino does not support DKIM natively, and likely never will (http://www-01.ibm.com/support/docview.wss?uid=swg21515751)
- Microsoft (with all their domains - hotmail.com, outlook.com, ...) are very tricky
- Yahoo is difficult as well, but should you care?  You shouldn't be using Yahoo mail anyway these days.
- MailScanner breaks DKIM, so it requires changes in the configuration, the problem being that it modifies messages after they have been signed (see the MailScanner.conf sketch after this list).
It's a little tricky to find out all the details, because most test tools report that "dkim is working", while Google complains ....
- Postfix works with Letsencrypt certificates, but again, the information on the internet is sometimes incorrect, or incomplete at best.
- DKIM relies on DNS configuration, and that can be tricky (depending on your DNS provider or your DNS server)
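
If you run MailScanner, the DKIM-related settings essentially come down to how and where it adds its headers.  A minimal sketch of the relevant MailScanner.conf lines (option names taken from MailScanner's DKIM notes; verify them against your version):

# MailScanner.conf: keep signed messages intact for DKIM
Multiple Headers = add
Place New Headers At Top Of Message = yes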

The good information


- Postfix supports DKIM through the opendkim milter add-on (http://www.opendkim.org/)
- testing DKIM can be done using a tool like this: http://www.appmaildev.com/en/dkim
Very handy, fast, easy, no registration.
- the proof is in the pudding: sending mail to gmail.com (Google) actually shows the DKIM information nice and tidy.
- Letsencrypt and Postfix work together nicely once the setup is done correctly.


Let's get to work


So what I had to do, in a nutshell :


  • Change my Domino configuration, so outgoing mail is also sent through Postfix.  This is as simple as setting the "Relay host for messages leaving the local internet domain".
    This is necessary to allow opendkim to sign the outgoing mails as well.
    Relay host for messages leaving the local internet domain: mail.gwbasics.be



  • Configure Postfix - add the milter for dkim (and configure TLS with LetsEncrypt) in main.cf.  A sketch of this follows below this list.
  • Configure MailScanner - apply the settings in its configuration file that mention dkim.
  • Configure opendkim (generate the keys)
  • Configure DNS (create a new TXT record for the key you created.  In general, you can use the selector "default", so you require a record for default._domainkey.)
  • Verify your key using opendkim-testkey
  • Test the DNS entry (eg. using http://dkimcore.org/tools/keycheck.html, or using host: eg. host -t txt default._domainkey.gwbasics.be)
  • Test the mails you send out (use http://www.appmaildev.com/en/dkim), or use gmail to check.
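
To make the Postfix and opendkim part concrete, here is a minimal sketch of these steps (the selector "default", the milter port 8891 and the key directory are assumptions; adapt them to your setup):

# 1. generate a key pair for the domain (the target directory must exist)
opendkim-genkey -b 2048 -d gwbasics.be -s default -D /etc/opendkim/keys/gwbasics.be

# 2. hook opendkim into Postfix as a milter (postconf -e edits main.cf)
postconf -e "milter_default_action = accept"
postconf -e "smtpd_milters = inet:localhost:8891"
postconf -e "non_smtpd_milters = inet:localhost:8891"

# 3. TLS with the Letsencrypt certificates
postconf -e "smtpd_tls_cert_file = /etc/letsencrypt/live/gwbasics.be/fullchain.pem"
postconf -e "smtpd_tls_key_file = /etc/letsencrypt/live/gwbasics.be/privkey.pem"
postconf -e "smtpd_tls_security_level = may"

# 4. publish the content of default.txt as the default._domainkey TXT record, then verify
opendkim-testkey -d gwbasics.be -s default -vvv
host -t txt default._domainkey.gwbasics.be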



Use Gmail to check your settings


Gmail actually has the possibility, by default, to verify various settings.
Next to the "to me", click the dropdown button.
If you have set up DKIM correctly, it will show a "signed-by" line.  You can see TLS information here as well.
Image:DKIM deployed on my mail servers
Additionally, you can also go to "Show original"
Image:DKIM deployed on my mail servers
This will show the source of the  email, and has a summary header that contains important information.
As you can see, it shows that DKIM has PASS.  If it says something else here, you need to go back to the drawing board.
Image:DKIM deployed on my mail servers

This can contain a lot more options, btw.  If you use DMARC as well, it will show up here too.  For my domain, you see the SPF option.


Microsoft's domains



Once you're certain DNS is set up correctly and you're not an open relay, you can easily contact Microsoft directly to unblock your mail server(s) here:

https://support.microsoft.com/en-us/getsupport?oaspworkflow=start_1.0.0.0&wfname=capsub&productkey=edfsmsbl3&locale=en-us&ccsid=636329734561893294

This immediately works for hotmail.com, outlook.com and the other Microsoft domains.  In my case, it took only a few hours.

Server outage (disk failure)

Tom Bosmans  6 June 2017 10:08:04
Yesterday morning, I noticed that my server was running slow.  I couldn't see any processes hogging resources, though.

Instead of really looking into the problem, I decided to reboot the machine.  That was a mistake.  When the server did not come back online, I realised that there was likely a problem with the disks.
I have a dedicated server at http://www.hetzner.de, and this is really the first time I've run into problems.  I can really recommend this hosting provider.

The server has a software RAID with 2 disks, running CentOS.
I assumed that mdadm was trying to recover, but I had no way of knowing, since the machine did not come back online.
At this point, I got very scared - I feared loss of data.

Fortunately, the guys at Hetzner supply a self-service console to the machine (you start a rescue system).

I could log in using that mechanism, and was then able to mount the filesystems in the RAID.  It quickly became clear that indeed, one disk had died.
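
For reference, these are the kind of commands you can run from the rescue system to inspect the array (the md and sd device names are assumptions; adapt them to your layout):

cat /proc/mdstat                      # overview of the md arrays and their state
mdadm --detail /dev/md0               # shows which member disk is failed or removed
mdadm --examine /dev/sda1 /dev/sdb1   # per-disk RAID superblock information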

Now I could do 2 things:
- request a disk replacement.  This was going to take a while, and during that time I wouldn't have a redundant disk.  And chances are high that when one disk fails, the other will fail soon as well.
- move my installation to a new server.  I know that between ordering a new server and having the OS installed on it ready for use, only around 1 hour passes (did I mention these guys are great?  Note that this is physical hardware, not some cloud service!)

I decided to go with option 2.

This consists of copying the data from the old server to the new one (this took a long time), reinstalling the software, reapplying the configuration for my mail servers and other stuff, and then adjusting the Domino configuration (changing the IP addresses).

In the end, it took me 10 hours in all to get the new server up and running, including copying the data.  Now I just have to decommission the old server, and I'm done :-)



Kubernetes and DNS

Tom Bosmans  28 April 2017 11:00:25
Kubernetes apparently doesn't use a hosts file, but instead relies on DNS.  So when setting up Orient Me (for Connections 6) in a test environment, you may run into problems.
https://github.com/kubernetes/dns/issues/55

Then you may want to look back to this older blog entry :
Setup DNS Masq

You're welcome :-)

To keep with the docker mechanism, look at this to make your life easier: https://github.com/jpillora/docker-dnsmasq

Note that this is obviously not the only solution; you can also follow these instructions: http://www.robertoboccadoro.com/2017/04/13/orientme-in-a-test-environment-how-to-make-it-work/
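
For completeness, the dnsmasq part essentially boils down to one line; a minimal sketch, assuming your Connections host is connections.example.com on 10.0.0.10:

# /etc/dnsmasq.conf
address=/connections.example.com/10.0.0.10

Point the Docker daemon (or the nameserver in the pods' resolv.conf) at the dnsmasq address, and the lookups resolve again.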





Security Reverse Proxy with Connections - forcing all traffic through the interservice url

Tom Bosmans  20 April 2017 15:17:50
In a recent project, we are using IBM Datapower as a security reverse proxy to handle authentication and coarse-grained authorization for Connections 5.5.

The approach we follow is similar to what I have described here:

https://www-10.lotus.com/ldd/lcwiki.nsf/dx/IBM_Connections_v4.5_and_WebSeal_integration_col_alternative_approach

In short: you want to avoid having the interservice traffic pass through the reverse proxy (Datapower or WebSEAL, that is not relevant at this point).

The picture below shows that you want to have 2 paths of access:

- for users, API access etc.: through your reverse proxy

- for the internal, backend connections: through your HTTP server

Image:Security Reverse Proxy with Connections - forcing all traffic through the interservice url

To do that, you need to make sure you have different values for the href/ssl_href and the interservice values in LotusConnections-config.xml.

<sloc:href>
             <sloc:hrefPathPrefix>/wikis</sloc:hrefPathPrefix>
             <sloc:static href="https://connections.company.com" ssl_href="https://connections.company.com"/>
             <sloc:interService href="https://ihs.internal.com"/>
     </sloc:href>


You can see a lot of things here:

- you need to do this for ALL services defined in LotusConnections-config.xml

- all URLs are https

- the interservice URL is different from the static one.

- the interservice URL points to the HTTP server (or a load balancer pointing to the HTTP servers)

- the static URLs point to your reverse proxy (or the load balancer pointing to your reverse proxy)

- bonus points: put the interservice URL in a different domain from the static URLs, to avoid potential XSS problems.

Some additional remarks :

- do not use the dynamicHost section, even though it is generally recommended when using reverse proxies

- set the forceConfidentialCommunications flag to "true".  ALWAYS.  You don't want to use http in these times; you always want to use https.

Now for the problem: although this should instruct Connections to use the internal http server for interservice requests, in reality the backend still makes calls to the static URLs.


That is very annoying: if you don't allow access from your back-end servers to the reverse proxy, everything will fail.  If you do not allow unauthenticated access through Datapower (or your reverse proxy), widgets don't render.

This becomes apparent for widgets in the following manner:

[3/27/17 19:07:21:459 CEST] 00000149 IWidgetMetada W   com.ibm.cre.iwidget.widget.parser.InvalidWidgetDefinitionException: com.ibm.cre.iwidget.widget.parser.InvalidWidgetDefinitionException: org.xml.sax.SAXParseException: The element type "meta" must be terminated by the matching end-tag "".
[3/27/17 19:07:21:535 CEST] 00000149 IWidgetMetada W   com.ibm.cre.iwidget.widget.parser.InvalidWidgetDefinitionException: com.ibm.cre.iwidget.widget.parser.InvalidWidgetDefinitionException: org.xml.sax.SAXParseException: The element type "meta" must be terminated by the matching end-tag "".
[3/27/17 19:07:21:845 CEST] 000001c6 AbstractSpecF W org.apache.shindig.gadgets.AbstractSpecFactory SpecUpdater An error occurred when updating https://connections.company.com/connections/resources/web/com.ibm.social.ee/ConnectionsEE.xml. Status code 500 was returned. Exception: org.apache.shindig.common.xml.XmlException: The element type "meta" must be terminated by the matching end-tag "". At: (1,415). A cached version is being used instead.
[3/27/17 19:07:21:847 CEST] 000001c7 AbstractSpecF W org.apache.shindig.gadgets.AbstractSpecFactory SpecUpdater An error occurred when updating https://connections.company.com/connections/resources/web/lconn.calendar/CalendarGadget.xml. Status code 500 was returned. Exception: org.apache.shindig.common.xml.XmlException: The element type "meta" must be terminated by the matching end-tag "". At: (1,415). A cached version is being used instead.

This means that the back-end application (the WidgetContainer in this case) tries to retrieve the widget configuration XML file through the reverse proxy.  Because the reverse proxy does not allow unauthenticated access, it presents an (html) login form.  That is interpreted as "invalid xml".

By following the instructions here to allow unauthenticated URIs through your reverse proxy, this part can be resolved: https://www.ibm.com/support/knowledgecenter/SSYGQH_5.5.0/admin/secure/t_secure_with_tam.html

If you don't allow access from your backend to your reverse proxy, you're still out of luck, though.  And that previous part does nothing for any custom widgets or third party widgets you may have deployed (eg. Kudos Boards).

Core Connections

Luckily, there is an undocumented solution for this, which you may get through support.

You need to edit opensocial-config.xml , in your Deployment Manager's LotusConnections-config directory.

After this line:


<external-only-access-exceptions>none</external-only-access-exceptions>

Add these lines:


     <proxyInterServiceRewrite name="opensocial" />
     <proxyInterServiceRewrite name="webresources" />
     <proxyInterServiceRewrite name="activities" />
     <proxyInterServiceRewrite name="bookmarklet" />
     <proxyInterServiceRewrite name="blogs" />
     <proxyInterServiceRewrite name="communities" />
     <proxyInterServiceRewrite name="dogear" />
     <proxyInterServiceRewrite name="files" />
     <proxyInterServiceRewrite name="forums" />
     <proxyInterServiceRewrite name="homepage" />
     <proxyInterServiceRewrite name="mediaGallery" />
     <proxyInterServiceRewrite name="microblogging" />
     <proxyInterServiceRewrite name="search" />
     <proxyInterServiceRewrite name="mobile" />
     <proxyInterServiceRewrite name="moderation" />
     <proxyInterServiceRewrite name="news" />
     <proxyInterServiceRewrite name="profiles" />
     <proxyInterServiceRewrite name="sand" />
     <proxyInterServiceRewrite name="thumbnail" />
     <proxyInterServiceRewrite name="wikis" />


Sync your nodes, and restart everything.  All traffic for the standard widgets (eg. on the Homepage or in Communities) will now go through the interservice URL and render correctly.
Note that this is not valid for CCM or Mobile; these have separate settings in library-config.xml and mobile-config.xml respectively, where you can select "use interservice url".
For Docs, the configuration is done in the json configuration files.  I'm not going into those details here.

Custom or third party Widgets Connections

So great, the core Connections widgets are now rendering, and all traffic for them is now going through the interservice URL you defined.

There is however the small problem of custom widgets.  These are not handled by the rules in opensocial-config.xml.
We use Kudos Boards (http://www.kudosbadges.com/subpages/Kudos%20Boards?OpenDocument), but this next section is valid for all (well, most) custom or third party widgets that you need to behave properly.

There are 2 more files to edit:


  • service-location.vsd: to allow you to add the new service to LotusConnections-config.xml
  • LotusConnections-config.xml


On top of that, you need widget-config.xml, and you still need to edit opensocial-config.xml.

widget-config.xml


Find the custom widget's configuration in widget-config.xml.  In this example, we're looking at Boards (this is a sample, not an actual widget definition!).
You need the defId value here, so in our case: Boards.

<widgetDef defId="Boards" description="Kudos Boards widget" primaryWidget="true" modes="fullpage edit search" themes="wpthemeNarrow wpthemeWide wpthemeBanner" url="/kudosboards/boards.xml" showInPalette="true" loginRequired="true"/>

service-location.vsd


In service-location.vsd, add a line for every custom/third party widget.  You need to use the defId name from widget-config.xml from the previous step.
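
A sketch of such a line, assuming the schema lists the allowed service names as xsd enumerations and that the defId is Boards (as in the widget-config.xml sample above):

<xsd:enumeration value="Boards"/>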



The values here need to match the widget definition in widget-config.xml, the service reference in LotusConnections-config.xml, and the proxyInterServiceRewrite name in opensocial-config.xml.

LotusConnections-config.xml


In LotusConnections-config.xml, you then add a serviceReference entry for every custom (or third party) widget.  To be able to do that, you must have changed service-location.vsd first.



<sloc:serviceReference enabled="true" serviceName="Boards" ssl_enabled="true">
     <sloc:href>
             <sloc:hrefPathPrefix>/kudosboards</sloc:hrefPathPrefix>
             <sloc:static href="https://connections.company.com" ssl_href="https://connections.company.com"/>
             <sloc:interService href="https://ihs.internal.com"/>
     </sloc:href>
</sloc:serviceReference>

opensocial-config.xml


Finally, in opensocial-config.xml, add the rule for your custom widget, after the rules you added earlier.


<external-only-access-exceptions>none</external-only-access-exceptions>
     <proxyInterServiceRewrite name="opensocial" />
     ...
     <proxyInterServiceRewrite name="thumbnail" />
     <proxyInterServiceRewrite name="wikis" />
     <proxyInterServiceRewrite name="Boards" />

That is it.  Now sync your nodes and restart everything.  Your custom widget will now work correctly.



If all else fails ...


Now, there is a simpler solution to all of this.  You can use the /etc/hosts file to simply map the public URL (connections.company.com) to the IP address of the internal HTTP server.
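A sketch of such an entry, assuming the internal HTTP server sits on 10.0.0.10:

# /etc/hosts on each back-end node
10.0.0.10    connections.company.com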
I don't particularly like this solution, though.  It is difficult to maintain, and it probably breaks your company's standards and rules.

CCM installation problems with Connections 5.5 - Connections Admin password changes

Tom Bosmans  5 October 2016 14:20:13
During the installation of CCM with Connections 5.5 using an Oracle RAC cluster, my colleagues ran into a number of problems and got the environment into a completely broken state.

The core problem is that FileNet does not support the modern syntax for JDBC datasources.  This technote explains what to do:

http://www-01.ibm.com/support/docview.wss?uid=swg21978233

That is simple enough.

However, my colleagues continued on a detour, where they also changed the ConnectionsAdmin password.  That created a bunch of problems of its own.
It turns out that the Connections 5.5 documentation is incomplete on where to change the occurrences of the Connections admin user and/or password.

The CCM installer mostly uses the correct source for the username/password (the variables you enter in the installation wizard or the silent response file).
But the script that configures the GCD datasources, for some reason, uses a DIFFERENT source for the administrator credentials.

It goes back to the connectionsAdminPassword variable that's stored in the cfg.py file in your Connections directory (eg. /data/Connections/cfg.py).

So when you change the password for the Connections administrator, don't forget to update it in the cfg.py file as well, before running the CCM installation.

"connectionsAdminPassword": "{xor}xxxxxxxxxxx",


In the end, this took me over half a day to resolve, partly because the guys working on it had enabled every trace they could find (so I also ran into an out-of-diskspace exception), but mostly because the installation process for CCM is slow.


Sametime business cards from Connections

Tom Bosmans  28 September 2016 10:27:37
 
After deploying Connections 5.0 CR4, the business card and photo integration in Sametime chat (the web browser version) suddenly stopped working.
The problem is more pronounced in Internet Explorer.
The photo doesn't load, nor does the business card information (the phone number, email address).  See the screenshot below:
Image:Sametime business cards from Connections

In the traces in the browser, it is clear that there's an HTTP 403 error (forbidden) on this call:

https://-SERVER-/profiles/json/profile.do?email=-EMAIL-&lang=en_us&callback=stproxy.uiControl.connections.businesscard.onBusinessCard&dojo.preventCache=1463032209022




It wasn't very high on my priority list, but I've now found out what the problem is (thanks to IBM Support).

Apparently, in CR4, something changed in the profiles-config.xml configuration: allowJsonpJavelin enabled was changed from true to false.

So the solution is simple: change this back from false to true, sync the nodes, and restart the server(s) that contain your Profiles application.  A sketch of the checkout/checkin steps follows the snippet below.


 <!--
                      Optional security setting for Profiles javelin card.  This setting is to disallow JSONP security.
                      Older 3rd party software may will not work with this setting unless they include a reverse proxy.
                      All of the Connections application will work with JSONP disabled.
              -->
              <allowJsonpJavelin enabled="true"/>
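
For completeness, a sketch of checking profiles-config.xml out and back in with wsadmin (the Dmgr profile path and the working directory are assumptions):

cd /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin
./wsadmin.sh -lang jython
wsadmin>execfile("profilesAdmin.py")
wsadmin>ProfilesConfigService.checkOutConfig("/tmp", AdminControl.getCell())
(edit /tmp/profiles-config.xml and set <allowJsonpJavelin enabled="true"/>)
wsadmin>ProfilesConfigService.checkInConfig()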

Connections and file indexing

Tom Bosmans  16 June 2016 15:36:39
The Stellent code that handles extracting content from the files in Connections relies on an old version of libstdc++.so.

It relies on
libstdc++.so.5

While for instance on SLES 12, this is replaced with
libstdc++.so.6


It may not be immediately apparent that this is the problem.

If you use ExportTest.sh, you get a Java error, which can throw you off.  So use the "exporter" binary directly when in doubt.
Check this older blog post, which is about the same problem (but then in Sametime): Installation of Sametime Meeting Server

It also explains how to verify your search indexing settings.
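
A quick way to confirm this is to check which C++ runtime the exporter binary links against (the path below is an assumption; it depends on where your Connections shared data lives):

ldd /opt/IBM/Connections/data/shared/search/stellent/dcs/oiexport/exporter | grep libstdc++

If this prints "not found" next to libstdc++.so.5, install your distribution's compat libstdc++ package (or otherwise provide the old library).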


How to determine if a WebSphere server is running in bash?

Tom Bosmans  8 June 2016 11:06:37
When creating a simple bash script (actually, scripts for installing Connections using Puppet, but that's a different story) that needed to check whether the Deployment Manager is running, I ran into the following problem:
The serverStatus.sh script always returns "0" as status code, even if the server is stopped.  So it's a bit useless in bash scripting, where normally I'd rely on the return code of a script to determine whether it ran successfully: "$?" equals 0 when the script ran successfully, and is not equal to 0 when something went wrong.
But like I said already, serverStatus.sh ALWAYS returns "0".

There are more problems with the serverStatus.sh command; for one, it takes a (relatively) long time to execute.

/opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/serverStatus.sh
echo $?
0


Anyway, another way to check whether the dmgr is running is by using "grep".  Note that there are differences in the options between the different flavors of Unix and Linux, but that is not the scope of this post.  I'm also not discussing the best practice that you should look for process IDs, and not rely on text ...
What is important is that you use the "wide" option (so you see the full java command that is used to start the JVM).
On SLES:

ps -ef | grep dmgr

On Red Hat:

ps -wwwef | grep dmgr


Now there's an annoying problem: this will return (if the dmgr is running) 2 processes: the dmgr process itself, but also the grep command.
There's a trick for that - I found it here: http://www.ibm.com/developerworks/library/l-keyc3/#code10

Basically, to get around that, make the grep expression a regex.  This avoids the grep command itself showing up:

ps -ef | grep "[d]mgr"


This will only show the process we're interested in.

So now we have a nice, correct return code we can use to determine whether the Dmgr (or any other WebSphere server, for that matter) is running.
If the Dmgr is running:

ps -ef | grep "[d]mgr"
echo $?
0


and if it's not running:

ps -ef | grep "[d]mgr"
echo $?
1
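
Putting this together in a small bash helper (a minimal sketch, using the same grep pattern as above):

#!/bin/bash
# returns 0 (success) when a dmgr JVM is running, 1 otherwise
is_dmgr_running() {
    ps -ef | grep "[d]mgr" > /dev/null
}

if is_dmgr_running; then
    echo "Deployment Manager is running"
else
    echo "Deployment Manager is stopped"
fi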




Let’s encrypt certificates for Domino Part 2 - renew certificates (UPDATED)

Tom Bosmans  27 May 2016 14:04:18

Let's encrypt your Domino http server - Part 2


In the meantime, since my post, things have been changing in the Let's Encrypt world - they're officially out of beta (https://letsencrypt.org/2016/04/12/leaving-beta-new-sponsors.html) and there are name changes (pending).  Tooling has evolved as well (https://www.eff.org/deeplinks/2016/05/announcing-certbot-new-tls-robot)

That is however not the scope of this update to my original post here Let's encrypt tls certificate in Domino

There's a slightly annoying thing about the certificates delivered by Let's Encrypt: they are only valid for 3 months, so you have to renew them every 3 months.
I've done that manually so far, but obviously automating this is the better option.  Wouldn't it be nice if this all went automatically :-)

So, based on the previous post, this is a follow-up on how to renew your certificates in Domino.

Update your tooling


Update your Letsencrypt client tooling to Certbot.

Get it here and follow the instructions for your OS.
https://certbot.eff.org/

Certbot continues to use the configuration directories created earlier, so no worries there.

Check if your certificates require updating


The certbot-auto tool checks your certificates and decides whether it's necessary to renew them.
Note that to run certbot just to check whether your certificates require renewal, it's not necessary to stop the http server!

./certbot-auto renew

Check the output to see whether a renewal is necessary.  There's no need to continue if the certificates don't require updating.

Update your certificates


To actually renew your certificates, the http server on Domino needs to be stopped.  You can do that using the pre-hook option:

./certbot-auto renew --pre-hook "su - notes -c \"/opt/ibm/domino/bin/server -c 'tell http quit'\""


This updates the certificates in your store.
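
As an alternative, certbot also supports a --post-hook, so http can be restarted automatically after a successful renewal; a sketch, using the same Domino commands as elsewhere in this post:

./certbot-auto renew --pre-hook "su - notes -c \"/opt/ibm/domino/bin/server -c 'tell http quit'\"" --post-hook "su - notes -c \"/opt/ibm/domino/bin/server -c 'load http'\""

Keep in mind that the keyring import below still has to happen before the renewed certificate is actually served, so with this variant you end up restarting http twice.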

Copy the certificates


Now copy the renewed certificates to a temporary location (because the kyrtool cannot process the certificates directly from the /etc/letsencrypt/live/... location).

cp /etc/letsencrypt/live/<yourdomain>/cert.pem /tmp/cert.pem
cp /etc/letsencrypt/live/<yourdomain>/fullchain.pem /tmp/fullchain.pem
cp /etc/letsencrypt/live/<yourdomain>/privkey.pem /tmp/privkey.pem


Update the certificates in the Domino keyring


Run the kyrtool command against the Keyring you configured in the Domino SSL configuration.  Check the previous post about this : Let's encrypt tls certificate in Domino


su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import roots -k /local/notesdata/keyring2.kyr -i /tmp/fullchain.pem"
su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import keys -k  /local/notesdata/keyring2.kyr -i /tmp/privkey.pem"
su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import certs -k /local/notesdata/keyring2.kyr -i /tmp/cert.pem"


Remove the certificate files in /tmp afterwards!

Restart the http server


Restart the http server in Domino, and the updated certificate is now available in the browser.


su - notes -c "/opt/ibm/domino/bin/server -c 'load http'"


Note that if you use this method on a Domino server running extensions (eg. a Traveler server, a Sametime server) you likely have to restart more tasks than just http.

Here's a sample script putting it all together.


This script is a sample you can use and adapt.

The guys at Certbot recommend checking for renewals 2 times a day, to cater for certificate redraws by Certbot.
I've scheduled it once every 2 days, using crontab (see the sample entry below the script link).

This script relies on the certbot-auto tool failing when it actually renews the certificate from Letsencrypt.  In that case, it will stop the running HTTP server and copy the certificates so the kyrtool can import them.

certbot-renew-public.sh
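
The crontab entry for that looks like this (a sketch; the script location and log file are assumptions):

# crontab -e (as root): run the renewal script every 2 days at 03:30
30 3 */2 * * /root/certbot-renew-public.sh >> /var/log/certbot-renew.log 2>&1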

Update on chaining problems


After renewing my certificates, I ran into problems: specifically, the mobile browser (on Android) did not accept the certificate anymore.  Verifying the SSL configuration using SSL Labs, I was surprised to only receive a "B".
After googling a little bit, I came to the conclusion that there are changes in the certificate chain, and that these apparently are not reflected in the fullchain.pem.



Verification


Using SSL Labs (https://www.ssllabs.com/ssltest/), which provides "deep analysis of the configuration of any SSL web server on the public Internet", you can see if you reach at least an "A".
Image:Let’s encrypt certificates for Domino Part 2 - renew certificates (UPDATED)

If you don't have an A, most likely you've run into the chaining problem I encountered: the chain issues need to be "none", and the X3 certificate needs to be sent by the server as well.

Image:Let’s encrypt certificates for Domino Part 2 - renew certificates (UPDATED)

In my case (when there were chaining issues), SSL Labs complained about a missing "Let's Encrypt Authority X3".  I verified the stores, and they still used the X1 authority.

Manually updating trusts


So... it appears to me that the fullchain.pem does not contain the correct (new) chain, or that the kyrtool does not import it correctly.
Anyway, I've manually updated the trusts, by downloading the new X3 and X4 certificates from here: https://letsencrypt.org/certificates/.

Download the X3 and X4 certificates to a temporary location on your server (eg. /tmp):

lets-encrypt-x3-cross-signed.pem
lets-encrypt-x4-cross-signed.pem


Import these into your keyring :

su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import roots -k /local/notesdata/keyring2.kyr -i /tmp/lets-encrypt-x3-cross-signed.pem"
su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import roots -k /local/notesdata/keyring2.kyr -i /tmp/lets-encrypt-x4-cross-signed.pem"


You can check the certificates and the trusted roots :

su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini show certs -k /local/notesdata/keyring2.kyr"
su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini show roots -k /local/notesdata/keyring2.kyr"


Restart the HTTP server, and everything is OK.

I expect that this problem will be resolved at some point in the future, so that these manual steps are no longer necessary, but for now, this works.



Remove unwanted Basic Authentication prompts in Connections

Tom Bosmans  20 April 2016 21:53:09
There is no point in allowing Basic Authentication in environments where users don't have passwords, for instance when there's single sign-on set up with SPNEGO.
Connections (like most IBM products) then relies on LTPA tokens for authentication.

Also, in these enterprise environments, you would generally secure the applications (meaning: use the J2EE roles in all Connections applications to disallow anonymous access; everywhere it says "Everyone", use "All authenticated users" instead).

The challenge in this scenario is that SOME URIs will not redirect you to the standard login form (forms-based authentication), but rather pop up the annoying Basic Authentication prompt.
A sample URL that will prompt for BA is for instance https://yourconnectionsserver.com/profiles/atom/profileService.do ... Another is accessing a profile picture (only if you secured the Profiles application).

When your users use a normal browser to access Connections, you'll hardly ever see a BA prompt, because users would generally not access these URLs as the initial call.  So they'll already be authenticated by other means.

But it's a different story when you use API access to Connections - for instance to integrate Connections content into an intranet (that does not offer LTPA SSO).
In that case, it's pretty difficult to avoid Basic Authentication prompts popping up, because it's not very easy to catch them in the browser.

So we went with a drastic solution: disable Basic Authentication prompts completely.
This does not disable Basic Authentication, it just disables the prompt.  In our specific case, this again enables the javascript code to catch the 401 HTTP response correctly and start an authentication sequence.
The solution does not change the header when it's a Connections server making the connection.  Connections, by itself, also uses some Basic Authentication for its interservice requests, and I don't want to mess with these.  I don't think this is really necessary (since, again, Basic Authentication is not disabled), but still.

The solution is based on what's written here :
https://coderwall.com/p/ca-2bq/modify-the-www-authenticate-response-header-in-apache

However, that did not exactly work for me: I had to remove the "always" keyword, otherwise the Header edit would not work.

###################
#
#       remove basic auth headers in the response except for the Connections nodes (incl. ccm, fileviewer, etc)
SetEnvIf Remote_Addr ".*" REMOVEBASICAUTH
SetEnvIf Remote_Addr "^10\.|^127\." !REMOVEBASICAUTH
Header edit WWW-Authenticate ^Basic NGCBasic env=REMOVEBASICAUTH
# end


So what these 3 lines do:
1. SetEnvIf Remote_Addr ".*" REMOVEBASICAUTH

Set the environment variable for all connections.

2. SetEnvIf Remote_Addr "^10\.|^127\." !REMOVEBASICAUTH

Remove the environment variable again, based on a regular expression (in this case, all IP addresses starting with 10., and localhost).  This regex should match the IP addresses of all the Connections servers (and FileNet, and FileViewer, and Cognos and ...): any server that would make calls to Connections.

3. Header edit WWW-Authenticate ^Basic NGCBasic env=REMOVEBASICAUTH

This line edits the WWW-Authenticate header if it starts with "Basic", and changes it to something else.  The result is that you no longer get a prompt in the browser.
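
You can verify the result from an external machine with curl; the 401 response should now carry the rewritten token instead of plain "Basic" (the URL is the sample one from above):

curl -sI https://yourconnectionsserver.com/profiles/atom/profileService.do | grep -i WWW-Authenticate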






Remark on Mobile access and Desktop plugin


The Connections Mobile applications do use Basic Authentication for authenticating, as does the Desktop plugin.
However, neither one relies on the Basic Authentication prompt working correctly.