Tips & tricks for installing and running ICS products

Kubernetes and DNS

Tom Bosmans  28 April 2017 11:00:25
Kubernetes apparently doesn't use the hosts file, but instead relies on DNS.  So when setting up Orient Me (for Connections 6) on a test environment, you may run into name resolution problems.

Then you may want to look back at this older blog entry :
Setup DNS Masq

You're welcome :-)
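For a test environment, a dnsmasq entry along these lines is usually enough to make the cluster resolve your Connections hostname (the hostname and IP address below are hypothetical examples, not values from this post) :

```ini
# /etc/dnsmasq.conf -- answer DNS queries for the Connections front end
# with the internal address, for every client that uses this dnsmasq
address=/connections.example.com/10.0.0.21
```

dnsmasq also serves entries from the host's /etc/hosts file over DNS, so adding the name there and restarting dnsmasq works as well.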

To keep with the Docker mechanism, look at this to make your life easier :

Note that this is obviously not the only solution ; you can also follow these instructions :

Security Reverse Proxy with Connections - forcing all traffic through the interservice url

Tom Bosmans  20 April 2017 15:17:50
In a recent project, we are using IBM Datapower as a security reverse proxy to handle authentication and coarse grained authorization for Connections 5.5 .

The approach we follow is similar to what I have described here :

In short : you want to avoid having the interservice traffic pass through the reverse proxy (whether it's Datapower or WebSEAL is not relevant at this point).

The picture below shows that you want to have 2 paths of access :

- for users, API access etc. : through your reverse proxy

- the internal, backend connections : through your HTTP server

Image:Security Reverse Proxy with Connections - forcing all traffic through the interservice url

To do that, you need to make sure you have different values for the static href/ssl_href and the interService values in LotusConnections-config.xml.

             <sloc:static href="" ssl_href=""/>
             <sloc:interService href=""/>

There are a few things to note here :

- you need to do this for ALL services defined in LotusConnections-config.xml

- all URLs are https

- the interservice URL is different from the static one

- the interservice URL points to the HTTP server (or a load balancer pointing to the HTTP servers)

- the static URLs point to your reverse proxy (or the load balancer pointing to your reverse proxy)

- bonus points : put the interservice URL in a different domain from the static URLs, to avoid potential XSS problems.

Some additional remarks :

- do not use the dynamicHost section ; leaving it disabled is generally recommended when using reverse proxies

- set the forceConfidentialCommunications flag to "true" .  ALWAYS.  You don't want to use http in these times, you always want to use https.
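To make the points above concrete, a filled-in serviceReference might look like this (the hostnames are hypothetical examples ; note the static URLs point at the reverse proxy, while the interservice URL points at the internal HTTP server, in a different domain) :

```xml
<sloc:serviceReference enabled="true" serviceName="communities" ssl_enabled="true">
    <!-- user-facing traffic: goes through the reverse proxy -->
    <sloc:static href="https://connections.example.com" ssl_href="https://connections.example.com"/>
    <!-- backend/interservice traffic: goes straight to the internal HTTP server -->
    <sloc:interService href="https://connections-internal.example.org"/>
</sloc:serviceReference>
```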

Now for the problem : although this should instruct Connections to use the internal HTTP server for interservice requests, in reality the backend still makes calls to the static URLs.

That is very annoying : if you don't allow access from your back-end servers to the reverse proxy, everything will fail.  If you do not allow unauthenticated access through Datapower (or your reverse proxy), widgets don't render.

This becomes apparent for Widgets in the following manner :

[3/27/17 19:07:21:459 CEST] 00000149 IWidgetMetada W org.xml.sax.SAXParseException: The element type "meta" must be terminated by the matching end-tag "".
[3/27/17 19:07:21:535 CEST] 00000149 IWidgetMetada W org.xml.sax.SAXParseException: The element type "meta" must be terminated by the matching end-tag "".
[3/27/17 19:07:21:845 CEST] 000001c6 AbstractSpecF W org.apache.shindig.gadgets.AbstractSpecFactory SpecUpdater An error occurred when updating Status code 500 was returned. Exception: org.apache.shindig.common.xml.XmlException: The element type "meta" must be terminated by the matching end-tag "". At: (1,415). A cached version is being used instead.
[3/27/17 19:07:21:847 CEST] 000001c7 AbstractSpecF W org.apache.shindig.gadgets.AbstractSpecFactory SpecUpdater An error occurred when updating Status code 500 was returned. Exception: org.apache.shindig.common.xml.XmlException: The element type "meta" must be terminated by the matching end-tag "". At: (1,415). A cached version is being used instead.

This means that the back-end application (the WidgetContainer in this case) tries to retrieve the widget configuration XML file through the reverse proxy.  Because the reverse proxy does not allow unauthenticated access, it presents an (HTML) login form.  That is interpreted as "invalid xml".

This can be resolved by following the instructions here, to allow unauthenticated URIs through your reverse proxy.

If you don't allow access from your backend to your reverse proxy, you're still out of luck though.  And that previous part does nothing for any custom or third-party widgets you may have deployed (eg. Kudos Boards).

Core Connections

Luckily, there is an undocumented solution for this, which you may get through support.

You need to edit opensocial-config.xml , in your Deployment Manager's LotusConnections-config directory.

After this line :


Add these lines :

     <proxyInterServiceRewrite name="opensocial" />
     <proxyInterServiceRewrite name="webresources" />
     <proxyInterServiceRewrite name="activities" />
     <proxyInterServiceRewrite name="bookmarklet" />
     <proxyInterServiceRewrite name="blogs" />
     <proxyInterServiceRewrite name="communities" />
     <proxyInterServiceRewrite name="dogear" />
     <proxyInterServiceRewrite name="files" />
     <proxyInterServiceRewrite name="forums" />
     <proxyInterServiceRewrite name="homepage" />
     <proxyInterServiceRewrite name="mediaGallery" />
     <proxyInterServiceRewrite name="microblogging" />
     <proxyInterServiceRewrite name="search" />
     <proxyInterServiceRewrite name="mobile" />
     <proxyInterServiceRewrite name="moderation" />
     <proxyInterServiceRewrite name="news" />
     <proxyInterServiceRewrite name="profiles" />
     <proxyInterServiceRewrite name="sand" />
     <proxyInterServiceRewrite name="thumbnail" />
     <proxyInterServiceRewrite name="wikis" />

Sync your nodes, and restart everything.  All traffic for the standard widgets (eg. on the Homepage or in Communities) will now render correctly.
Note that this is not valid for CCM or Mobile ; these have separate settings in library-config.xml and mobile-config.xml respectively, where you can select to "use interservice url" .
For Docs, the configuration is done in the JSON configuration files.  I'm not going into those details here.

Custom or third party Widgets Connections

So great, the core Connections widgets are now rendering, and all traffic for them is now going through the interservice URL you defined.

There is however the small problem of custom widgets.  These are not handled by the rules in opensocial-config.xml .
We use Kudos Boards, but this next section is valid for all (or most) custom or third-party widgets you need to behave properly.

There are 2 more files to edit :

  • service-location.xsd: so that the new service name validates in LotusConnections-config.xml
  • LotusConnections-config.xml

And you need widget-config.xml, and you still need to edit opensocial-config.xml .


Find the custom widget's configuration in widget-config.xml.  In this example, we're looking at Boards (this is a sample, not an actual widget definition !).
You need the defId value here, so in our case, Boards.

<widgetDef defId="Boards" description="Kudos Boards widget" primaryWidget="true" modes="fullpage edit search" themes="wpthemeNarrow wpthemeWide wpthemeBanner" url="/kudosboards/boards.xml" showInPalette="true" loginRequired="true"/>


In service-location.xsd , add an entry for every custom/third-party widget.  You need to use the defId name from widget-config.xml in the previous step.

The values here need to match the Widget definition in widget-config.xml, the service reference in LCC.xml, and the proxyInterServiceRewrite name in opensocial-config.xml.


In LotusConnections-config.xml, you then add a serviceReference entry for every custom (or third-party) widget.  To be able to do that, you must have changed service-location.xsd first.

<sloc:serviceReference enabled="true" serviceName="Boards" ssl_enabled="true">
             <sloc:static href="" ssl_href=""/>
             <sloc:interService href=""/>
</sloc:serviceReference>


Finally, in opensocial-config.xml, add the rule for your custom widget, after the rules you added earlier.

     <proxyInterServiceRewrite name="opensocial" />
     <proxyInterServiceRewrite name="thumbnail" />
     <proxyInterServiceRewrite name="wikis" />
     <proxyInterServiceRewrite name="Boards" />

That is it.  Sync your nodes, and restart everything.  Your custom widget will now work correctly.

If all else fails ...

Now there is a simpler solution to all of this.  You can use your /etc/hosts file to simply map the public URL to the IP address of the internal HTTP server.
I don't particularly like this solution, though.  It is difficult to maintain, and it probably breaks your company's standards and rules.

CCM installation problems with Connections 5.5 - Connections Admin password changes

Tom Bosmans  5 October 2016 14:20:13
During installation of CCM with Connections 5.5 using an Oracle RAC cluster, my colleagues ran into a number of problems and got the environment into a completely broken state.

The core problem is that FileNet does not support the modern syntax for jdbc datasources.  This technote explains what to do.

That is simple enough .

However, my colleagues continued on a detour, where they also changed the ConnectionsAdmin password.  That created a bunch of problems of its own.
It turns out that the Connections 5.5 documentation is incomplete on where to change the occurrences of the Connections admin user and/or password.

The CCM installer mostly uses the correct source for the username / password (the variables you enter in the installation wizard or the silent response file).
But the script to configure the GCD datasources, for some reason, uses a DIFFERENT administrator user.

It goes back to look at the connectionsAdminPassword variable that's stored in the file, in your Connections directory (eg. /data/Connections/ )

So when you change the password for the Connections administrator, don't forget to update it in the file as well, before running the CCM installation.

"connectionsAdminPassword": "{xor}xxxxxxxxxxx",

In the end, this took me over half a day to resolve, partly because the guys working on it enabled all the traces they could find (so I also ran into an out-of-disk-space exception), but mostly because the installation process for CCM is slow.

Sametime business cards from Connections

Tom Bosmans  28 September 2016 10:27:37
After deploying Connections 5.0 CR4, the business card and photo integration in Sametime chat (the web browser version) suddenly stopped working.
The problem is more pronounced in Internet Explorer.
The photo doesn't load, nor does the business card information (the phone number, email address).  See the screenshot below :
Image:Sametime business cards from Connections

In the traces in the browser, it is clear that there's a HTTP 403 error (forbidden) on this call :


It wasn't very high on my priority list, but I've now found out what the problem is (thanks to IBM Support).

Apparently, in CR4, something changed in the profiles-config.xml configuration : the enabled attribute of allowJsonpJavelin is changed from true to false.

So the solution is simple : change this back from false to true, sync the nodes, and restart the server(s) that contain your Profiles application.

              <!--
                      Optional security setting for the Profiles javelin card.  This setting is to disallow JSONP security.
                      Older 3rd party software may not work with this setting unless they include a reverse proxy.
                      All of the Connections applications will work with JSONP disabled.
              -->
              <allowJsonpJavelin enabled="true"/>

Connections and file indexing

Tom Bosmans  16 June 2016 15:36:39
The Stellent code that handles extracting content from the files in Connections , relies on an old version of .

It relies on

While for instance on SLES 12, this is replaced with

It may not be immediately apparent that this is the problem.

If you use , you get a java error, which can throw you off.  So use the "exporter" directly when in doubt.
Check this older blog post, which is about the same problem (but in Sametime) : Installation of Sametime Meeting Server

It also explains how to verify your search indexing settings.

How to determine if a WebSphere server is running in bash ?

Tom Bosmans  8 June 2016 11:06:37
When creating a simple bash script (actually, scripts for installing Connections using Puppet, but that's a different story) that needed to check whether the Deployment Manager is running, I ran into the following problem :
The script always returns "0" as its status code, even if the server is stopped.  That makes it a bit useless in bash scripting, where normally I'd rely on a script's return code to determine if it ran successfully : "$?" equals 0 when the script ran successfully, and is not equal to "0" when something went wrong.
But like I said, it ALWAYS returns "0".

There are more problems with the command ; for one, it takes a (relatively) long time to execute.

echo $?

Anyway, another way to check whether the dmgr is running is by using "grep" .  Note that there are differences in the options between the different flavors of Unix and Linux, but that is not the scope of this post.  I'm also not discussing the best practice that you should look for process IDs, and not rely on text ...
What is important is that you use the "wide" option (so you see the full java command that is used to start the JVM).

ps -ef | grep dmgr

On Red Hat :

ps -wwwef | grep dmgr

Now there's an annoying problem : this will return (if the dmgr is running) 2 lines : the dmgr process, but also the grep command itself.
There's a trick for that - I found it here :

Basically, to get around that, make the grep expression a regex.  This avoids the grep command itself showing up :

ps -ef | grep "[d]mgr"

This will only show the process we're interested in.

So now we have a nice, correct return code we can use to determine whether the Dmgr (or any other WebSphere server, for that matter) is running.
If the Dmgr is running :

ps -ef | grep "[d]mgr"
echo $?
0

and if it's not running :

ps -ef | grep "[d]mgr"
echo $?
1
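Putting the pieces together, here's a small reusable function (the helper name is my own invention ; the "[d]mgr" grep trick is the one described above) :

```shell
#!/bin/sh
# is_running NAME -- exit 0 if a process matching NAME is running.
# Builds the "[d]mgr"-style pattern automatically, so the grep
# process itself never matches its own command line.
is_running() {
  first=$(printf '%s' "$1" | cut -c1)
  rest=$(printf '%s' "$1" | cut -c2-)
  ps -ef | grep -q "[$first]$rest"
}

if is_running dmgr; then
  echo "dmgr is running"
else
  echo "dmgr is stopped"
fi
```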

Let’s encrypt certificates for Domino Part 2 - renew certificates (UPDATED)

Tom Bosmans  27 May 2016 14:04:18

Let's encrypt your Domino http server - Part 2

In the meantime, since my original post, things have been changing in the Let's Encrypt world - they're officially out of beta, there are name changes pending, and the tooling has evolved as well.

That is, however, not the scope of this update to my original post here : Let's encrypt TLS certificate in Domino

There's a slightly annoying thing about the certificates delivered by Let's Encrypt : they are only valid for 3 months, so you have to renew them every 3 months.
I've done that manually so far, but obviously automating this is the better option.  Wouldn't it be nice if this all went automatically :-)

So based on the previous post, this is a follow-up, on how to renew your certificates in Domino .

Update your tooling

Update your Letsencrypt client tooling to Certbot.

Get it here and follow the instructions for your OS.

Certbot continues to use the configuration directories created earlier, so no worries there.

Check if your certificates require updating

The certbot-auto tool checks your certificates and decides whether it's necessary to renew them.
Note that if you run certbot just to check whether your certificates require renewal, it's not necessary to stop the http server !

./certbot-auto renew

Check the output, to see if renewal is necessary.  There's no need to continue if the certificates don't require updating.

Update your certificates

To actually renew your certificates, the http server on Domino needs to be stopped.  You can do that using the pre-hook option :

./certbot-auto renew --pre-hook "su - notes -c \"/opt/ibm/domino/bin/server -c 'tell http quit'\""

This updates the certificates in your store.

Copy the certificates

Now copy these renewed certificates to a temporary location (because the kyrtool cannot process the certificates directly from the /etc/letsencrypt/live/... location).

cp /etc/letsencrypt/live//cert.pem /tmp/cert.pem
cp /etc/letsencrypt/live//fullchain.pem /tmp/fullchain.pem
cp /etc/letsencrypt/live/ /tmp/privkey.pem

Update the certificates in the Domino keyring

Run the kyrtool command against the Keyring you configured in the Domino SSL configuration.  Check the previous post about this : Let's encrypt tls certificate in Domino

su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import roots -k /local/notesdata/keyring2.kyr -i /tmp/fullchain.pem"
su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import keys -k  /local/notesdata/keyring2.kyr -i /tmp/privkey.pem"
su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import certs -k /local/notesdata/keyring2.kyr -i /tmp/cert.pem"

Remove the certificate files in /tmp afterwards !

Restart the http server

Restart the http server in Domino, and the updated certificate is now available in the browser.

su - notes -c "/opt/ibm/domino/bin/server -c 'load http'"

Note that if you use this method on a Domino server running extensions (eg. a Traveler server, a Sametime server) you likely have to restart more tasks than just http.

Here's a sample script putting it all together.

This script is a sample you can use and adapt.

The guys at Certbot recommend checking for renewal 2 times a day, to cater for certificate revocations on the Let's Encrypt side.
I've scheduled it once every 2 days, using crontab.

This script relies on the exit behaviour of the certbot-auto tool when it actually updates the certificate from Let's Encrypt.  In that case, it stops the running HTTP server and copies the certificates so the kyrtool can import them.
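For reference, the renewal flow described above could be sketched like this.  This is my own sample only : the domain, the paths and the dry-run default are assumptions, so adapt it before use.

```shell
#!/bin/sh
# Sketch of the Let's Encrypt renewal flow for Domino described above.
# RUN defaults to "echo" so the script only prints what it would do;
# set RUN= (empty) to actually execute the commands.
RUN=${RUN:-echo}
DOMAIN=example.com                       # hypothetical domain
LIVE=/etc/letsencrypt/live/$DOMAIN
DATA=/local/notesdata
KYR="/opt/ibm/domino/bin/tools/startup kyrtool =$DATA/notes.ini"

# 1. Renew; the pre-hook stops the Domino HTTP task first
$RUN ./certbot-auto renew --pre-hook \
  "su - notes -c \"/opt/ibm/domino/bin/server -c 'tell http quit'\""

# 2. Copy the certificates where kyrtool can read them
for f in cert.pem fullchain.pem privkey.pem; do
  $RUN cp "$LIVE/$f" "/tmp/$f"
done

# 3. Import chain, private key and certificate into the keyring
$RUN su - notes -c "$KYR import roots -k $DATA/keyring2.kyr -i /tmp/fullchain.pem"
$RUN su - notes -c "$KYR import keys -k $DATA/keyring2.kyr -i /tmp/privkey.pem"
$RUN su - notes -c "$KYR import certs -k $DATA/keyring2.kyr -i /tmp/cert.pem"

# 4. Clean up and restart HTTP
$RUN rm -f /tmp/cert.pem /tmp/fullchain.pem /tmp/privkey.pem
$RUN su - notes -c "/opt/ibm/domino/bin/server -c 'load http'"
```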

Update on chaining problems

After renewing my certificates, I ran into problems - specifically, the mobile browser (on Android) did not accept the certificate anymore.  Verifying the SSL configuration using SSL Labs, I was surprised to only receive a "B" .
After googling a little bit, I came to the conclusion that there have been changes in the certificate chain, and that these apparently are not reflected in fullchain.pem.


Using SSL Labs (which provides "deep analysis of the configuration of any SSL web server on the public Internet"), you can see whether you reach at least an "A" .
Image:Let’s encrypt certificates for Domino Part 2 - renew certificates (UPDATED)

If you don't have an A, most likely you've run into the chaining problem I encountered : "Chain issues" needs to say "None", and the X3 certificate needs to be sent by the server as well.

Image:Let’s encrypt certificates for Domino Part 2 - renew certificates (UPDATED)

In my case (when there were chaining issues), SSL Labs complained about a missing "Let's Encrypt Authority X3".  I verified the stores, and they still used the X1 authority.

Manually updating trusts

So... it appears to me that fullchain.pem does not contain the correct (new) chain, or that the kyrtool does not import it correctly.
Anyway, I manually updated the trusts by downloading the new X3 and X4 certificates from here

Download the X3 and X4 certificates to a temporary location on your server (eg. /tmp)


Import these into your keyring :

su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import roots -k /local/notesdata/keyring2.kyr -i /tmp/lets-encrypt-x3-cross-signed.pem"
su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import roots -k /local/notesdata/keyring2.kyr -i /tmp/lets-encrypt-x4-cross-signed.pem"

You can check the certificates and the trusted roots :

su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini show certs -k /local/notesdata/keyring2.kyr"
su - notes -c "/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini show roots -k /local/notesdata/keyring2.kyr"

Restart the HTTP server, and everything is OK.

I expect that this problem will be resolved at some point in the future, so that these manual steps are no longer necessary, but for now, this works.

Remove unwanted Basic Authentication prompts in Connections

Tom Bosmans  20 April 2016 21:53:09
There is no point in allowing Basic Authentication in environments where users don't have passwords, for instance when single sign-on is set up with SPNEGO.
Connections (like most IBM products) then relies on LTPA tokens for authentication.

Also, in these enterprise environments, you would generally secure the applications (meaning : using the J2EE roles in all applications in Connections to disallow anonymous access ; everywhere it says "Everyone", use "All Authenticated users" instead).

The challenge in this scenario is that SOME URIs will not redirect you to the standard login form (forms-based authentication), but rather pop up the annoying Basic Authentication prompt.
A sample URL that will prompt for BA is for instance ... Another is accessing a Profile picture (only if you secured the Profiles application).

When your users use a normal browser to access Connections, you'll hardly ever see a BA prompt, because users would generally not access these URLs as the initial call.  So they'll already be authenticated by other means.

But it's a different story when you use API access to Connections - for instance to integrate Connections content into an intranet (that does not offer LTPA SSO).
In that case, it's pretty difficult to avoid Basic Authentication prompts popping up, because it's not very easy to catch them in the browser.

So we went with a drastic solution - disable Basic Authentication prompts completely.
This does not disable Basic Authentication, it just disables the prompt.  In our specific case, this again enables the javascript code to catch the 401 HTTP response correctly and start an authentication sequence.
The solution does not change the header when it's a Connections server making the connection.  Connections, by itself, also uses some Basic Authentication for its interservice requests, and I don't want to mess with those.  I don't think this exception is really necessary (since, again, Basic Authentication is not disabled), but still.

The solution is based on what's written here :

However, that did not exactly work for me: I had to remove the "always" keyword - otherwise the Header edit would not work.

#       remove basic auth headers in the response except for the Connections nodes (incl. ccm, fileviewer, etc)
SetEnvIf Remote_Addr ".*" REMOVEBASICAUTH
SetEnvIf Remote_Addr "10\..*|127\..*" !REMOVEBASICAUTH
Header edit WWW-Authenticate ^Basic NGCBasic env=REMOVEBASICAUTH
# end

So what these 3 lines do :
1. SetEnvIf Remote_Addr ".*" REMOVEBASICAUTH

Set the environment variable for all connections

2. SetEnvIf Remote_Addr "10\..*|127\..*" !REMOVEBASICAUTH

Remove the environment variable based on a regular expression (in this case, all IP addresses starting with 10., and localhost).  This regex should match the IP addresses of all the Connections servers (and FileNet, and FileViewer, and Cognos, and ...) - any server that makes calls to Connections.

3. Header edit WWW-Authenticate ^Basic NGCBasic env=REMOVEBASICAUTH

This line edits the WWW-Authenticate header if it starts with "Basic", and changes it to something else (the replacement token is arbitrary).  The result is that you do not get a prompt in the browser.
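Before deploying, you can sanity-check the Remote_Addr expression with grep (the IP addresses and the helper are made-up examples of mine, with the 10.x pattern written as 10\..*) :

```shell
# Simulate the SetEnvIf matching: addresses matching the pattern are
# backend servers (prompt left alone), everything else gets rewritten.
check() {
  if printf '%s' "$1" | grep -qE '^(10\..*|127\..*)$'; then
    echo "$1: backend (header kept)"
  else
    echo "$1: external (header rewritten)"
  fi
}

check 10.0.0.15      # a Connections node
check 127.0.0.1      # localhost
check 192.168.1.20   # an ordinary client
```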

Remark on Mobile access and Desktop plugin

The Mobile Connections applications do use Basic Authentication for authenticating, as does the Desktop plugin.
However, neither relies on the Basic Authentication prompt working correctly.

Let’s encrypt TLS certificate in Domino

Tom Bosmans  4 December 2015 20:35:44

Let's encrypt your Domino http server

"Let's encrypt" is a new service that's currently in public beta (since the 3rd of December 2015).

Good news : it's free, and it allows you to get certificates from a trusted CA (yes, trusted ; they are included in recent Chrome, Firefox and Safari browsers).  From their site :
Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.

And even better, you can use it in Domino (at least since Domino 9.0.1 FP2, which adds TLS encryption).  So now you have no more reasons to keep running an http-only site ; you can switch everything to https.

How does it work ?  Insanely simply !
If you use Apache, it's even fully automated, but for Domino, you can generate the certificates only, and then import them into the Domino keyring.

This shows how to do it on Linux (I use CentOS 7).

Install the let's encrypt software

You need to install git first, then install the let's encrypt client.

Log in as root ; I executed this from the /root folder :

yum install git
git clone

Stop your http server

To run Let's encrypt in "certonly" mode, you need to have a free port 80 or 443.  So stop the http server on your Domino server (or stop the Domino server completely).

Create the certificate

Now run the client
cd letsencrypt/
./letsencrypt-auto certonly

There's a lot more options, but this just focuses on creating a certificate to use in Domino.

The interface opens, with really only one option : a field where you can enter your hostnames.

Image:Let’s encrypt TLS certificate in Domino
Note that you need to enter hostnames here ; Let's encrypt does not support wildcard certificates for now.  You can add multiple hostnames ; they will be added as "alternates" (subject alternative names).

The result is that the pem certificates are created in this directory :

Obviously, it depends on the domain name you entered.
You'll find 4 files.

cert.pem -> ../../archive/
chain.pem -> ../../archive/
fullchain.pem -> ../../archive/
privkey.pem -> ../../archive/

You'll need all 4 , in the next steps.

Copy the certificates to a temporary location

You'll need to run the kyrtool in the next steps as the user running your Domino server, so you need to put the .pem files in a location that user can access.

I copied them to /tmp.
You must remove them afterwards, by the way !

Install kyrtool

Download the kyrtool from Fix Central.  You want to get KYRTool_9x_ClientServer .
Extract the zip, and then copy the correct kyrtool binary to your Notes program directory, on your server.

cp kyrtool /opt/ibm/domino/notes/latest/linux/

Change the permissions on the file, so it's executable by your notes user.

cd /opt/ibm/domino/notes/latest/linux/
chmod 755 kyrtool

You can also perform these tasks on an Administrator client - but why bother.
More information here :

Create a new keyring and import the certificates

Now we create the keyring, and import our certificates that we put in /tmp .

You'll need to execute the commands as the user that runs your Domino server ; in my case, that's "notes".
Note that you can't run kyrtool directly ; you need to run it through the "startup" executable.

su - notes
/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini create -k /local/notesdata/keystore2.kyr -p

Now import your root certificate, the keys and the certificate that we got from Let's Encrypt.

/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import roots -k /local/notesdata/keystore2.kyr -i /tmp/fullchain.pem
/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import keys -k  /local/notesdata/keystore2.kyr -i /tmp/privkey.pem
/opt/ibm/domino/bin/tools/startup kyrtool =/local/notesdata/notes.ini import certs -k /local/notesdata/keystore2.kyr -i /tmp/cert.pem

Configure your Sites to use the new keyring

Use a Notes client to go to your Domain's names.nsf.  Open the document that configures your SSL security (which can be a server document, or an internet site document).
Obviously, I use Internet Site documents.

Image:Let’s encrypt TLS certificate in Domino

The only thing you need to change is the Key file name.  It needs to point to the /local/notesdata/keystore2.kyr you created earlier.
Save and close the document, do the same for the other internet site documents (eg. SMTP and LDAP, and other HTTP) and restart http.

Done !

The site is now using secure TLS encryption.
Image:Let’s encrypt TLS certificate in Domino

Continued here :
Part 2 - autorenew the certificates

How to reset all custom themes in Connections communities to default

Tom Bosmans  25 September 2015 17:27:27
We started off with Communities in Connections allowing custom themes, but after applying branding, we wanted to disable the custom themes.
The custom themes in Connections Communities 5.0 don't all use the next-gen theme yet, so with custom branding, some things look really ugly.

Now, disabling custom themes does not affect existing communities.  You can ask your community owners to please reset their themes to default, but you can also do it for them.
Unfortunately, doing this through the API does not work (you can set everything through the API, but in this version, 5.0 CR2, setting the theme appears to be broken - we've opened a PMR), so we're left with editing the database directly.
You need to update multiple databases/schemas, not just the Communities schema !  Here's what to do :

Of course, you should never edit the Connections databases directly, ever .... so use this at your own risk :-)