Tips & tricks for installing and running ICS products

Trying out Domino data services with Chart.js

Tom Bosmans  4 December 2017 11:06:48
Domino Data Access Services have been around for a few years now, but I never actually used them myself.

https://www-10.lotus.com/ldd/ddwiki.nsf/xpAPIViewer.xsp?lookupName=IBM+Domino+Access+Services+9.0.1#action=openDocument&content=catcontent&ct=api

Since I recently started to dabble in Ethereum mining, I was looking for a place to store my data and draw some graphs and the likes.  I first tried out LibreOffice Calc, but I couldn't find an easy way to automatically update it with data from a REST API.  
So I turned to good old Domino, being the grandpa of NoSQL databases (before it was cool).

The solution I came up with retrieves multiple JSON streams from various sources, combines them into a single JSON document, and uploads that into a Domino database (using Python).
To look at the data, I created a literal "SPA" (single page application): I use a Page in Domino to run JavaScript code that retrieves the data, again in JSON format, and turns it into a nice graph (using Chart.js).
So I don't actually use any Domino code to display anything; Domino is simply used to store and manage the data.

This article consists of two parts:


  • loading data into Domino using Python and REST services
  • displaying data from Domino using the Domino Data Access Services and an open-source JavaScript library, Chart.js ( http://www.chartjs.org/ ), to display charts


Python to Domino


Domino preparation


To use the Domino Data Access Services in a database, you need to enable them:

  • On the server
  • In the Database properties (Allow Domino Data Service)
  • In the View properties


Server configuration


Open the internet site document for the server/site you are interested in.
In the Configuration tab, scroll down to "Domino Access Services" and enable "Data" there.

Note that you may want to verify the enabled methods as well: enable PUT if you plan to use the services that use PUT requests.
And if you're not using Internet Site documents yet, well, then I can't help you :-)

After modifying the Internet Site document, you need to restart the HTTP task on your Domino server.
Image:Trying out Domino data services with Chart.js

Database properties


In the Advanced properties, select "Views and Documents" for the "Allow Domino Data Service" option.
Image:Trying out Domino data services with Chart.js

View properties


Open the View properties and, on the second-to-last tab, enable "Allow Domino Data Service operations".
Image:Trying out Domino data services with Chart.js

There is no equivalent option in Forms.

Python code


Instead of figuring out how to load JSON data in a Notes agent or XPages (which is no doubt possible, but seems like a lot of work), I chose to use a simple Python script that I kick off using a cron job. I run this code collocated with the Domino server, but that is not necessary. Because the POST requires authentication and the URL uses TLS, it could just as well run anywhere else.
Any other server-side code would do the same thing, so Node.js or Perl or ... are all valid options.

Two JSON objects are retrieved:

resultseth = requests.get('http://dwarfpool.com/eth/api?wallet={wallet}&email={email address}')
data = resultseth.json()

and

currentprice = requests.get('https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=BTC,USD,EUR')
pricedata = currentprice.json()


The first JSON that's returned contains nested data (the workers object).

{
  "autopayout_from": "1.0",
  "earning_24_hours": "0.1123",
  "error": false,
  "immature_earning": 0.000890178102,
  "last_payment_amount": "1.0",
  "last_payment_date": "Thu, 16 Nov 2017 16:24:01 GMT",
  "last_share_date": "Mon, 04 Dec 2017 12:41:33 GMT",
  "payout_daily": true,
  "payout_request": false,
  "total_hashrate": 30,
  "total_hashrate_calculated": 31,
  "transferring_to_balance": 0.0155,
  "wallet": "0x5ac81ec3457a71dda2af0e15688d04da9a98df3c",
  "wallet_balance": "5411",
  "workers": {
    "worker1": {
      "alive": true,
      "hashrate": 15,
      "hashrate_below_threshold": false,
      "hashrate_calculated": 16,
      "last_submit": "Mon, 04 Dec 2017 12:38:42 GMT",
      "second_since_submit": 587,
      "worker": "worker1"
    },
    "worker2": {
      "alive": true,
      "hashrate": 15,
      "hashrate_below_threshold": false,
      "hashrate_calculated": 16,
      "last_submit": "Mon, 04 Dec 2017 11:38:42 GMT",
      "second_since_submit": 111,
      "worker": "worker2"
    }
  }
}


It turns out that Domino does not like that very much, or rather cannot handle nested JSON, but there is a simple solution: flatten the JSON.

I use the "flatten_json" package in Python for this, which makes it easy.

In the sample above, it would translate


{ "workers":
{ "worker1":
  "worker": "worker1"
}
}



into


{workers_worker1_worker: "worker1"}


(Information about this particular API can be found at http://dwarfpool.com/api/ )

flatten_json can be installed using pip:

pip install flatten_json
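Under the hood, flattening is just a recursive walk over the dictionary. Here is a small standard-library sketch of the idea (a simplified illustration, not the actual flatten_json implementation):

```python
def flatten(nested, parent_key='', sep='_'):
    """Recursively collapse a nested dict into a single level,
    joining the key path with sep (e.g. workers_worker1_worker)."""
    flat = {}
    for key, value in nested.items():
        new_key = parent_key + sep + key if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

print(flatten({"workers": {"worker1": {"worker": "worker1"}}}))
# {'workers_worker1_worker': 'worker1'}
```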


From a public API, I can get the current price of ETH expressed in EUR, US dollars and Bitcoin.

In Python, I now have two dictionary objects with the JSON data (key-value pairs).
I combine them into a single one by adding the data of the second dictionary to the first.

for lines in pricedata:
   data[lines] = pricedata[lines]
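Incidentally, this loop is exactly what the built-in dict.update method does. A standalone sketch, with made-up numbers instead of the live API values:

```python
data = {"total_hashrate": 30, "wallet_balance": "5411"}
pricedata = {"BTC": 0.028, "USD": 452.9, "EUR": 381.2}  # made-up prices

# copies every key-value pair of pricedata into data, overwriting duplicates
data.update(pricedata)

print(len(data))  # 5
```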


The nice thing about these Python dictionaries is that they let you dynamically edit the JSON before submitting it again. I could remove the data I don't want, for instance.
In this case, I need to do something about the boolean values returned by the Dwarfpool API, because the Domino Data Access Services do not like them!

for lines in data:
    print lines, data[lines]
    # use "is" rather than "==": 1 == True in Python, so "==" would
    # also stringify integer values of 1 and 0
    if data[lines] is True:
        data[lines] = "True"
    if data[lines] is False:
        data[lines] = "False"
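The same conversion can also be written as a dictionary comprehension; isinstance(v, bool) is the safest test here, again because 1 == True in Python:

```python
data = {"alive": True, "payout_request": False, "total_hashrate": 30}

# replace boolean values by the strings "True"/"False", leave the rest alone
data = {k: (str(v) if isinstance(v, bool) else v) for k, v in data.items()}

print(data)  # {'alive': 'True', 'payout_request': 'False', 'total_hashrate': 30}
```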


The next step is to post the JSON to Domino.
It's very straightforward: the URL used will create a new Notes document, based on the Form named "Data". ( https://www-10.lotus.com/ldd/ddwiki.nsf/xpAPIViewer.xsp?lookupName=IBM+Domino+Access+Services+9.0.1#action=openDocument&res_title=Document_Collection_POST_dds10&content=apicontent )

The Domino Form needs to exist, of course, but it's not very important that the fields are on there.


url = 'https://www.gwbasics.be/dev/dataservices.nsf/api/data/documents?form=Data'


There are some headers to set; in particular, "Content-Type" must be set to "application/json".

To authenticate, I use a Basic Authentication header. In this case, the user I authenticate with only has Depositor access to the database (which is the first time in 20 years of Domino experience that I see the point of having this access level in an ACL :-) )

The service responds with an HTTP code 201 if everything went correctly. This is of course something you can work with (if the response code does not match 201, do something to notify the administrator, for instance).
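Such a check could look like this; notify_admin is a hypothetical helper you would implement yourself (e.g. to send a mail):

```python
def check_upload(status_code, expected=201):
    """Return True when Domino created the document (HTTP 201)."""
    if status_code != expected:
        # notify_admin("Domino upload failed: HTTP %d" % status_code)  # hypothetical helper
        return False
    return True

# after the POST in the script:
#   check_upload(response.status_code)
```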

The full script:


# retrieves dwarfpool data for my wallet
# retrieves current price ETH
# merges the 2 in a flattened JSON
# uploads the JSON into a Domino database using the Domino REST API
import requests
import json
from flatten_json import flatten

resultseth = requests.get('http://dwarfpool.com/eth/api?wallet=<wallet>&email=<email address>')
data = resultseth.json()
print "-----------------"

# retrieve eth price
currentprice = requests.get('https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=BTC,USD,EUR')
pricedata = currentprice.json()

print "------------------"
data = flatten(data)

# merge json data
for lines in pricedata:
    data[lines] = pricedata[lines]

# stringify booleans, which the Domino Data Service rejects
for lines in data:
    print lines, data[lines]
    if data[lines] is True:
        data[lines] = "True"
    if data[lines] is False:
        data[lines] = "False"

url = 'https://www.gwbasics.be/dev/dataservices.nsf/api/data/documents?form=Data'
myheaders = {'Content-Type': 'application/json'}
authentication = ("<Depositor userid>", "<password>")
response = requests.post(url, data=json.dumps(data), headers=myheaders, auth=authentication)
print response.status_code



Lessons learned




  • The Domino DAS are fast and easy to use from Python.
  • The Domino Data Access Services POST requests do not handle nested JSON, so you need to massage your JSON into a flat format first.
  • The Domino DAS is pretty picky about types: it does not support boolean values (true/false).
  • Finally, I have seen a good use of the Depositor access level in action!


Chart.js and Domino


Now the data is in Domino, and we can start thinking about displaying it.

The Single Page Application


I created a Page in Domino and put all the HTML and JavaScript on that page as pass-thru HTML.

Having the code in Domino has the advantage that the Domino security model is used, so I need to authenticate first to be able to use the SPA.
The same code could live anywhere else (e.g. as an HTML page on any webserver), but then I'd have to worry about authenticating the Ajax calls that retrieve the data.
I set the Page to be the "Homepage" of the database.

I use two JavaScript libraries: jQuery and Chart.js.

For Chart.js, there are several ways to include the code; I chose to use a Content Delivery Network ( http://www.chartjs.org/docs/latest/getting-started/installation.html )

<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.7.1/Chart.bundle.js" integrity="sha256-vyehT44mCOPZg7SbqfOZ0HNYXjPKgBCaqxBkW3lh6bg=" crossorigin="anonymous"></script>


For jQuery, I learned that the "slim" build does not include the Ajax functions (so no getJSON); use the minified or full version instead.

Chart.js



Chart.js is a simple charting engine that is easy to use and apparently also very commonly used.
I did have problems getting it to work correctly with my Domino data, but that turned out to be related to Domino, not to Chart.js.

The samples that are out there for Chart.js generally do not include dynamic data, so here's how to feed Chart.js dynamic data from Domino.

Initialize


What worked best for me is to initialize the chart in the $(document).ready function. Without jQuery, you can do the same with window.onload.

The chart is stored in a global variable, myChart, so it is accessible from everywhere.

The trick here is to initialize the chart's data and labels as empty arrays. The arrays will be filled with data in the next step (the title is also dynamic, you may notice).

In this sample, I have 2 datasets, and only at the end of this function do I call the first load of the data (updateChartData).


<script language="JavaScript" type="text/javascript">
var pageNumber = 0;
var pageSize = 24;
var myChart = {};
// prepare chart with an empty array for data within the datasets
// 2 datasets, 1 for EUR, 1 for ETH
$(document).ready(function() {
    // the remove data button needs to be disabled when we start
    document.getElementById('removeData').disabled = true;
    var ctx = document.getElementById("canvas").getContext("2d");
    myChart = new Chart(ctx, {
        type: 'line',
        data: {
            labels: [],
            datasets: [
                {
                    label: "EURO",
                    data: [],
                    borderColor: '#ff6384',
                    yAxisID: "y-axis-eur"
                },
                {
                    label: "ETH",
                    data: [],
                    borderColor: '#36a2eb',
                    yAxisID: "y-axis-eth"
                }
            ]
        },
        options: {
            responsive: true,
            animation: {
                easing: 'easeInOutCubic',
                duration: 200
            },
            tooltips: {
                mode: 'index',
                intersect: false
            },
            hover: {
                mode: 'nearest',
                intersect: true
            },
            scales: {
                xAxes: [{
                    display: true,
                    scaleLabel: {
                        display: true,
                        labelString: 'History'
                    }
                }],
                yAxes: [{
                    type: "linear",
                    display: true,
                    position: "left",
                    id: "y-axis-eth",
                    // only want the grid lines for one axis to show up
                    gridLines: {
                        drawOnChartArea: false
                    }
                }, {
                    type: "linear",
                    display: true,
                    position: "right",
                    id: "y-axis-eur"
                }]
            }
        }
    });
    updateChartData(pageSize, pageNumber);
});


Load data


The getJSON call (jQuery) connects to the Domino view and passes 3 parameters:
- ps (page size): set to 24, to retrieve the last 24 documents (a document is generated every hour by the Python cron job)
- page (page number): controls the paging; initially set to 0
- systemcolumns=0: avoids Domino-specific data being returned (data that we won't use anyway in this scenario)
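For reference, the same collection URL can be assembled server-side as well; a Python sketch using the host, database and view names from this article:

```python
from urllib.parse import urlencode

def collection_url(host, db, view, ps, page):
    """Build a Domino Data Service view-collection URL with paging parameters."""
    query = urlencode({"systemcolumns": 0, "ps": ps, "page": page})
    return "https://%s/%s/api/data/collections/name/%s?%s" % (host, db, view, query)

print(collection_url("www.gwbasics.be", "dev/dataservices.nsf", "GraphData", 24, 0))
# https://www.gwbasics.be/dev/dataservices.nsf/api/data/collections/name/GraphData?systemcolumns=0&ps=24&page=0
```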

The JSON retrieved from the Domino view is loaded into an array of objects that we can loop through.

The chart data is directly accessible:
Labels: myChart.data.labels
Dataset 1: myChart.data.datasets[0].data
Dataset 2: myChart.data.datasets[1].data

The last call, myChart.update(), redraws the chart with the new data.


var updateChartData = function(ps, pn) {
    $.ajaxSetup({
        async: false,
        type: "GET"
    });
    myChart.options.title = {
        display: true,
        text: 'Last 24 hour performance - ' + $.format.date(Date.now(), "d MMM yyyy HH:mm")
    };
    $.getJSON("/dev/dataservices.nsf/api/data/collections/name/GraphData?systemcolumns=0&ps=" + ps + "&page=" + pn, function(data) {
        console.log(" Loading page " + pn + " with pagesize " + ps + " returned " + data.length + " entries");
        for (var i = 0; i < data.length; i++) {
            //console.log(" index: " + i + "  EUR : " + data[i].TOTAL_VALUE_IN_EUR);
            myChart.data.labels.unshift($.format.prettyDate(data[i].CREATED));
            myChart.data.datasets[0].data.unshift(data[i].TOTAL_VALUE_IN_EUR);
            myChart.data.datasets[1].data.unshift(data[i].TOTAL_ETH);
        }
        //shift to delete first element in arrays, not necessary in this case
        myChart.update();
    });
};


This is the end result :
Image:Trying out Domino data services with Chart.js

Actions


To code the buttons, I used an event listener (copied from the Chart.js samples: http://www.chartjs.org/samples/latest/charts/line/basic.html )
However, they did not work as expected initially.

On every click, the whole page reloaded, which is not what you want in a Single Page Application!

To counter that, I added the "e" parameter to the function to receive the event object, and then used preventDefault to avoid reloading the page.


$("#addData").click(function(e) {
    // --------- prevent page from reloading ------
    e.preventDefault();

    // ----
    pageNumber++;
    console.log(" Retrieving page : " + pageNumber);
    updateChartData(pageSize, pageNumber);
    document.getElementById('removeData').disabled = false;
});


Without jQuery, it would look like this (it needs some additional code for cross-browser compatibility).
The first line is there for cross-browser compatibility (Firefox does not know window.event, which is actually an ugly IE hack).


document.getElementById('addData').addEventListener('click', function(e) {
    if (!e) { e = window.event; }
    e.preventDefault();

    pageNumber++;
    console.log(" Retrieving page : " + pageNumber);
    updateChartData(pageSize, pageNumber);
    document.getElementById('removeData').disabled = false;
});


Only after I made that change did I realize that this behaviour was in fact caused by Domino, and that disabling the database property "Use JavaScript when generating pages" would fix it.
Why our Domino developers ever thought it was a good idea to put HTML forms in Pages, I will never understand (I understand why they used this in Forms).

And in my testing, I still needed preventDefault, even with the database property set.

Some after-the-fact googling suggests that using preventDefault is in fact the way to go (e.g. https://xpagesandmore.blogspot.be/2015/06/bootstrap-js-modal-plugin-in-xpages.html )

Lessons learned




  • Using a Domino Page to host the JavaScript code enables the Domino security model.
  • I had forgotten about the Domino quirks with regards to web applications (e.preventDefault).
  • $.getJSON can be set up using $.ajaxSetup, although it's not necessary.
  • I didn't find good Chart.js samples for dynamic loading of data.



Since we're talking Ethereum, you may of course donate here :-)  0x5ac81ec3457a71dda2af0e15688d04da9a98df3c

    Check limits on open files for running processes

    Tom Bosmans  10 November 2017 17:02:41
    OK, setting the correct limits in /etc/security/limits.conf and messing around with ulimit can leave you thinking everything is OK, while it is not.
    This little line gives you an overview of all the running Java processes, to quickly check that the open file limit is correct.

    check the limits (open files) for all running java processes
    (as root)

    for i in $(pgrep java); do prlimit -p $i|grep NOFILE; done


    In this example, you see that just 2 of the JVMs are running with the correct limits. The easiest way to resolve this (if /etc/security/limits.conf is correct, and you have a service that starts your nodeagent) is to reboot:

    NOFILE     max number of open files               65536     65536
    NOFILE     max number of open files               65536     65536
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
    NOFILE     max number of open files                1024      4096
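The same NOFILE numbers can also be read from within Python using the standard resource module (a Unix-only sketch, checking the current process rather than every JVM):

```python
import resource

# soft and hard limits on open files for the current process;
# these are the two numbers prlimit prints in the NOFILE row
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("NOFILE soft=%d hard=%d" % (soft, hard))
```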


    DKIM deployed on my mail servers

    Tom Bosmans  16 June 2017 10:40:42
    After moving my server to a new physical box (and new IP Address), some of the more difficult large mail systems started rejecting mail from my domains.
    Google was OK with my mails, although not ecstatic, but Yahoo and especially Microsoft apparently considered my systems dangerous.

    I googled around, found a lot of crap information, but resolved the issue and improved my mail setup in the end. It turned out that I should be using TLS (for secure SMTP) and DKIM (DomainKeys Identified Mail - http://dkim.org/ )


    The bad stuff


    - There are a lot of links advising you to use Return Path (a.o. here: https://blog.returnpath.com/google-is-failing-your-perfectly-good-dkim-key-and-why-thats-a-good-thing/)
    Don't invest time here. It's a service for spammers, I would say (they call it "email marketing"). You need to register and will likely never get a response anyway.
    - Domino does not support DKIM natively, and likely never will (http://www-01.ibm.com/support/docview.wss?uid=swg21515751)
    - Microsoft (with all their domains - hotmail.com, outlook.com, ...) are very tricky
    - Yahoo is difficult as well, but should you care? You shouldn't be using Yahoo mail anyway these days.
    - MailScanner breaks DKIM (it modifies messages after they are signed), so it requires changes in the configuration.
    It's a little tricky to find out all the details, because most test tools report that "DKIM is working" while Google complains....
    - Postfix works with Letsencrypt certificates, but again, the information on the internet is sometimes incorrect or incomplete at best.
    - DKIM relies on DNS configuration, and that can be tricky (depending on your DNS provider or your DNS server)

    The good information


    - Postfix supports DKIM through the opendkim milter add-on (http://www.opendkim.org/)
    - testing DKIM can be done using a tool like this: http://www.appmaildev.com/en/dkim
    Very handy, fast, easy, no registration.
    - the proof is in the pudding: sending mail to gmail.com (Google) actually shows the information nice and tidy.
    - Letsencrypt and Postfix work together nicely once the setup is done correctly.


    Let's get to work


    So what I had to do, in a nutshell :


    • Change my Domino configuration to also send outgoing mail through Postfix. This is as simple as setting the "Relay host for messages leaving the local internet domain".
      This is necessary to allow opendkim to sign the outgoing mails as well.
      Relay host for messages leaving the local internet domain: mail.gwbasics.be



    • Configure Postfix - add the milter for dkim (and configure TLS with Letsencrypt) in main.cf
    • Configure MailScanner - apply the settings in the configuration file that mention dkim.
    • Configure opendkim (generate the keys)
    • Configure DNS (create a new TXT record for the key you created. In general, you can use "default" as the selector, and you need a record for default._domainkey. )
    • Verify your key using opendkim-testkey
    • Test the DNS entry (e.g. using http://dkimcore.org/tools/keycheck.html , or using host (e.g. host -t txt default._domainkey.gwbasics.be))
    • Test the mails you send out (use http://www.appmaildev.com/en/dkim ), or use Gmail to check.
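A DKIM TXT record value is just a list of tag=value pairs separated by semicolons (v=DKIM1; k=rsa; p=<public key>). A small Python sketch for pulling one apart, with a truncated made-up key:

```python
def parse_dkim_record(txt):
    """Split a DKIM TXT record value into its tag=value pairs."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = parse_dkim_record("v=DKIM1; k=rsa; p=MIGfMA0GCSq...")
print(record["v"], record["k"])  # DKIM1 rsa
```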



    Use Gmail to check your settings


    Gmail actually has the possibility, by default, to verify various settings.
    Next to the "to me", click the dropdown button.
    If you have set up DKIM correctly, it will show a "signed-by" line. You can see TLS information here as well.
    Image:DKIM deployed on my mail servers
    Additionally, you can also go to "Show original"
    Image:DKIM deployed on my mail servers
    This will show the source of the email, and it has a summary header that contains important information.
    As you can see, it shows that DKIM has PASS. If it says something else here, you need to go back to the drawing board.
    Image:DKIM deployed on my mail servers

    This can contain a lot more options, btw.  If you use DMARC as well, it will show up here too.  For my domain, you see the SPF option.


    Microsoft's domains



    Once you're certain DNS is set up correctly and you're not an open relay, you can easily contact Microsoft directly to unblock your mail server(s) here:
    This immediately works for hotmail.com, outlook.com and the other domains.

    https://support.microsoft.com/en-us/getsupport?oaspworkflow=start_1.0.0.0&wfname=capsub&productkey=edfsmsbl3&locale=en-us&ccsid=636329734561893294

    This took only a few hours in my case.

    Server outage (disk failure)

    Tom Bosmans  6 June 2017 10:08:04
    Yesterday morning, I noticed that my server was running slow. I couldn't see any processes hogging resources, though.

    Instead of really looking into the problem, I decided to reboot the machine. That was a mistake. As the server did not come back online, I realised that there was likely a problem with the disks.
    I have a dedicated server at http://www.hetzner.de , and it's really the first time I've run into problems. I can really recommend this hosting provider.

    The server has a software RAID with 2 disks, running CentOS.
    I assumed that mdadm was trying to recover, but I had no way of knowing, since the machine did not come back online.
    At this point, I got very scared - I feared loss of data.

    Fortunately, the guys at Hetzner supply a self-service console to the machine (you start a rescue system).

    I could log in using that mechanism, and then I was able to mount the filesystems in the RAID. It quickly became clear that indeed, 1 disk had died.

    Now I could do 2 things :
    - request a disk replacement. This was going to take a while, and during that time I wouldn't have a redundant disk. And chances are high that when 1 disk fails, the other will also fail.
    - move my installation to a new server. I know that between ordering a new server and having the OS installed on it ready for use only takes around 1 hour (did I mention these guys are great? Note that this is physical hardware, not some cloud service!)

    I decided to go with option 2.

    This consisted of copying the data from the old server to the new one (this took a long time), reinstalling the software, reapplying the configuration for my mail servers and other stuff, and then adjusting the Domino configuration (changing the IP addresses).

    In the end, it took me 10 hours in all to get the new server up and running, including copying the data. Now I just have to decommission the old server, and I'm done :-)



    Kubernetes and dns

    Tom Bosmans  28 April 2017 11:00:25
    Kubernetes apparently doesn't use a hosts file, but instead relies on DNS. So when setting up Orient Me (for Connections 6) on a test environment, you may run into problems.
    https://github.com/kubernetes/dns/issues/55

    Then you may want to look back to this older blog entry :
    Setup DNS Masq

    You're welcome :-)

    To keep with the Docker mechanism, look at this to make your life easier: https://github.com/jpillora/docker-dnsmasq

    Note that this is obviously not the only solution; you can also follow these instructions: http://www.robertoboccadoro.com/2017/04/13/orientme-in-a-test-environment-how-to-make-it-work/





    Security Reverse Proxy with Connections - forcing all traffic through the interservice url

    Tom Bosmans  20 April 2017 15:17:50
    In a recent project, we are using IBM Datapower as a security reverse proxy to handle authentication and coarse-grained authorization for Connections 5.5.

    The approach we follow is similar to what I have described here :

    https://www-10.lotus.com/ldd/lcwiki.nsf/dx/IBM_Connections_v4.5_and_WebSeal_integration_col_alternative_approach

    In short: you want to avoid that the interservice traffic passes through the reverse proxy (whether it's Datapower or WebSEAL is not relevant at this point).

    The picture below shows that you want to have 2 paths of access:

    - for users, API access etc.: through your reverse proxy

    - the internal, backend connections: through your HTTP server

    Image:Security Reverse Proxy with Connections - forcing all trafic through the interservice url

    To do that, you need to make sure you have different values for the href/ssl_href and interService values in LotusConnections-config.xml.

    <sloc:href>
                 <sloc:hrefPathPrefix>/wikis</sloc:hrefPathPrefix>
                 <sloc:static href="https://connections.company.com" ssl_href="https://connections.company.com"/>
                 <sloc:interService href="https://ihs.internal.com"/>
         </sloc:href>


    You can see a lot of things here:

    - you need to do this for ALL services defined in LotusConnections-config.xml

    - all URLs are https

    - the interservice URL is different from the static one

    - the interservice URL points to the HTTP server (or a load balancer in front of the HTTP servers)

    - the static URLs point to your reverse proxy (or the load balancer in front of your reverse proxy)

    - bonus points: put the interservice URL in a different domain from the static URLs, to avoid potential XSS problems

    Some additional remarks:

    - do not use the dynamicHost section, even though it is generally recommended when using reverse proxies

    - set the forceConfidentialCommunications flag to "true". ALWAYS. You don't want to use http in these times; you always want to use https.

    Now for the problem: although this should instruct Connections to use the internal HTTP server for interservice requests, in reality the backend still makes calls to the static URLs.


    That is very annoying: if you don't allow access from your back-end servers to the reverse proxy, everything will fail. If you do not allow unauthenticated access through Datapower (or your reverse proxy), widgets don't render.

    This becomes apparent for widgets in the following manner:

    [3/27/17 19:07:21:459 CEST] 00000149 IWidgetMetada W   com.ibm.cre.iwidget.widget.parser.InvalidWidgetDefinitionException: com.ibm.cre.iwidget.widget.parser.InvalidWidgetDefinitionException: org.xml.sax.SAXParseException: The element type "meta" must be terminated by the matching end-tag "".
    [3/27/17 19:07:21:535 CEST] 00000149 IWidgetMetada W   com.ibm.cre.iwidget.widget.parser.InvalidWidgetDefinitionException: com.ibm.cre.iwidget.widget.parser.InvalidWidgetDefinitionException: org.xml.sax.SAXParseException: The element type "meta" must be terminated by the matching end-tag "".
    [3/27/17 19:07:21:845 CEST] 000001c6 AbstractSpecF W org.apache.shindig.gadgets.AbstractSpecFactory SpecUpdater An error occurred when updating https://connections.company.com/connections/resources/web/com.ibm.social.ee/ConnectionsEE.xml. Status code 500 was returned. Exception: org.apache.shindig.common.xml.XmlException: The element type "meta" must be terminated by the matching end-tag "". At: (1,415). A cached version is being used instead.
    [3/27/17 19:07:21:847 CEST] 000001c7 AbstractSpecF W org.apache.shindig.gadgets.AbstractSpecFactory SpecUpdater An error occurred when updating https://connections.company.com/connections/resources/web/lconn.calendar/CalendarGadget.xml. Status code 500 was returned. Exception: org.apache.shindig.common.xml.XmlException: The element type "meta" must be terminated by the matching end-tag "". At: (1,415). A cached version is being used instead.

    This means that the back-end application (the widget container in this case) tries to retrieve the widget configuration XML file through the reverse proxy. Because the reverse proxy does not allow unauthenticated access, it presents an (HTML) login form. That is interpreted as "invalid xml".

    Now, by following the instructions here to allow unauthenticated URIs through your reverse proxy, this can be resolved: https://www.ibm.com/support/knowledgecenter/SSYGQH_5.5.0/admin/secure/t_secure_with_tam.html

    If you don't allow access from your backend to your reverse proxy, you're still out of luck though. And the previous part does nothing for any custom widgets or third-party widgets you may have deployed (e.g. Kudos Boards).

    Core Connections

    Luckily, there is an undocumented solution for this, which you may get through support.

    You need to edit opensocial-config.xml in your Deployment Manager's LotusConnections-config directory.

    After this line :


    <external-only-access-exceptions>none</external-only-access-exceptions>

    Add these lines :


         <proxyInterServiceRewrite name="opensocial" />
         <proxyInterServiceRewrite name="webresources" />
         <proxyInterServiceRewrite name="activities" />
         <proxyInterServiceRewrite name="bookmarklet" />
         <proxyInterServiceRewrite name="blogs" />
         <proxyInterServiceRewrite name="communities" />
         <proxyInterServiceRewrite name="dogear" />
         <proxyInterServiceRewrite name="files" />
         <proxyInterServiceRewrite name="forums" />
         <proxyInterServiceRewrite name="homepage" />
         <proxyInterServiceRewrite name="mediaGallery" />
         <proxyInterServiceRewrite name="microblogging" />
         <proxyInterServiceRewrite name="search" />
         <proxyInterServiceRewrite name="mobile" />
         <proxyInterServiceRewrite name="moderation" />
         <proxyInterServiceRewrite name="news" />
         <proxyInterServiceRewrite name="profiles" />
         <proxyInterServiceRewrite name="sand" />
         <proxyInterServiceRewrite name="thumbnail" />
         <proxyInterServiceRewrite name="wikis" />


    Sync your nodes and restart everything.  All traffic for the standard widgets (e.g. on the Homepage or in Communities) will now be routed correctly, and the widgets will render again.
    Note that this does not cover CCM or Mobile; these have separate settings in library-config.xml and mobile-config.xml respectively, where you can select "use interservice url".
    For Docs, the configuration is done in the JSON configuration files.  I'm not going into those details here.
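    A typo in opensocial-config.xml breaks widget rendering for the whole cell, so it's worth checking that the file is still well-formed XML before you sync the nodes.  A minimal sketch, assuming python3 is available on the Deployment Manager (the path in the usage example is hypothetical - substitute your own cell name):

```shell
# Return 0 if the file parses as well-formed XML, non-zero otherwise.
check_xml() {
  python3 -c "import sys, xml.dom.minidom as m; m.parse(sys.argv[1])" "$1" 2>/dev/null
}

# Example usage - adjust profile and cell name to your environment:
# check_xml /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/config/cells/myCell/LotusConnections-config/opensocial-config.xml \
#   && echo "well-formed" || echo "broken XML - fix it before syncing"
```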

    Custom or third-party widgets in Connections

    So great, the core Connections widgets are now rendering, and all traffic for them goes through the interservice URL you defined.

    There is, however, the small problem of custom widgets.  These are not handled by the rules in opensocial-config.xml.
    We use Kudos Boards (http://www.kudosbadges.com/subpages/Kudos%20Boards?OpenDocument), but this next section applies to most custom or third-party widgets you need to behave properly.

    There are two more files to edit:


    • service-location.vsd: to allow you to edit LotusConnections-config.xml
    • LotusConnections-config.xml


    You also need widget-config.xml, and you still need to edit opensocial-config.xml.

    widget-config.xml


    Find the custom widget's configuration in widget-config.xml.  In this example, we're looking at Boards (this is a sample, not an actual widget definition!).
    You need the defId value here, so in our case: Boards.

    <widgetDef defId="Boards" description="Kudos Boards widget" primaryWidget="true" modes="fullpage edit search" themes="wpthemeNarrow wpthemeWide wpthemeBanner" url="/kudosboards/boards.xml" showInPalette="true" loginRequired="true"/>
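    If you're not sure which defId values are in play, you can list them all from widget-config.xml.  A small helper (the file path in the usage example is an assumption - use your deployed LotusConnections-config copy):

```shell
# List every widget defId registered in a widget-config.xml file.
list_defids() {
  grep -o 'defId="[^"]*"' "$1" | sort -u
}

# Example usage:
# list_defids /path/to/LotusConnections-config/widget-config.xml
```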

    service-location.vsd


    In service-location.vsd, add a line for every custom or third-party widget.  You need to use the defId name from widget-config.xml from the previous step.



    The values here need to match the widget definition in widget-config.xml, the service reference in LotusConnections-config.xml, and the proxyInterServiceRewrite name in opensocial-config.xml.
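    For reference, service-location.vsd defines the valid service names as an XSD enumeration, so adding a custom widget boils down to adding one enumeration value.  A sketch from memory (check the exact surrounding element names against your own file):

```xml
<!-- inside the existing list of service names in service-location.vsd -->
<xsd:enumeration value="Boards"/>
```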

    LotusConnections-config.xml


    In LotusConnections-config.xml, you then add a serviceReference entry for every custom (or third-party) widget.  To be able to do that, you must first have changed service-location.vsd.



    <sloc:serviceReference enabled="true" serviceName="Boards" ssl_enabled="true">
         <sloc:href>
                 <sloc:hrefPathPrefix>/kudosboards</sloc:hrefPathPrefix>
                 <sloc:static href="https://connections.company.com" ssl_href="https://connections.company.com"/>
                 <sloc:interService href="https://ihs.internal.com"/>
         </sloc:href>
    </sloc:serviceReference>

    opensocial-config.xml


    Finally, in opensocial-config.xml, add the rule for your custom widget, after the rules you added earlier.


    <external-only-access-exceptions>none</external-only-access-exceptions>
         <proxyInterServiceRewrite name="opensocial" />
         ...
         <proxyInterServiceRewrite name="thumbnail" />
         <proxyInterServiceRewrite name="wikis" />
         <proxyInterServiceRewrite name="Boards" />

    That is it.  Sync your nodes and restart everything, and your custom widget will now work correctly.



    If all else fails ...


    Now, there is a simpler solution to all of this.  You can use your /etc/hosts file to simply map the public URL (connections.company.com) to the IP address of the internal HTTP server.
    I don't particularly like this solution, though.  It is difficult to maintain, and it probably breaks your company's standards and rules.

    CCM installation problems with Connections 5.5 - Connections Admin password changes

    Tom Bosmans  5 October 2016 14:20:13
    During the installation of CCM with Connections 5.5 against an Oracle RAC cluster, my colleagues ran into a number of problems and got the environment into a completely broken state.

    The core problem is that FileNet does not support the modern syntax for JDBC datasources.  This technote explains what to do:

    http://www-01.ibm.com/support/docview.wss?uid=swg21978233

    That is simple enough.

    However, my colleagues continued on a detour, where they also changed the ConnectionsAdmin password.  That created a bunch of problems of its own.
    It turns out that the Connections 5.5 documentation is incomplete on where to change the occurrences of the Connections admin user and/or password.

    The CCM installer mostly uses the correct source for the username and password (the values you enter in the installation wizard or the silent response file).
    But the script that configures the GCD datasources, for some reason, uses a DIFFERENT source for the administrator credentials.

    It goes back to the connectionsAdminPassword variable stored in the cfg.py file in your Connections directory (e.g. /data/Connections/cfg.py).

    So when you change the password for the Connections administrator, don't forget to update it in the cfg.py file as well, before running the CCM installation.

    "connectionsAdminPassword": "{xor}xxxxxxxxxxx",
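    If you only have the new password in clear text, you can generate the {xor} value yourself.  WebSphere's {xor} scheme is widely documented as XOR-ing each byte with '_' (0x5f) and Base64-encoding the result; this little helper is my own sketch of that scheme, not an official tool, so verify its output against a value encoded by WebSphere itself (assumes python3 is on the path):

```shell
# Encode a clear-text password in WebSphere's {xor} format.
# Assumes the documented scheme: each byte XOR 0x5f, then Base64.
encode_xor() {
  python3 - "$1" <<'EOF'
import sys, base64
plain = sys.argv[1].encode()
print("{xor}" + base64.b64encode(bytes(b ^ 0x5F for b in plain)).decode())
EOF
}

encode_xor "newConnectionsAdminPassword"
```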


    In the end, this took me over half a day to resolve, partly because the guys working on it had enabled every trace they could find (so I also ran into an out-of-disk-space exception), but mostly because the installation process for CCM is slow.


    Sametime business cards from Connections

    Tom Bosmans  28 September 2016 10:27:37
     
    After deploying Connections 5.0 CR4, the business card and photo integration in Sametime chat (the web browser version) suddenly stopped working.
    The problem is more pronounced in Internet Explorer.
    The photo doesn't load, nor does the business card information (the phone number, email address).  See the screenshot below:
    Image:Sametime business cards from Connections

    In the traces in the browser, it is clear that there's an HTTP 403 (Forbidden) error on this call:

    https://-SERVER-/profiles/json/profile.do?email=-EMAIL-&lang=en_us&callback=stproxy.uiControl.connections.businesscard.onBusinessCard&dojo.preventCache=1463032209022




    It wasn't very high on my priority list, but I've now found out what the problem is (thanks to IBM Support).

    Apparently, in CR4, something changed in the profiles-config.xml configuration:

    allowJsonpJavelin is changed from enabled="true" to enabled="false".

    So the solution is simple: change this back from false to true, sync the nodes, and restart the server(s) that host your Profiles application.


     <!--
          Optional security setting for Profiles javelin card.  This setting is to disallow JSONP security.
          Older 3rd party software may not work with this setting unless they include a reverse proxy.
          All of the Connections applications will work with JSONP disabled.
     -->
     <allowJsonpJavelin enabled="true"/>
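    If you edit profiles-config.xml directly rather than checking it out through wsadmin, the flip is a one-liner.  A sketch, assuming GNU sed and assuming the element appears exactly as <allowJsonpJavelin enabled="false"/> (back the file up first):

```shell
# Flip allowJsonpJavelin from false to true, in place (GNU sed).
flip_jsonp() {
  sed -i 's|<allowJsonpJavelin enabled="false"/>|<allowJsonpJavelin enabled="true"/>|' "$1"
}

# Example usage (back the file up first!):
# flip_jsonp /path/to/LotusConnections-config/profiles-config.xml
```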

    Connections and file indexing

    Tom Bosmans  16 June 2016 15:36:39
    The Stellent code that handles extracting content from the files in Connections relies on an old version of libstdc++.so.

    It relies on
    libstdc++.so.5

    While for instance on SLES 12, this is replaced with
    libstdc++.so.6


    It may not be immediately apparent that this is the problem.

    If you use ExportTest.sh, you get a Java error, which can throw you off.  So use the "exporter" binary directly when in doubt.
    Check this older blog post about the same problem (but then in Sametime): Installation of Sametime Meeting Server.

    It also explains how to verify your search indexing settings.
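    A quick way to confirm whether the old runtime is present at all is to ask the dynamic linker's cache.  A sketch (Linux only; the suggested fix in the message is a generic pointer, not a specific package name):

```shell
# Report whether the legacy C++ runtime the Stellent converters need is installed.
check_libstdcpp5() {
  if ldconfig -p 2>/dev/null | grep -q 'libstdc++\.so\.5'; then
    echo "libstdc++.so.5 present"
  else
    echo "libstdc++.so.5 missing - install the compat libstdc++ package for your distribution"
  fi
}

check_libstdcpp5
```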


    How to determine whether a WebSphere server is running in bash?

    Tom Bosmans  8 June 2016 11:06:37
    When creating a simple bash script (actually, scripts for installing Connections using Puppet, but that's a different story) that needed to check whether the Deployment Manager is running, I ran into the following problem:
    the serverStatus.sh script always returns "0" as its status code, even if the server is stopped.  That makes it useless in bash scripting, where normally I'd rely on the return code of a command to determine whether it ran successfully: "$?" equals 0 when the command succeeded, and is non-zero when something went wrong.
    But as I said, serverStatus.sh ALWAYS returns "0".

    There are more problems with the serverStatus.sh command; for one, it takes a relatively long time to execute.

    /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/serverStatus.sh
    echo $?
    0


    Anyway, another way to check whether the dmgr is running is by using "grep".  Note that there are differences in the options between the different flavors of Unix and Linux, but that is not the scope of this post.  I'm also not discussing the best practice that you should look for process IDs, and not rely on text...
    What is important is that you use the "wide" option (so you see the full java command that is used to start the JVM).
    On SLES:

    ps -ef | grep dmgr

    On Red Hat:

    ps -wwwef | grep dmgr


    Now there's an annoying problem: if the dmgr is running, this returns two processes - the dmgr process itself, but also the grep command itself.
    There's a trick for that - I found it here:  http://www.ibm.com/developerworks/library/l-keyc3/#code10

    Basically, to get around that, turn the grep expression into a regex.  This prevents the grep command itself from showing up:

    ps -ef | grep "[d]mgr"


    This will only show the process we're interested in.

    So now we have a nice, correct return code we can use to determine whether the Dmgr (or any other WebSphere server, for that matter) is running.
    If the Dmgr is running:

    ps -ef | grep "[d]mgr"
    echo $?
    0


    and if it's not running :

    ps -ef | grep "[d]mgr"
    echo $?
    1
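    Wrapped up in a function, this becomes easy to reuse in scripts.  The bracket trick keeps grep from matching its own command line; "[d]mgr" is just the pattern for the Deployment Manager's java process, so substitute any other server name the same way:

```shell
# Return 0 if a process matching the pattern is running, 1 otherwise.
# Pass the pattern with its first character bracketed (a regex) so
# the grep command itself never shows up as a match.
is_running() {
  ps -ef | grep -q "$1"
}

if is_running "[d]mgr"; then
  echo "Deployment Manager is running"
else
  echo "Deployment Manager is stopped"
fi
```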