Tips & tricks for installing and running IBM products

IBM Cloud Private installation - Filebeat problem (CentOS7)

Tom Bosmans  13 July 2018 13:17:00
After installation of IBM Cloud Private 2.1.0.3, I noticed I did not see any log information in the ICP UI.

While checking the logs, I saw that filebeat did not start correctly (or rather, completely failed to start).


(on the master node: )
[root@icpboot ~]# journalctl -xelf



Jul 13 11:54:17 icpboot.tombosmans.eu hyperkube[1825]: E0713 11:54:17.168699    1825 kuberuntime_manager.go:733] container start failed: RunContainerError: failed to start container "ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.

Jul 13 11:54:17 icpboot.tombosmans.eu hyperkube[1825]: E0713 11:54:17.168734    1825 pod_workers.go:186] Error syncing pod 2406dc66-85e4-11e8-8135-000c299e5111 ("logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"), skipping: failed to "StartContainer" for "filebeat" with RunContainerError: "failed to start container \"ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18\": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount."

Jul 13 11:54:28 icpboot.tombosmans.eu hyperkube[1825]: I0713 11:54:28.083562    1825 kuberuntime_manager.go:513] Container {Name:filebeat Image:ibmcom/filebeat:5.5.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:NODE_HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/usr/share/filebeat/filebeat.yml SubPath:filebeat.yml MountPropagation:} {Name:data ReadOnly:false MountPath:/usr/share/filebeat/data SubPath: MountPropagation:} {Name:container-log ReadOnly:true MountPath:/var/log/containers SubPath: MountPropagation:} {Name:pod-log ReadOnly:true MountPath:/var/log/pods SubPath: MountPropagation:} {Name:docker-log ReadOnly:true MountPath:/var/lib/docker/containers/ SubPath: MountPropagation:} {Name:default-token-kbdxx ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.

Jul 13 11:54:28 icpboot.tombosmans.eu hyperkube[1825]: I0713 11:54:28.083787    1825 kuberuntime_manager.go:757] checking backoff for container "filebeat" in pod "logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"




This means that in the IBM Cloud Private UI, I don't see any logs.

Digging a bit further, I saw that the logging-elk-filebeat-ds DaemonSet indeed was not started.

[root@icpboot ~]# kubectl get ds --namespace=kube-system
NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
auth-apikeys                         1         1         1         1            1           role=master     20h
auth-idp                             1         1         1         1            1           role=master     20h
auth-pap                             1         1         1         1            1           role=master     20h
auth-pdp                             1         1         1         1            1           role=master     20h
calico-node                          3         3         3         3            3           <none>          20h
catalog-ui                           1         1         1         1            1           role=master     20h
icp-management-ingress               1         1         1         1            1           role=master     20h
kube-dns                             1         1         1         1            1           master=true     20h
logging-elk-filebeat-ds              3         3         2         3            0           <none>          20h
metering-reader                      3         3         2         3            2           <none>          20h
monitoring-prometheus-nodeexporter   3         3         3         3            3           <none>          20h
nginx-ingress-controller             1         1         1         1            1           proxy=true      20h
platform-api                         1         1         1         1            1           master=true     20h
platform-deploy                      1         1         1         1            1           master=true     20h
platform-ui                          1         1         1         1            1           master=true     20h
rescheduler                          1         1         1         1            1           master=true     20h
service-catalog-apiserver            1         1         1         1            1           role=master     20h
unified-router                       1         1         1         1            1           master=true     20h


Now the problem is of course right there in the log message, but I did not know how to fix it:

Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.



On each node, execute these commands:

findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

mount --make-shared /var/lib/docker/containers



The result looks something like this:


[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

TARGET                     PROPAGATION

/var/lib/docker/containers private

[root@icpworker1 ~]# mount --make-shared /var/lib/docker/containers

[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

TARGET                     PROPAGATION

/var/lib/docker/containers shared
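

Since this has to happen on every node, a small script can save some typing.  This is a sketch of the idea in Python (run it as root on each node; it only relies on the findmnt and mount binaries from util-linux).  Note that mount --make-shared does not survive a reboot, so if the problem comes back after rebooting, run it again from rc.local or a systemd unit.

#!/usr/bin/env python
# Sketch: ensure /var/lib/docker/containers uses shared mount propagation.
import subprocess

PATH = "/var/lib/docker/containers"

def propagation(path):
    # findmnt -n -o PROPAGATION prints only the propagation flag
    out = subprocess.check_output(["findmnt", "-n", "-o", "PROPAGATION", path])
    return out.strip().decode()

if propagation(PATH) != "shared":
    subprocess.check_call(["mount", "--make-shared", PATH])

print("%s is now %s" % (PATH, propagation(PATH)))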



After that, the logging-elk-filebeat-ds DaemonSet becomes available:

[root@icpboot ~]# kubectl get ds --namespace=kube-system

NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE

...

logging-elk-filebeat-ds              3         3         2         3            2           <none>          20h

...

I don't know if this is a bug, or if this is caused by me trying to run ICP on CentOS 7 (which is not a supported platform)...

Synology TFTP server for PXE Boot

Tom Bosmans  9 July 2018 10:54:08
Something I've been meaning to do for a while now is set up my Synology NAS as a PXE boot server.

I want to be able to easily install new operating systems on any new hardware I get, but more importantly, to easily install multiple virtual machines on my primary workstation without too much hassle.
This involves setting up the Synology as a TFTP server, supplying the correct PXE files (images and configuration), and configuring my DHCP server.

The official documentation from Synology is woefully inadequate for getting PXE up and running; it is missing a number of vital steps.
https://www.synology.com/en-us/knowledgebase/DSM/tutorial/General/How_to_implement_PXE_with_Synology_NAS

Luckily, there are other sources on the internet that fill in the gaps.

Configure TFTP on Synology


Prepare a shared folder/volume.  In my case, I have a shared volume named "shared", where I created a folder "PXEBOOT".

Go to Main Menu > Control Panel > File Services and select the TFTP tab.
Tick Enable TFTP service.
Image:Synology TFTP server for PXE Boot

Enter the folder you prepared earlier.

Now you need to add the folder structure for TFTP to be able to show a boot menu, and prepare the images.

Check out this excellent guide, which contains a link to a zip file with a configuration that includes CentOS and Ubuntu images.

https://synology.wordpress.com/2017/10/05/boot-from-any-iso-on-your-network-using-pxe/


(It actually uses this GitHub repository: https://github.com/paulmaunders/TFTP-PXE-Boot-Server )

The GitHub repository is not quite up to date, but it's easy to add newer images; I've added Ubuntu 18.04 and CentOS 7.5 (an example menu entry follows below).  It is configured to use the netinstall method (HTTP), so you do need an internet connection.

Unzip it and put it in your shared folder on your Synology, so it looks like this:
Image:Synology TFTP server for PXE Boot
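
Adding an image yourself boils down to putting the kernel and initrd under the TFTP root and adding a menu entry to pxelinux.cfg/default.  As an illustration, an Ubuntu netinstall entry could look roughly like this (the label and paths are examples; they depend on where you unpack the netboot files):

LABEL ubuntu1804
  MENU LABEL Ubuntu 18.04 x64 (netinstall)
  KERNEL images/ubuntu1804/linux
  APPEND initrd=images/ubuntu1804/initrd.gz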


Verify TFTP


I'm using Red Hat 7.5, and I wanted to quickly test TFTP.  Unfortunately, the tftp client is not part of my configured repositories, so I just downloaded a client from http://rpmfind.net .


tftp

(to) <ip address of synology>

tftp> verbose

Verbose mode on.

tftp> get pxelinux.0

getting from <ip address of synology>:pxelinux.0 to pxelinux.0 [netascii]

Received 26579 bytes in 0.2 seconds [1063767 bit/s]

tftp>



This indicates that pxelinux.0, which needs to be configured in the DHCP server, is in the root of the TFTP server and is accessible by everyone.

Configure the Ubiquiti EdgeRouter's DHCP


A very complete guide for this can be found here; you need to use the Ubiquiti CLI to do it.
https://blog.laslabs.com/2013/05/pxe-booting-with-ubiquiti-edgerouter/

I've configured the following (result of show service dhcp-server )

shared-network-name LAN2 {

    authoritative disable

    subnet 192.168.1.0/24 {

        bootfile-name /pxelinux.0

        bootfile-server <ip address of synology>

        default-router 192.168.1.0

        dns-server 192.168.1.1

        dns-server 8.8.8.8

        domain-name gwbasics.be

        ....

        subnet-parameters "filename &quot;/pxelinux.0&quot;;"

    }

}

use-dnsmasq disable



Note that you must use the &quot; syntax!

The following commands were used :

configure

edit service dhcp-server shared-network-name LAN2 subnet 192.168.1.0/24

set subnet-parameters "filename &quot;/pxelinux.0&quot;;"

set bootfile-name /pxelinux.0

set bootfile-server <ip address of synology>

commit

save

show service dhcp-server



Issuing a new "set" command does not overwrite a value; instead, it adds a new line.  You need to remove the entries that are not correct (if you have multiple lines):

show service dhcp-server

configure

edit service dhcp-server shared-network-name LAN2 subnet 192.168.1.0/24

delete subnet-parameters "filename &quot;/shared/PXEBOOT/pxelinux.0&quot;;"

commit

save

show service dhcp-server



If you have multiple or wrong lines, you will see PXE errors on the boot screen.

VMware Workstation


Lastly, I need to configure VMware Workstation.
2 important things here:
- I added a bridged network adapter, to obtain a DHCP address from my home network.  This adapter will receive the PXE boot instructions.
- I increased the memory size from 1024 MB to 2048 MB, because the CentOS 7.5 installer complained about "no space left on device" on /tmp during installation (which, for the installer, effectively means in memory).
Image:Synology TFTP server for PXE Boot

When booting, I now get the configured menu options from my PXE boot server:
Image:Synology TFTP server for PXE Boot

Then step through the installation options as you would for a normal manual installation.  Of course it's also possible to prepare automated installations, but that is another topic.

Letsencrypt certificates for my own test servers

Tom Bosmans  26 June 2018 14:02:15
Yes, it's a bit over the top to use Let's Encrypt certificates for test systems, where a self-signed certificate would serve a similar purpose.  Furthermore, a Let's Encrypt certificate has a short lifetime and needs to be replaced every 3 months.

But since Let's Encrypt brought us wildcard certificates fairly recently (March 2018), there is an advantage here.  You only need this single certificate and you can use it on all your systems.  Of course, in most cases you don't want to use wildcard certificates, but for my case (non-production test systems), this is perfectly valid.
https://community.letsencrypt.org/t/acme-v2-and-wildcard-certificate-support-is-live/55579

You also must use the DNS challenge (instead of the more traditional HTTP challenge that Let's Encrypt uses for verification).  The mechanism used is similar to other verification mechanisms like DKIM for SMTP (see DKIM deployed on my mail servers).

My use case here is a test environment running ISAM - IBM Security Access Manager (https://www.ibm.com/us-en/marketplace/access-management).  Not having to trust the signer each time I access a page with a self-signed certificate is a huge plus when demoing a solution :-)

1. Prerequisites
  • You need a recent version of certbot (with support for the DNS challenge and for ACMEv2); I'm using certbot 0.24.0
  • Certbot needs to run on a system with internet access (outbound only; it needs to connect to the Let's Encrypt systems)
  • You also need a public DNS domain, because Let's Encrypt uses DNS for the verification.  The only thing that needs to be in the domain records is a TXT record, by the way.  You don't need to configure anything else.

2. DNS Preparation

I ordered a DNS domain from my preferred DNS provider (https://www.domaindiscount24.com).
I could get an EU domain for something like 3 euros for the first year.

There is nothing to configure for now; the configuration is done during the certbot run.

3. Certbot


Run certbot with the option --preferred-challenges dns, and define your domain as *.<your domain> (mine is *.tombosmans.eu).
You can also use certbot-auto, and you can do all of this in a single command line, but I used this method:


[root@system ~]# certbot certonly --manual --server https://acme-v02.api.letsencrypt.org/directory --preferred-challenges dns
Saving debug log to /var/log/letsencrypt/letsencrypt.log

Plugins selected: Authenticator manual, Installer None

Starting new HTTPS connection (1): acme-v02.api.letsencrypt.org

Please enter in your domain name(s) (comma and/or space separated)  (Enter 'c'

to cancel): *.tombosmans.eu

Obtaining a new certificate

Performing the following challenges:

dns-01 challenge for tombosmans.eu


-------------------------------------------------------------------------------

NOTE: The IP of this machine will be publicly logged as having requested this

certificate. If you're running certbot in manual mode on a machine that is not

your server, please ensure you're okay with that.


Are you OK with your IP being logged?

-------------------------------------------------------------------------------

(Y)es/(N)o: Y


-------------------------------------------------------------------------------

Please deploy a DNS TXT record under the name

_acme-challenge.tombosmans.eu with the following value:


9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw


Before continuing, verify the record is deployed.

-------------------------------------------------------------------------------

Press Enter to Continue



So now you need to go to your DNS provider and create a TXT DNS record for _acme-challenge.<your domain> - in my case, _acme-challenge.tombosmans.eu

_acme-challenge 28800 IN TXT 9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw


In the interface of my DNS provider, it looks like this: I need to create a new subdomain, named _acme-challenge.tombosmans.eu.

Image:Letsencrypt certificates for my own test servers
In the next step, I can then enter the value that certbot provided in a TXT field.

Now once you have saved your DNS entry, DO NOT continue immediately.
 

Give it at least 1 minute, so you're certain the DNS entry is available; or even better, verify that your nameserver is up to date by performing a DNS lookup, for instance using dig.

In my case, I can use this command to query the nameserver of my provider.  Do this on another system than the one where your certbot command is running, or open a new session.

dig -t txt +short @ns1.domaindiscount24.net _acme-challenge.tombosmans.eu

"9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw"



It needs to return the value of the TXT record.  As long as it doesn't, DO NOT continue in the certbot session, because it will fail and you need to start over.

But if it does return the key, continue.  
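
If you prefer to script this waiting game, here is a small sketch that polls the authoritative nameserver until the TXT record is visible.  It assumes the dnspython package (pip install dnspython); on dnspython versions older than 2.0, use resolver.query instead of resolver.resolve.

import time
import dns.resolver  # pip install dnspython

NAME = "_acme-challenge.tombosmans.eu"
EXPECTED = "9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw"

# Query the provider's authoritative nameserver directly, like the dig
# command above, so stale caches cannot fool us.
ns_ip = dns.resolver.resolve("ns1.domaindiscount24.net", "A")[0].to_text()
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [ns_ip]

while True:
    try:
        answers = resolver.resolve(NAME, "TXT")
        values = [b"".join(rr.strings).decode() for rr in answers]
        if EXPECTED in values:
            print("TXT record is live - safe to continue in certbot")
            break
        print("TXT record found, but value differs: %s" % values)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print("TXT record not visible yet ...")
    time.sleep(30)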



Waiting for verification...

Cleaning up challenges


IMPORTANT NOTES:

- Congratulations! Your certificate and chain have been saved at:

/etc/letsencrypt/live/tombosmans.eu/fullchain.pem

Your key file has been saved at:

/etc/letsencrypt/live/tombosmans.eu/privkey.pem

Your cert will expire on 2018-09-24. To obtain a new or tweaked

version of this certificate in the future, simply run certbot

again. To non-interactively renew *all* of your certificates, run

"certbot renew"

- If you like Certbot, please consider supporting our work by:


Donating to ISRG / Let's Encrypt:  
https://letsencrypt.org/donate
Donating to EFF:                    
https://eff.org/donate-le



Now the chain and certificate files are in the standard Let's Encrypt location (/etc/letsencrypt/live/<your domain>/).
Since this is a wildcard certificate, you likely want to copy it elsewhere and distribute it across your systems.

4. Let's Encrypt keys and ISAM


IBM Security Access Manager expects PKCS#12 certificates, so we first need to use openssl to convert the Let's Encrypt certificates to a .p12 file.
I'm using ISAM 9.0.5, as an OVA.


openssl pkcs12 -export -out tombosmans.eu.p12 \

-inkey /etc/letsencrypt/live/tombosmans.eu/privkey.pem \

-in /etc/letsencrypt/live/tombosmans.eu/cert.pem \

-certfile /etc/letsencrypt/live/tombosmans.eu/chain.pem


Enter Export Password:

Verifying - Enter Export Password:



Use a strong password to protect your key!
( https://community.letsencrypt.org/t/combining-key-and-certificate-into-a-pkcs12-file/21113/3 )
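
Before uploading the .p12 anywhere, you can sanity-check the result by listing its contents (openssl will prompt for the export password):

openssl pkcs12 -info -in tombosmans.eu.p12 -noout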

Now get the certificate to a system from which you can upload it to ISAM.

In the LMI, I want it in 2 places:
- the management certificate
- the default certificate for the reverse proxies


hosts file on ISAM

Add the IP addresses for the interfaces you want to use to the hosts file on ISAM.  We could use DNS as well (since we have the public DNS domain), but since this is internal, I am not going to do that; I use simple hosts files instead.
Image:Letsencrypt certificates for my own test servers

Also, I use the following hosts file on my local machine to access my environment:

192.168.42.42   isam.tombosmans.eu
192.168.42.100  frontend.tombosmans.eu


management certificate

Go to Manage System Settings/System Settings/Management SSL Certificate

Image:Letsencrypt certificates for my own test servers
The LMI will be restarted after this.

pdsrv keydb


Edit the pdsrv keydb, go to Personal certificates and select "Import".
Image:Letsencrypt certificates for my own test servers
Then select the "Let's Encrypt" certificate, click "Edit" and set it as the default certificate.

Image:Letsencrypt certificates for my own test servers


The DST Root CA is missing from the IBM-provided keydbs (this is actually a bug in my opinion - see this link:
https://community.letsencrypt.org/t/dst-root-missing-from-p12/48648/4 ), so you basically need to add it to the Signer Certificates in all key databases.  You can download it from the website in that link, or you can export it from any modern browser (for example Firefox, below).

Image:Letsencrypt certificates for my own test servers

Note that the reverse proxy can't handle the missing root CA, while the LMI does not seem to require it.  In any case, any server you want to protect using ISAM with TLS/SSL and a Let's Encrypt certificate will also require you to add this DST Root CA X3.


You need to restart the reverse proxies after saving and deploying this.

5. End result


I can now access the LMI on this URL:

Image:Letsencrypt certificates for my own test servers

... and the reverse proxy (using Chrome this time round)
Image:Letsencrypt certificates for my own test servers

Everything is green, so everything is OK (at least, OK enough for my test environment).

Additional information


ISAM automation

To automate all these manual actions, I really should use automation tooling like Ansible.
Fortunately, there is a publicly available repository with Ansible roles and playbooks for ISAM.  It would be relatively straightforward to automate the management of the certificates with it (generate a new one, use openssl to convert it, upload it to ISAM for the reverse proxies and for the management interface).
https://github.com/IBM-Security

My zonefile, for your information


This zonefile is obviously pretty specific to my DNS provider and to my situation, but it may still serve as an example of what you need to make this work.
It's the _acme-challenge entry that does the trick.


$ORIGIN tombosmans.eu.

@ 28800 IN SOA ns1.domaindiscount24.net. tech.key-systems.net. 2018062619 10800 3600 604800 28800

@ 28800 IN NS ns1.domaindiscount24.net.

@ 28800 IN NS ns2.domaindiscount24.net.

@ 28800 IN NS ns3.domaindiscount24.net.

_acme-challenge 28800 IN TXT 9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw


WebSphere liberty docker on Synology NAS

Tom Bosmans  21 June 2018 16:44:55
I've got a Synology DS415+ at home, and have Docker running on it.  I needed a quick way to install a WebSphere Liberty server, and since the Synology NAS supports Docker containers, why not...  It's very easy to get up and running; you just need a few extra configuration settings.

Please note that I'm not sure this will work on every Synology, though.  I think you need a Synology with an Intel CPU (mine is an Intel Atom C2538)...

Preparation

Install the Docker package on your Synology NAS using the Package Center.

Image:WebSphere liberty docker on Synology NAS - updated



Start Docker once it's installed.  In the Registry, you can search for "liberty".  Use the "Download" button to download the image.

The Synology uses Docker Hub, and this is the version you want to download:
https://hub.docker.com/_/websphere-liberty/

There's more information there, for instance how to handle your certificates and key databases .  


Image:WebSphere liberty docker on Synology NAS - updated

Once the download of the image is complete, select Liberty and click "Launch".  This creates an actual container from the image.


Image:WebSphere liberty docker on Synology NAS - updated

You can then configure the container.  In particular, the volumes and the ports need to be configured.
Since the container filesystem itself is not persistent, you need volumes to preserve data between restarts.

Image:WebSphere liberty docker on Synology NAS - updated

These 3 volumes are needed, for the following paths:

/opt/ibm/wlp/output/ (or, more precisely, the path that's in the WLP_OUTPUT_DIR variable)
/logs (or, more precisely, the path in the LOG_DIR variable)
/config


The documentation states you just need /logs and /config, but I found that the first path is also necessary.

You can also choose to do this later, by using the "Edit" button.
This is my volume configuration:

Image:WebSphere liberty docker on Synology NAS - updated

The ports, by default, are set to Automatic.  This means that they change after every restart, and that's not very handy.
I chose ports 19080 and 19443 for the HTTP and HTTPS ports respectively.

Image:WebSphere liberty docker on Synology NAS - updated

The environment variables can be used to give the Liberty container the correct startup options.  A very useful one is the Java options used to start the Liberty JVM.
By default, the JVM would be started in UTC time, and there's no "global" way to configure your Docker containers to start in the correct timezone by default.

So add -Duser.timezone=Europe/Brussels  (or your timezone specification of choice) to the IBM_JAVA_OPTIONS environment variable :


IBM_JAVA_OPTIONS    -XX:+UseContainerSupport -Duser.timezone=Europe/Brussels


Image:WebSphere liberty docker on Synology NAS - updated
This concludes the configuration of the Docker container.

Configure Liberty server (server.xml)

To get a meaningful Liberty server, you probably want to deploy your own configuration.
Using File Station on the Synology, I have the following folder structure (which contains the volume configuration of the container):

Image:WebSphere liberty docker on Synology NAS - updated

The magic happens in the config directory.  As with a "normal" Liberty installation, you have a server.xml file here (empty by default).
There's also an "apps" directory, which contains your EAR files.

In my case, I've used a simple configuration that you can download here : server.xml

Image:WebSphere liberty docker on Synology NAS - updated

This configuration contains a basic user registry and an LTPA configuration, and has 2 applications installed: the adminCenter and the defaultApplication.ear (Snoop).

The LTPA keys are generated automatically when you first start the container.  Note that for LTPA SSO to work, you must configure your Liberty server to run in the correct timezone (see the previous topic)!
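
To give an idea of what's in there, this is a minimal sketch of such a server.xml (not the exact file linked above; the admin user and password are simply the examples used in this post):

<server description="Liberty on Synology">
    <featureManager>
        <feature>servlet-3.1</feature>
        <feature>appSecurity-2.0</feature>
        <feature>adminCenter-1.0</feature>
    </featureManager>

    <!-- basic user registry with a single administrative user -->
    <basicRegistry id="basic" realm="BasicRealm">
        <user name="admin" password="Passw0rd"/>
    </basicRegistry>
    <administrator-role>
        <user>admin</user>
    </administrator-role>

    <!-- the Synology maps these container ports to 19080/19443 -->
    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080" httpsPort="9443"/>
</server>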

There are some specific steps to take before everything will work:

SSL configuration

When you start the Docker image, a default key configuration is generated.  You can of course use your own key database, but I chose the quick and easy solution.

Open the keystore.xml file that's in config/configDropins/defaults.  Copy the generated defaultKeyStore password into the keyStore element in your own server.xml:

<keyStore id="defaultKeyStore" password="<replace with your keystore.xml password>" />



AdminCenter

There are multiple ways to install the adminCenter; this is the method I followed:

Click on "Details", with the websphere-liberty container selected.
Switch to the "Terminal" tab.
Click on "Create" to create a new Bash terminal session.

Image:WebSphere liberty docker on Synology NAS - updated


Use the following commands to install the adminCenter :


root@websphere-liberty1:/# cd /opt/ibm/wlp/bin                                                                          
root@websphere-liberty1:/opt/ibm/wlp/bin# ./installUtility install adminCenter-1.0      



After restarting the Docker container, the adminCenter is available on the following URL: https://<synology ip>:19443/adminCenter.
Image:WebSphere liberty docker on Synology NAS - updated

You need to log in using the admin user (if using the server.xml that's provided here, the password is Passw0rd).

Image:WebSphere liberty docker on Synology NAS - updated

More information on the adminCenter application can be found here :
https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_ui_setup.html

Default Application

WebSphere Application Server comes out of the box with a DefaultApplication (aka Snoop), which is handy to see if your server is working correctly.  Unfortunately, there is no DefaultApplication.ear that comes with Liberty.
This version of DefaultApplication.ear works with Liberty.

So download this file and upload it to your Synology, in the "apps" directory.  Your Liberty server will install it automatically (or restart the Docker image, so the server.xml also becomes active).

The Snoop servlet is then available on https://<synology ip>:19443/snoop.  You do need to log in (if you use the server.xml that's provided here).

Image:WebSphere liberty docker on Synology NAS - updated


Log files


The log that's shown on the "Details" page is not very useful.
Image:WebSphere liberty docker on Synology NAS - updated
Fortunately, you can use File Station on the Synology to access the "log" directory, where the standard messages.log lives (along with the other log files, like the FFDC logs, if you're interested in those).

How to convert Notes names to email addresses in the Notes client.

Tom Bosmans  12 June 2018 10:55:51
A golden oldie...  I recently had to generate a list of email addresses to use in a cloud application, based on a mail group in my personal address book.

The names in that group are in the Notes format, obviously, and I need the email addresses.

Now I didn't have my Designer ready, nor did I feel like accessing the address books directly.  And since this actually was a question from a colleague of mine, with no Notes knowledge at all, I needed to find something that works in a regular Notes client.

So I remembered that Notes includes a built-in @Formula tester.  If you put your @Formula code in any (regular) field and press SHIFT-F9, the content of the field is interpreted as Formula language and executed.

The solution :

Create a colon ( : ) separated list of @NameLookup formulas for all the Notes addresses you have.  If these contain the @<domain> part (Notes-specific routing information), strip that off.  You can easily do that in a spreadsheet, or in a text editor.

I end up with a list of formulas looking like this:

@NameLookup([Noupdate];"Tom Bosmans/GWBASICS";"InternetAddress"):
@NameLookup([Noupdate];"Joske Vermeulen/GWBASICS";"InternetAddress")


(The limit here is the maximum size of a text field in Notes, which is about 900 entries.  I had to process about 500, so not a problem.)
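
If you have the names in a file or a spreadsheet export, a few lines of script generate the formula list for you.  A sketch in Python (the names are the examples from above):

# Build a colon-separated list of @NameLookup formulas from Notes names,
# stripping the @<domain> routing part if present.
names = ["Tom Bosmans/GWBASICS@GWBASICS", "Joske Vermeulen/GWBASICS"]
formulas = ":".join(
    '@NameLookup([Noupdate];"%s";"InternetAddress")' % name.split("@")[0]
    for name in names)
print(formulas)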

Then, I open a new mail and paste the formulas into the "Subject" field.
Image:How to convert Notes names to email addresses in the Notes client.

Select the Subject field, press SHIFT-F9, and the formulas will be executed.  The result is a list of email addresses.




VMware Workstation command line

Tom Bosmans  18 May 2018 10:11:38
Get a list of running virtual machines
vmrun list


Use that output to get the IP address of that guest.
vmrun getGuestIPAddress /run/media/tbosmans/fast/Connections_6_ICEC/Connections_6.vmx


(Note that this particular call is pretty buggy, and does not return the correct IP address if you have multiple interfaces: https://github.com/vmware/open-vm-tools/issues/93 ...  still, it can be pretty useful.)

You can run any command in the guest, as long as you authenticate properly (-gu <user> -gp <password>).

So for instance, this command lists all running processes, and you can use the output to actually do something with these processes in a next step (e.g. kill them):

vmrun -gu root -gp <password> listProcessesInGuest /run/media/tbosmans/fast/Connections_6_ICEC/Connections_6.vmx


You can also run any other command in the guest using that mechanism.
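
For example, running a program inside the guest could look like this (runProgramInGuest is the vmrun subcommand for that; the guest command is just an example):

vmrun -gu root -gp <password> runProgramInGuest /run/media/tbosmans/fast/Connections_6_ICEC/Connections_6.vmx /usr/bin/uptime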

IBM Connections Communities Replay events DB2 queries

Tom Bosmans  23 January 2018 17:33:25
Working on a recent problem where events are not processed, I was looking at the wsadmin commands that provide information.
The Jython code supplied, for instance CommunitiesQEventService.viewQueuedEventsByRemoteAppDefId("Blog", None, 100), is pretty useless in situations where you have hundreds of thousands of events in the queue.  The Jython code in the wiki is also plain wrong (but that's a different story).

https://www.ibm.com/support/knowledgecenter/en/SSYGQH_6.0.0/admin/admin/r_admin_communities_admin_props.html#r_admin_communities_admin_props__CommunitiesQEventService

So I turned to the DB2 database, to examine the LC_EVENT_REPLAY table.  Unfortunately, the interesting detailed information is stored as XML in a CLOB field called EVENT.
It took me quite a bit of time to figure out how to get the information out of that field in an SQL query.

In fact, the most puzzling part was the notation needed for the XML root element and the node elements: they all need to use the namespace.  Using a wildcard for the namespace is sufficient in this case.
So this query gives you some detailed information about the events in the replay table:

SELECT C1.MANAGEDAPPDEFID, C1.EVENTTYPE, X.*
FROM (SELECT * FROM SNCOMM.LC_EVENT_REPLAY FETCH FIRST 10 ROWS ONLY) AS C1,
XMLTABLE('$tev/*:entry'
    PASSING XMLCAST(XMLPARSE(DOCUMENT C1.EVENT) AS XML) AS "tev"
    COLUMNS
        "title"       VARCHAR(512) PATH '*:title/text()',
        "author"      VARCHAR(128) PATH '*:author/*:email/text()',
        "communityid" VARCHAR(128) PATH '*:container/@id',
        "community"   VARCHAR(128) PATH '*:container/@name'
) AS X
ORDER BY X."communityid";

Of course, you can extract any information you like from the EVENT XML, but using this query as a start should help you immensely :-)
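
And when the queue really contains hundreds of thousands of events, a simple aggregate (no XML parsing needed) gives a quick overview of what is stuck.  A sketch:

-- count the queued replay events per event type
SELECT EVENTTYPE, COUNT(*) AS QUEUED
FROM SNCOMM.LC_EVENT_REPLAY
GROUP BY EVENTTYPE
ORDER BY QUEUED DESC;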

Custom dynamic DNS on Ubiquiti router with Domaindiscount24.com

Tom Bosmans  5 January 2018 16:57:04

Ubiquiti EdgeRouter X


The Ubiquiti EdgeRouter X is a very cheap but very powerful router with a lot of options.  It's based on EdgeOS, which is a Linux-based distro.
That basically allows you to do "anything" you want.

I got it from Alternate (https://www.alternate.be/Ubiquiti/EdgeRouter-X/html/product/1289652), for around 54 euros.

Dynamic DNS


I would like to finally set up a VPN solution, so I can safely access my systems from wherever I am.  My EdgeRouter X has these capabilities, so I was looking for a way to set it up.

The first thing to do is look for a dynamic DNS provider.  In the past (long, looong ago) I used https://dyndns.org, but they don't offer dynamic DNS services anymore as far as I can tell.
I looked at several free dynamic DNS providers, but couldn't figure them out (it's probably me).

So I went looking at what my 'real' DNS provider has to offer (https://www.domaindiscount24.com).  It turns out they recently (27 December 2017) started offering a dynamic DNS service.

Dynamic DNS on domaindiscount24.com


Really simple to do: the UI has a new section 'dynamic dns', where you add a new subdomain.  That subdomain is then listed among your regular subdomains.
I did seem to have problems when using longer passwords, but that may have been a different problem...

More information : https://www.domaindiscount24.com/faq/en/dynamic-dns



Dynamic DNS configuration on Edgerouter


DDClient


The EdgeRouter uses a pretty standard ddclient package.

Web UI


Through the web UI, the options are limited.  Specifically, the protocol is limited to a subset of what ddclient has to offer, even though the Service field says "custom"...


Image:Custom dynamic dns on Ubiquity router with Domaindiscount24.com
Bottom line: it doesn't work, and it is not as "custom" as I would like.

Console



The EdgeRouter allows SSH access; I have configured it to use SSH keys.

There is a series of commands to configure the dynamic DNS feature (like in the web UI), but although that offers a few more options, it's still not sufficient.

Custom ddclient


Luckily, ddclient is just a simple Perl script, so it's easy to modify.  The problem with the code is that it contains hardcoded elements (like the /update.php part of the update URL).
There are 3 sections to change:
- variables
- examples
- update code


I copied the code from the duckdns sections and adapted it.

Open ddclient in a text editor, as root (sudo su -).  The ddclient file is here:

/usr/sbin/ddclient


Add the keysystems definitions at the end of the %services section (after woima, in my case):

},
   'woima' => {
       'updateable' => undef,
       'update'     => \&nic_woima_update,
       'examples'   => \&nic_woima_examples,
       'variables'  => merge(
           $variables{'woima-common-defaults'},
           $variables{'woima-service-common-defaults'},
       ),
   },
   'keysystems' => {
       'updateable' => undef,
       'update'     => \&nic_keysystems_update,
       'examples'   => \&nic_keysystems_examples,
       'variables'  => merge(
           $variables{'keysystems-common-defaults'},
           $variables{'service-common-defaults'},
       ),
   },



Add the variables to the %variables object  (somewhere at the end is fine):

'keysystems-common-defaults' => {
    'server' => setv(T_FQDNP, 1, 0, 1, 'dynamicdns.key-systems.net', undef),
    'login'  => setv(T_LOGIN, 0, 0, 0, 'unused', undef),
},




Copy the example code and the update code to the end of the file.


######################################################################
## nic_keysystems_examples
######################################################################
sub nic_keysystems_examples {
   return <<EoEXAMPLE;

o 'keysystems'

The 'keysystems' protocol is used by the non-free
dynamic DNS service offered by www.domaindiscount24.com and www.rrpproxy.net/.
Check https://www.domaindiscount24.com/faq/en/dynamic-dns for API details.

Configuration variables applicable to the 'keysystems' protocol are:
 protocol=keysystems               ##
 server=www.fqdn.of.service   ## defaults to dynamicdns.key-systems.net
 password=service-password    ## password (token) registered with the service
 non-fully.qualified.host         ## the host registered with the service.

Example ${program}.conf file entries:
 ## single host update
 protocol=keysystems,                                       \\
 password=prettypassword                    \\
 myhost

EoEXAMPLE
}

######################################################################
## nic_keysystems_update
## by Tom Bosmans
## response contains "code 200" on successful completion
######################################################################
sub nic_keysystems_update {
   debug("\nnic_keysystems_update -------------------");

   ## update each configured host
   ## should improve to update in one pass
   foreach my $h (@_) {
       my $ip = delete $config{$h}{'wantip'};
       info("KEYSYSTEMS setting IP address to %s for %s", $ip, $h);
       verbose("UPDATE:","updating %s", $h);

       # Set the URL that we're going to use to update
       my $url;
       $url  = "http://$config{$h}{'server'}/update.php";
       $url .= "?hostname=";
       $url .= $h;
       $url .= "&password=";
       $url .= $config{$h}{'password'};
       $url .= "&ip=";
       $url .= $ip;
       
       # Try to get URL
       my $reply = geturl(opt('proxy'), $url);

       # No response, declare as failed
       if (!defined($reply) || !$reply) {
           failed("KEYSYSTEMS updating %s: Could not connect to %s.", $h, $config{$h}{'server'});
           last;
       }
       last if !header_ok($h, $reply);

       if ($reply =~ /code = 200/)
       {
               $config{$h}{'ip'}     = $ip;
               $config{$h}{'mtime'}  = $now;
               $config{$h}{'status'} = 'good';
               success("updating %s: good: IP address set to %s", $h, $ip);
        }
        else
        {
               $config{$h}{'status'} = 'failed';
               failed("updating %s: Server said: '$reply'", $h);
        }
   }
}



Save the file and restart the ddclient service.

sudo service ddclient restart


This just checks that the code is fine.  Now the configuration.

We need 2 files:

/etc/ddclient.conf
/etc/ddclient/ddclient_eth0.conf

Note that you can generate the second file by using the web UI of the EdgeRouter, or the console commands.  The values you enter there don't matter; you will replace everything anyway.
You need to edit these files as root (sudo su -).

/etc/ddclient.conf :


# Configuration file for ddclient generated by debconf
#
# /etc/ddclient.conf

protocol=keysystems,
server=dynamicdns.key-systems.net,
password='yourpassword'


/etc/ddclient/ddclient_eth0.conf

The important variables here are the password, and the last line: the hostname you defined in the Domaindiscount24 web interface.


#
# autogenerated by vyatta-dynamic-dns.pl on Fri Jan  5 12:58:19 UTC 2018
#
daemon=5m
syslog=yes
ssl=yes
pid=/var/run/ddclient/ddclient_eth0.pid
cache=/var/cache/ddclient/ddclient_eth0.cache
use=if, if=eth0

protocol=keysystems,
server=dynamicdns.key-systems.net,
password='yourpassword'
your.hostname.tld


Save both files.

You can now force an update of the dynamic DNS by issuing an EdgeOS command:

update dns dynamic interface eth0

You can put a tail on the messages log to see the results:


tail -f /var/log/messages


The result should be something like this :

Jan  5 15:20:06 ubnt ddclient[10616]: SUCCESS:  updating yourhostname.domain.com: good: IP address set to 1.2.3.4
Jan  5 16:39:02 ubnt ddclient[13381]: SUCCESS:  updating yourhostname.domain.com: good: IP address set to 5.6.7.8


Of course, instead of editing the files directly on your router, you could copy them using scp... and edit them on your own desktop machine.

Supportability


Alas, no supportability: EdgeOS updates will likely wipe the changes away.
Also, using the web UI or console to update the dynamic DNS settings will wreak havoc on this configuration.  I am working on getting the updates into SourceForge (https://sourceforge.net/p/ddclient/git/merge-requests/), but don't hold your breath for these changes to make it all the way down to Ubiquiti.
So the solution is not ideal, but it works for now...

Trying out Domino data services with Chart.js

Tom Bosmans  4 December 2017 11:06:48
Domino Data Access Services have been around for a few years now, but I never actually used them myself.

https://www-10.lotus.com/ldd/ddwiki.nsf/xpAPIViewer.xsp?lookupName=IBM+Domino+Access+Services+9.0.1#action=openDocument&content=catcontent&ct=api

Since I recently started to dabble in Ethereum mining, I was looking for a place to store my data, draw some graphs and the like.  I first tried LibreOffice Calc, but I couldn't find an easy way to automatically update it with data from a REST API.
So I turned to good old Domino, the grandpa of NoSQL databases (before it was cool).

The solution I came up with retrieves multiple JSON streams from various sources and combines them into a single JSON document, which is then uploaded into a Domino database (using Python).
To look at the data, I created a literal "SPA" (single page application): I use a Page in Domino to run JavaScript code that retrieves the data, again in JSON format, and turns it into a nice graph (using Chart.js).
So I don't actually use any Domino code to display anything; Domino is simply used to store and manage the data.

This article consists of 2 parts :


  • loading data into Domino using Python and REST services.
  • displaying data from Domino using the Domino Data Access Services and an open-source JavaScript charting library ( http://www.chartjs.org/ )


Python to Domino


Domino preparation


To use the Domino Data Access Services in a database, you need to enable them:

  • On the server
  • In the Database properties (Allow Domino Data Service)
  • In the View properties


Server configuration


Open the internet site document for the server/site you are interested in.
On the Configuration tab, scroll down to the "Domino Access Services".  Enable "Data" here.

Note that you may want to verify the enabled methods as well - enable PUT if you plan to use the services that use PUT requests.
And if you're not using Internet Site documents yet, well, then I can't help you :-)

After modifying the Internet Site document, you need to restart the HTTP task on your Domino server.
Image:Trying out Domino data services with Chart.js

Database properties


In the Advanced properties, select "Views and Documents" for the "Allow Domino Data Service" option.
Image:Trying out Domino data services with Chart.js

View properties


Open the View properties, and on the second-to-last tab, enable "Allow Domino Data Service operations".
Image:Trying out Domino data services with Chart.js

There is no equivalent option for Forms.

Python code


Instead of figuring out how to load JSON data in a Notes agent or XPages (which is no doubt possible, but seems like a lot of work), I chose to use a simple Python script that I kick off using a cron job.  I run this code collocated with the Domino server, but that is not necessary.  Because the POST requires authentication and the URL uses TLS, this could just as well run anywhere else.
Any other server-side code would do the same thing, so Node.js or Perl or ... are all valid options.

There are 2 JSON objects being retrieved:

resultseth = requests.get('http://dwarfpool.com/eth/api?wallet={wallet}&email={email address}')
data = resultseth.json()

and

currentprice = requests.get('https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=BTC,USD,EUR')
pricedata = currentprice.json()


The first JSON that's returned contains nested data (the workers object):

{
"autopayout_from": "1.0",
"earning_24_hours": "0.1123",
"error": false,
"immature_earning": 0.000890178102,
"last_payment_amount": "1.0",
"last_payment_date": "Thu, 16 Nov 2017 16:24:01 GMT",
"last_share_date": "Mon, 04 Dec 2017 12:41:33 GMT",
"payout_daily": true,
"payout_request": false,
"total_hashrate": 30,
"total_hashrate_calculated": 31,
"transferring_to_balance": 0.0155,
"wallet": "0x5ac81ec3457a71dda2af0e15688d04da9a98df3c",
"wallet_balance": "5411",
"workers": {
"worker1": {
"alive": true,
"hashrate": 15,
"hashrate_below_threshold": false,
"hashrate_calculated": 16,
"last_submit": "Mon, 04 Dec 2017 12:38:42 GMT",
"second_since_submit": 587,
"worker": "worker1"
},

"worker2": {
"alive": true,
"hashrate": 15,
"hashrate_below_threshold": false,
"hashrate_calculated": 16,
"last_submit": "Mon, 04 Dec 2017 11:38:42 GMT",
"second_since_submit": 111,
"worker": "worker2"
}
}
}


It turns out that Domino does not like that very much - or rather, cannot handle nested JSON - but there is a simple solution: flatten the JSON.

This uses the "flatten_json" package in Python, so it is easy to do.

In the sample above, it would translate


{ "workers":
  { "worker1":
    { "worker": "worker1" }
  }
}



into


{ "workers_worker1_worker": "worker1" }


(Information about this particular API is here: http://dwarfpool.com/api/ )

flatten_json can be installed using pip:

pip install flatten_json


From a public API, I can get the current price of ETH expressed in EUR, dollars and Bitcoin.

In Python, I now have 2 dictionary objects with the JSON data (key-value pairs).
I combine them into a single one, by adding the data of the 2nd dictionary to the first:

for lines in pricedata:
   data[lines] = pricedata[lines]


The nice thing about these Python dictionaries is that they allow you to dynamically edit the JSON before submitting it again.  I could remove the data I don't want, for instance.
In this case, I need to do something about the boolean values returned by the Dwarfpool API, because the Domino Data Access Services do not like them!

for lines in data:
   print lines,data[lines]        
   if data[lines] ==  True:
           data[lines] = "True"
   if data[lines] == False:
           data[lines] = "False"


The next step is to post the JSON document to Domino.
It's very straightforward: the URL used will create a new Notes document, based on the form named "Data".  ( https://www-10.lotus.com/ldd/ddwiki.nsf/xpAPIViewer.xsp?lookupName=IBM+Domino+Access+Services+9.0.1#action=openDocument&res_title=Document_Collection_POST_dds10&content=apicontent )

The Domino form needs to exist, of course, but the fields don't actually need to be present on it.


url = 'https://www.gwbasics.be/dev/dataservices.nsf/api/data/documents?form=Data'


There are some headers to set; in particular, "Content-Type" must be set to "application/json".

To authenticate, I use a Basic Authentication header.  In this case, the user I authenticate with only has Depositor access to the database (which is the first time in 20 years of Domino experience that I see the point of having this role in an ACL :-) ).

The service responds with HTTP code 201 if everything went correctly.  This is of course something you can work with (if the response code does not match 201, do something to notify the administrator, for instance).

The full script:


# retrieves dwarfpool data for my wallet
# retrieves current price ETH
# merges the 2 in a flattened JSON
# uploads the JSON into a Domino database using the domino rest api
import requests
import json
from flatten_json import flatten

resultseth = requests.get('http://dwarfpool.com/eth/api?wallet=<wallet>&email=<email address>')
data = resultseth.json()
print "-----------------"

# retrieve eth price
currentprice = requests.get('https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=BTC,USD,EUR')
pricedata = currentprice.json()

print "------------------"
data = flatten(data)

# merge json data
for lines in pricedata:
   data[lines] = pricedata[lines]

for lines in data:
   print lines,data[lines]        
   if data[lines] ==  True:
           data[lines] = "True"
   if data[lines] == False:
           data[lines] = "False"

url = 'https://www.gwbasics.be/dev/dataservices.nsf/api/data/documents?form=Data'
myheaders = {'Content-Type': 'application/json'}
authentication = ("<Depositor userid>", "<password>")
response = requests.post(url, data=json.dumps(data), headers=myheaders, auth=authentication)
print response.status_code



Lessons learned




  • The Domino DAS are fast and easy to use from Python.
  • The Domino Data Access Services POST requests do not handle nested JSON, so you first need to massage your JSON into a flat format.
  • The Domino DAS are pretty picky about types - they do not support boolean values (true/false).
  • Finally, I have seen a good use of the Depositor role in action!


Chart.js and Domino


Now the data is in Domino, and we can start thinking about displaying it.

The Single Page Application


I created a Page in Domino, and put all HTML and JavaScript on that page as pass-thru HTML.

Having the code in Domino has the advantage that the Domino security model is used, so I need to authenticate first to be able to use the SPA.
The same code could live anywhere else (e.g. as an HTML page on any web server), but then I'd have to worry about authenticating the Ajax calls that retrieve the data.
I set the Page to be the "Homepage" of the database.

I use several JavaScript libraries: jQuery and Chart.js (note that the $.format.date and $.format.prettyDate calls further down come from an additional date-formatting plugin, such as jquery-dateFormat).

For Chart.js, there are several ways to include the code; I chose to use a Content Delivery Network ( http://www.chartjs.org/docs/latest/getting-started/installation.html ):

<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.7.1/Chart.bundle.js" integrity="sha256-vyehT44mCOPZg7SbqfOZ0HNYXjPKgBCaqxBkW3lh6bg=" crossorigin="anonymous"></script>


For jQuery, I learned that the "slim" version does not include the Ajax functions, so use the minified or full version.

Chart.js



Chart.js is a simple charting engine that is easy to use and apparently also very commonly used.
I did have problems getting it to work correctly with my Domino data, but that turned out to be related to Domino, not to Chart.js.

The samples that are out there for Chart.js generally do not include dynamic data, so here's how to load dynamic data into Chart.js from Domino.

Initialize


What worked best for me is to initialize the chart in the $(document).ready function.  Without jQuery, you can do the same with window.onload.

The chart is stored in a global variable, myChart, so it is accessible from everywhere.

The trick here is to initialize the chart's data and labels as empty arrays.  The arrays will be loaded with data in the next step (the title is also dynamic, as you may notice).

In this sample, I have 2 datasets, and only at the end of this function do I trigger the first load of the data (updateChartData):


<script language="JavaScript" type="text/javascript">
var pageNumber = 0;
var pageSize = 24;
var myChart = {};
// prepare the chart with an empty array for the data within the datasets
// 2 datasets, 1 for EUR, 1 for ETH
$(document).ready(function() {
    // the remove data button needs to be disabled when we start
    document.getElementById('removeData').disabled = true;
    var ctx = document.getElementById("canvas").getContext("2d");
    myChart = new Chart(ctx, {
        type: 'line',
        data: {
            labels: [],
            datasets: [
                {
                    label: "EURO",
                    data: [],
                    borderColor: '#ff6384',
                    yAxisID: "y-axis-eur"
                },
                {
                    label: "ETH",
                    data: [],
                    borderColor: '#36a2eb',
                    yAxisID: "y-axis-eth"
                }
            ]
        },
        options: {
            responsive: true,
            animation: {
                easing: 'easeInOutCubic',
                duration: 200
            },
            tooltips: {
                mode: 'index',
                intersect: false
            },
            hover: {
                mode: 'nearest',
                intersect: true
            },
            scales: {
                xAxes: [{
                    display: true,
                    scaleLabel: {
                        display: true,
                        labelString: 'History'
                    }
                }],
                yAxes: [{
                    type: "linear",
                    display: true,
                    position: "left",
                    id: "y-axis-eth",
                    gridLines: {
                        // only draw the grid lines for one axis
                        drawOnChartArea: false
                    }
                }, {
                    type: "linear",
                    display: true,
                    position: "right",
                    id: "y-axis-eur"
                }]
            }
        }
    });
    updateChartData(pageSize, pageNumber);
});


Load data


The getJSON call (jQuery) connects to the Domino view and passes 3 parameters:
- ps (page size) - set to 24 to retrieve the last 24 documents (a document is generated every hour by the Python cron job)
- page (page number) - sets the paging - initially set to 0
- systemcolumns=0 - avoids Domino-specific data being returned (data that we won't use anyway in this scenario)

The JSON that is retrieved from the Domino view is now loaded into an array of objects, that we can loop through.

The chart data is directly accessible:
Labels: myChart.data.labels
Dataset 1: myChart.data.datasets[0].data
Dataset 2: myChart.data.datasets[1].data

The last call, myChart.update(), redraws the chart.


var updateChartData = function(ps, pn) {
    $.ajaxSetup({
        async: false,
        type: "GET"
    });
    myChart.options.title = {
        display: true,
        text: 'Last 24 hour performance - ' + $.format.date(Date.now(), "d MMM yyyy HH:mm")
    };
    $.getJSON("/dev/dataservices.nsf/api/data/collections/name/GraphData?systemcolumns=0&ps=" + ps + "&page=" + pn, function(data) {
        console.log(" Loading page " + pn + " with pagesize " + ps + " returned " + data.length + " entries");
        for (var i = 0; i < data.length; i++) {
            //console.log(" index: " + i + "  EUR : " + data[i].TOTAL_VALUE_IN_EUR);
            myChart.data.labels.unshift($.format.prettyDate(data[i].CREATED));
            myChart.data.datasets[0].data.unshift(data[i].TOTAL_VALUE_IN_EUR);
            myChart.data.datasets[1].data.unshift(data[i].TOTAL_ETH);
        }
        // shift would delete the first element of the arrays, not necessary in this case
        myChart.update();
    });
};


This is the end result :
Image:Trying out Domino data services with Chart.js

Actions


To code the buttons, I used an EventListener (copied from the Chart.js samples: http://www.chartjs.org/samples/latest/charts/line/basic.html ).
However, they did not work as expected initially.

On every click, the whole page reloaded - this is not what you want in a single page application!

To counter that, I added the "e" parameter to the function to get hold of the event, and then called preventDefault to avoid reloading the page.


$("#addData").click(function(e) {
    // --------- prevent the page from reloading ------
    e.preventDefault();

    pageNumber++;
    console.log(" Retrieving page : " + pageNumber);
    updateChartData(pageSize, pageNumber);
    document.getElementById('removeData').disabled = false;
});


Without jQuery, it would look like this (it needs some additional code for cross-browser compatibility: the first line is there because Firefox does not know window.event, which is actually an ugly IE hack).


document.getElementById('addData').addEventListener('click', function(e) {
    if (!e) { e = window.event; }
    e.preventDefault();

    pageNumber++;
    console.log(" Retrieving page : " + pageNumber);
    updateChartData(pageSize, pageNumber);
    document.getElementById('removeData').disabled = false;
});


Only after I made that change did I realize that this behaviour was in fact caused by Domino, and that disabling the database property "Use JavaScript when generating pages" would fix it.
Why our Domino developers ever thought it was a good idea to put HTML forms in Pages, I will never understand (I understand why they used this in Forms).

And in my testing, I still needed the preventDefault, even with the database property set...

Some after-the-fact googling suggests that using preventDefault is in fact the way to go (e.g. https://xpagesandmore.blogspot.be/2015/06/bootstrap-js-modal-plugin-in-xpages.html ).

Lessons learned




  • Using a Domino Page to host the JavaScript code enables the Domino security model.
  • I forgot about the Domino quirks with regards to web applications (e.preventDefault).
  • $.getJSON can be set up using $.ajaxSetup, although it's not necessary.
  • I didn't find good Chart.js samples for dynamic loading of data.



Since we're talking Ethereum, you may of course donate here :-)  0x5ac81ec3457a71dda2af0e15688d04da9a98df3c

Check limits on open files for running processes

Tom Bosmans  10 November 2017 17:02:41
OK, setting the correct limits in /etc/security/limits.conf and messing around with ulimit can leave you thinking everything is OK, while it is not.
This little line shows you an overview of all the running Java processes, to quickly check that the open file limit is correct.

Check the limits (open files) for all running Java processes (as root):

for i in $(pgrep java); do prlimit -p $i|grep NOFILE; done


In this example, you see that just 2 of the JVMs are running with the correct limits.  The easiest way to resolve this (if /etc/security/limits.conf is correct, and you have a service that starts your nodeagent) is to reboot:

NOFILE     max number of open files               65536     65536
NOFILE     max number of open files               65536     65536
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
NOFILE     max number of open files                1024      4096
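
If a reboot is not possible, prlimit can also raise the limits of an already running process (this needs a reasonably recent util-linux; the pid is of course an example):

prlimit --pid <pid> --nofile=65536:65536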