Intelligent watering system Part II – Using Raspberry Pi Zero Ws as additional “Antennas” to extend Jeedom’s Bluetooth range

Spring is back, so it is the perfect moment to check whether my watering system survived the winter, and to reactivate it. At the same time I’ll explain in detail the system I set up with external Bluetooth antennas. We will improve our Jeedom test installation for automatic watering by adding (and actually cloning) some Bluetooth “repeaters” (called antennas) in order to increase the range within which we can receive moisture detector data. Those repeaters are very cheap Raspberry Pi Zero W boards (the W is important, it adds WiFi and Bluetooth to the Pi Zero).

The goal is to deploy them outdoors, where the moisture detectors are located. They will use their Bluetooth capabilities to gather data from moisture sensors that are out of range of the Jeedom Bluetooth controller (located indoors), and they will use my domestic WiFi network (reachable outdoors) to relay the moisture data to Jeedom. Once Raspbian is installed on the Raspberry Pi Zero Ws, the software part mainly consists in using the embedded functionality of Jeedom’s BLEA plugin to automagically deploy on the Zero Ws what is needed to turn them into BLEA antennas. We will also improve the system a little so that each antenna checks whether WiFi is still up and restarts it (or reboots) in case of a loss of connection.

In the end, the system will be composed of a main Jeedom controller, with WiFi but without Bluetooth, located indoors, and 4 Jeedom antennas, with Bluetooth (to reach the moisture sensors) and WiFi (to feed the Jeedom controller with sensor data). Below is the diagram I created in Part I of the tutorial; the added parts are in red.

For this tutorial I will use 4 Raspberry Pi Zero Ws, to demonstrate the ability to relay information from a place where the central Jeedom controller cannot reach a moisture sensor, and to show the connections between the antennas.

Note that, as we will clone an initially configured “master SD card” onto other SD cards, I strongly suggest using the exact same SD card model in all your Raspberry Pis to avoid troubles due to small differences in card sizes, OR creating your source Raspbian image on a smaller SD card than the ones you will really use for your antennas. If your master SD card is 8 GB and you clone it onto 16 GB SD cards, after cloning you will still be able to extend the Linux partition size on each cloned Raspberry by using the raspi-config tool, as we will show later in this tutorial.

Configure a dedicated user on each ZeroW

I assume here:

  • You have Raspbian installed and working on one of your Raspberry Pi Zero Ws (we will clone this Raspberry later onto the 3 other ones). FYI I re-checked my previous tutorial about installing Raspbian on a Pi Zero W with the latest Raspbian version at the time of writing (2021-01-11-raspios-buster-armhf-lite.img), and this tutorial is entirely based on this version on the 4 Pi Zero Ws.
  • You are root on the Raspberry Pi Zero, or you know how to use sudo as we configured it in the installation tutorial.

Now we add the dedicated pluginblea user; it will be used by the Jeedom controller to connect to the antenna through ssh/scp.

adduser pluginblea
visudo

Then add at the end of the file:

pluginblea ALL=(ALL) NOPASSWD: ALL

Then type CTRL-X, then Y to save (or CTRL-K then X with joe, if you changed the default editor as I did in my Raspbian installation tutorial).
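If you want to double-check that the new sudo rule is active, you can list the user’s privileges (run as root):

sudo -l -U pluginblea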

Make the blea daemon start automatically

(This part is greatly inspired by this post and my suggestion and this update).

Warning: I create the script now so that it will be included in the SD card clone we will perform later, to easily deploy a new antenna. But it will not work yet, as we have not yet created the antenna from Jeedom.

First we need to know your Jeedom controller’s IP and the BLEA API key. The BLEA API key can be found in Jeedom under Settings > System > Setup > API tab > API key Bluetooth Advertisement. Don’t forget to check on the right that it is enabled.
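Optionally, from the future antenna you can already check that the BLEA callback URL is reachable over HTTP. The IP below is hypothetical, replace it with your Jeedom controller’s; we only care about getting an HTTP answer, not its content:

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.50/plugins/blea/core/php/jeeBlea.php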

Now insert those lines in /etc/init.d/blearpistart, and don’t forget to edit the line launching blead.py to insert your Jeedom controller’s IP and port (--callback) and your BLEA API key (--apikey).

joe /etc/init.d/blearpistart
#!/bin/sh
#/etc/init.d/blearpistart

### BEGIN INIT INFO
# Provides: Jeedom BLEA Plugin
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Simple script to start a program at boot
# Description: A simple script similar to one from www.stuffaboutcode.com which will start / stop a program a boot / shutdown.
### END INIT INFO

# If you want a command to always run, put it here
touch /tmp/blea && chmod 666 /tmp/blea

# Carry out specific functions when asked to by the system
case "$1" in
start)
echo "Starting BLEA"
# run application you want to start
# replace JEEDOM_IP:PORT and YOUR_BLEA_API_KEY with your Jeedom controller's IP/port and BLEA API key
/usr/bin/python /home/pluginblea/blead/resources/blead/blead.py --loglevel error --device hci0 --socketport 55008 --sockethost "" --callback http://JEEDOM_IP:PORT/plugins/blea/core/php/jeeBlea.php --apikey YOUR_BLEA_API_KEY
;;
stop)
echo "Stopping BLEA"
# kill application you want to stop
sudo kill `ps -ef | grep blea | grep -v grep | awk '{print $2}'`
;;
*)
echo "Usage: /etc/init.d/blearpistart {start|stop}"
exit 1
;;
esac

exit 0

Note that:

  • You need to allow Jeedom to be reached over HTTP
  • You need to allow the API to be reachable from anywhere

Now make this script executable:

chmod 755 /etc/init.d/blearpistart
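The daemon itself cannot start before the antenna exists in Jeedom, but you can at least make sure the script has no syntax error:

sh -n /etc/init.d/blearpistart && echo "syntax OK"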

Now we will create the service unit file used by systemd.

joe /etc/systemd/system/blearpistart.service

Insert those lines:

[Unit]
Description=BlEA service
After=hciuart.service dhcpcd.service bluetooth.service
[Service]
Type=oneshot
ExecStart=/etc/init.d/blearpistart start
[Install]
WantedBy=multi-user.target

Now we activate the service:

systemctl enable blearpistart.service

If there is no error, the output should look like this:

Synchronizing state of blearpistart.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable blearpistart
Created symlink /etc/systemd/system/multi-user.target.wants/blearpistart.service → /etc/systemd/system/blearpistart.service.

Note that depending on your Raspbian version, you may instead need to use:

update-rc.d blearpistart defaults
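In both cases you can verify that the unit is registered and check its logs with the standard systemd commands:

systemctl daemon-reload
systemctl status blearpistart.service
journalctl -u blearpistart.service -b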

Checking Connection to Jeedom

(I documented this on this post in Jeedom’s forum).

On the Raspberry Pi Zero, even though we disabled power management on the wlan0 interface by using “wireless-power off” in the network interfaces file, you may still experience some disconnections, especially if your Raspberry Pis do not receive the WiFi signal well. So I added a small script which regularly stops the blea daemon, pings our main server, and if it is not reachable, reboots the Pi; otherwise it relaunches the blea daemon. This script is based on another one found on the Internet; you need to replace the “IP_TO_TEST” placeholder with the IP of the Jeedom controller you want to ping.

Yes, it can probably be optimized, as stopping the daemon and possibly rebooting the Pi is a little bit overkill! But it works, and my four antennas have been stable over time.

mkdir /opt/check_lan
joe /opt/check_lan.sh

Now add those lines in /opt/check_lan.sh, and don’t forget to set IP_FOR_TEST to your Jeedom controller’s IP.

#!/bin/sh

# cron script for checking wlan connectivity
# change IP_FOR_TEST to whatever IP you want to check.
IP_FOR_TEST="IP_TO_TEST"
PING_COUNT=1

PING="/bin/ping"
IFUP="/sbin/ifup"
IFDOWN="/sbin/ifdown --force"

INTERFACE="wlan0"

FFLAG="/opt/check_lan/stuck.fflg"

logger "Stopping BLEA antenna"
systemctl stop blearpistart.service

logger "Testing if $INTERFACE can ping $IP_FOR_TEST"
# ping test
$PING -c $PING_COUNT $IP_FOR_TEST > /dev/null 2> /dev/null
if [ $? -ge 1 ]
then
    logger "$INTERFACE seems to be down, trying to bring it up..."
    if [ -e $FFLAG ]
    then
        logger "$INTERFACE is still down, REBOOT to recover ..."
        rm -f $FFLAG 2>/dev/null
        sudo reboot
    else
        touch $FFLAG
        logger $(sudo $IFDOWN $INTERFACE)
        sleep 10
        logger $(sudo $IFUP $INTERFACE)
        logger "Starting BLEA antenna"
        systemctl start blearpistart.service &
    fi
else
    logger "$INTERFACE is up"
    rm -f $FFLAG 2>/dev/null
    logger "Starting BLEA antenna"
    systemctl start blearpistart.service &
fi
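Make sure the script is executable, otherwise cron will not be able to run it:

chmod 755 /opt/check_lan.sh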

Now you will want to make sure this script runs every 15 minutes (you may change that) by using crontab:

crontab -e

add this line:

*/15 * * * * /opt/check_lan.sh >/dev/null

FYI, the logger command used in this script writes its output to /var/log/syslog. You may want to monitor it from time to time by using tail -f /var/log/syslog.
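If you only want to see the lines produced by this script, a simple grep does the trick:

grep "BLEA antenna" /var/log/syslog | tail -n 20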

Backup & Clone your Raspberry SD Card as a template !

Backup

Now is the time when you will probably want to dump your Raspberry SD card into a file: we now have something like a “standard antenna installation” that you can easily duplicate onto other Raspberry Pis, before we add it in Jeedom.

First, you should change the name of your Raspberry by using ‘raspi-config’, then “System Options”, “Hostname”, and name it as you want. Personally, I named them according to their location (e.g. EAST1, EAST2, WEST1, WEST2).

Also, properly stop your raspberry with “shutdown now” instead of removing the USB power cable.

sudo su -
/sbin/shutdown now

Now we will use the “HDD Raw Copy Tool” freeware, on a Windows computer. Insert the Raspberry’s SD card in your computer (again, it will probably complain about a drive which should be formatted; cancel that). Launch HDD Raw Copy Tool, then choose your card reader as SOURCE and click CONTINUE.

On the next screen, choose a location to store the image file which will be created, and click CONTINUE.

On the next screen, double check the settings, and click START … Go get a coffee …

When you see “Task complete” at the end of the log window, you can remove your source SD card from your computer. You now have a perfect image of your pre-configured Raspberry, ready to be copied onto other Pi Zero Ws and then configured as Jeedom BLEA antennas.
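If you prefer doing the dump from a Linux or macOS machine instead of Windows, a plain raw copy with dd gives the same result (the device name /dev/sdX and the output file name below are only examples; double-check the device with lsblk before running anything):

sudo dd if=/dev/sdX of=antenna-master.img bs=4M status=progress
sync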

Clone

These operations are to be done on each SD card you want to use in your production Raspberry Pis.

Insert a target SD card in your computer, and use the same HDD Raw Copy Tool we used before to back up our master Raspberry into an image file. This time we will write the image onto the other SD cards, by selecting the image file as SOURCE, the SD card as destination, and writing it. This operation will be quite long, depending on your image and target sizes.

Once the image writing operation is done, we will test it in another Raspberry. Once again we will have to find its IP from our router or DHCP server. It should boot correctly and connect to our WiFi network. We will only change its hostname so that it is different from our master Raspberry, by using raspi-config. This is also the moment when you may want to extend the partition size, still by using raspi-config, if your master image is smaller than the target SD card.
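If you prefer the command line, recent Raspbian releases also offer a non-interactive mode of raspi-config for these two operations (the hostname EAST2 below is just an example):

sudo raspi-config nonint do_hostname EAST2
sudo raspi-config nonint do_expand_rootfs
sudo reboot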

I suggest you note each Raspberry’s MAC address & IP address. In my case I even wrote their MAC address on their case.

Create the antennas in Jeedom

From now on, all these operations will have to be done on EACH Raspberry Pi Zero W that was cloned from your master image.

First, in Jeedom, we will create our Antennas, inside the BLEA plugin.

On the screen which appears, we have to enter the antenna’s name, IP, port, SSH login & password (we use the dedicated pluginblea user we created earlier), and the device associated with the Bluetooth interface of the antenna (hci0 in our case).

I recommend saving the antenna right away; next we will use the “Send files” button to automagically have our Jeedom controller send the required files to our antenna. Once this is done we will use the “Launch dependencies” button to automagically have our antenna compile the required files locally.

Besides the information message saying the files were successfully sent, you can also verify this by connecting to the antenna with SSH and checking that there is a newly created ‘blead’ directory in the pluginblea home directory.
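For example, from any machine on the network (replace ANTENNA_IP with your antenna’s IP):

ssh pluginblea@ANTENNA_IP "ls -l ~/blead/resources/blead/blead.py"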

Now we compile the dependencies.

This operation will actually take a long time. You can manually check the log file by using the green button dedicated to that in Jeedom’s UI, but you can also monitor /tmp/blea_dependencies on your antenna.
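For instance, while the dependencies are compiling, on the antenna:

tail -f /tmp/blea_dependencies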

Go have a few coffees … It will take ~30 minutes for the dependencies to be installed. But you can also parallelize tasks and create your other antennas right now, push the files, and launch the dependencies; it won’t stop the one already building. This is what I did with my 4 Raspberry Pis, and you can notice on the left that the 4 antennas are visible, but they have a red status while the dependencies are compiling, as the blea daemon is not launched yet.

When the dependencies are successfully compiled (which was my case with this Raspbian version), you can turn on automatic daemon management on your antennas, and launch them by using the green “Run” button.

Note on the previous screenshot that I already launched an antenna, and it is now appearing with a green status in the list. After I launched the 4 antennas, they are all seen by Jeedom as running.

You can now visualize your network, and the devices detected and linked to your antennas.

Note that we see the main controller, which is still equipped with an external Bluetooth USB stick and is called “Local”, and the 4 antennas we deployed. For now only one Flowercare is detected, as this is the one we used in the previous tutorial about a simple BLEA controller. A cool feature here is that Jeedom will try to approximately guess the position of each sensor, depending on where you placed your antennas in this view and on the RSSI signal (see below).

Another interesting view, is the “Health” view of the BLEA plugin.

It will display, for each of your Bluetooth devices, its MAC address, type, status and battery level, but also the RSSI per antenna or controller. The closer the RSSI is to zero, the better the signal. Also note the Antenna transmission & reception columns; in this case the Flowercare we have detected is bound to the main controller, but we will change that later.

Adding all our Flowercares

It is very simple to have our other devices detected. We will use the “Scan” function of the BLEA plugin and tell Jeedom we are only looking for Miflora equipment.

When Jeedom finds a device of the selected type that it doesn’t already know, a screen will pop up asking for information about this device. I strongly suggest giving explicit names to your Flowercares! Like the plant name, the place it will be located, etc. Later in your automation scripts, you will need those explicit names to check the values and perform the right actions, without ambiguity.

So we assign this first one a name (and the number I wrote on each sensor), its parent object so it will be displayed in Jeedom’s dashboard, a function category, and then we will go to check its settings.

In the settings we can see it is bound to a specific antenna, but we can change that.

It is not a problem to change the reception and transmission setting to something broader: every antenna will then be able to receive and transmit data to this equipment. It may be useful if you move your sensors often, or during your final outdoor setup. For now we don’t change it, as we will check later, in the Health & Network views, whether it changes anything in the links between the components.

Now we have to do the same detection/naming routine until all our sensors are detected. In my case, since those sensors have been in use for 3 years and were totally inactive during winter, I had to change a few batteries before they all got detected, and I found one that could no longer be detected at all, so I had to use my test sensor as a spare.

The final organization I used in Jeedom for this tutorial is as follows. I created some objects in my “Home” root object to distinguish Indoor from Outdoor equipment. Then, inside Outdoor, I defined one object per terrace, one East, one West. Each piece of equipment has been assigned to its target destination.

Real gains of the antennas

We will test whether our antennas really give us an extended range. First, I will show the Health & Network views from the BLEA plugin, with all the equipment and antennas still on my desk. It means they are all really close to each other!

We can still see which antenna every sensor is bound to. Now, I will shut down all the antennas.

As we can see in the Health view, all the sensors are now only detected by the Local antenna, which is the Bluetooth USB stick plugged into my Jeedom controller. Only the RSSI from this antenna is displayed for each device. Also, in the Network view, we can see all antennas are down/red, and the links between sensors and antennas now only point to the local controller. In the BLEA documentation and in Sarakha’s excellent article (in French) we can read that whatever antenna is configured for reception or transmission in a sensor’s configuration, as soon as the sensor is seen by any Bluetooth antenna, the device is considered present and usable.

Now I will move my sensors to various places on my terraces, where they could really be located, still with only my main Jeedom controller active, no antennas.

It becomes interesting: clearly all the sensors moved to the East terrace are out of range. The main controller can still see the West sensor though, as my desk is close to this terrace. Now, because the Intel NUC on which I run my production Jeedom won’t have the external Bluetooth dongle I’m using on my testbed, I’ll tell BLEA NOT to use any local Bluetooth controller. Therefore, we will ONLY rely on the external antennas, which are still powered off right now.

I checked the option “No local” and saved the configuration. To truly test this configuration, I will even shut down the Jeedom controller and remove the Bluetooth USB stick. After booting it again without Bluetooth, here are the Health & Network views.

Now, clearly no sensor is detected, which is perfectly normal! I will then move my antennas to each “corner” of my apartment and switch them on. Then, in the Network view, I’ll move them approximately to where they physically are relative to each other. I just wait a few minutes after each antenna is back online in Jeedom, for the network to “stabilize” itself.

So now, clearly all the antennas are up and are acting as real relays to the Jeedom controller. We can see in the Health view that we even have some kind of redundancy if one of the antennas on a terrace were to go down. The Network view is almost accurate; the sensors are displayed close to perfectly on the map.

Conclusion

Watering !

This is our main goal … so we actually just have to expand a little on what we already did with Jeedom scenarios in tutorial Part I, as we now have many more sensors and water valves to control (though I only have one valve for this tutorial, the principle remains exactly the same with several Fibaro FGS-222 modules and several water valves).

So now, all our sensors are active, and we can see on Jeedom’s dashboard that they are actively feeding data.

Monitoring

We already set up some kind of monitoring script on each antenna, so that they will reboot in case they cannot ping the Jeedom controller. But this is technical monitoring, not functional monitoring to make sure there is no water leak somewhere with a valve staying open, or a part of your garden not being watered because a sensor is not working well.

So, inside Jeedom, some good practices would be:

  • To monitor and send an alert if any Antenna is down for too long;
  • To monitor and alert if a sensor has not provided new data for too long;
  • To monitor and alert sensors batteries;
  • To monitor and alert if a water valve is opened for too long (avoid flooding and water consumption).

This requires a little bit of coding inside scenarios, and it will be the perfect topic for a future tutorial about monitoring plugins or scenarios! Coming soon.

Alerting

A nice addition is also to have some notifications sent by email, Pushover or other external services, to inform you in various ways that the system decided NOT to water because of the weather forecast, or decided to water because the humidity check was triggered. It will also be a perfect topic for an upcoming tutorial about the notification system I developed inside Jeedom by using both scenarios and embedded PHP. This system can notify you or other people by email, Pushover, voice, etc.

Troubleshooting

If you have problems with dependencies after cloning your antenna template, try to remove/reinstall all the python PIP & bluepy stuff:

sudo pip3 uninstall bluepy
sudo apt remove python-pip python3-pip
sudo apt autoremove
sudo apt-get install --reinstall build-essential libssl-dev libffi-dev python-dev
sudo apt-get install python-pip
sudo apt-get install python3-pip
sudo pip3 install bluepy
sudo setcap cap_net_raw+e /usr/local/lib/python3.7/dist-packages/bluepy/bluepy-helper
sudo setcap cap_net_admin+eip /usr/local/lib/python3.7/dist-packages/bluepy/bluepy-helper
sudo /sbin/reboot
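After the reboot you can quickly check that bluepy imports correctly:

sudo python3 -c "from bluepy import btle; print('bluepy OK')"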

Automatic programmable outdoor watering system with Jeedom

In this tutorial I’ll explain how to set up an automatic watering system by using the Jeedom automation system, plus some equipment to remotely control a water valve plugged into a dripping system. The goal is to explain how to control a simple system for now, so that you can extend it depending on your needs while keeping exactly the same components.

The system here will use a moisture detector in a plant pot, communicating with Jeedom by using the bluetooth protocol. A water valve will be triggered by a ZWave relay. We will glue everything together by using a scenario inside Jeedom to regularly check the moisture, the weather (is it raining or will it be raining soon or not), compare the moisture to a variable, and if the moisture level is below the variable, we activate the valve to let the water flow, and close it after a few seconds.

This will be the base for a more complex system I’ll detail later, the one I’m using at home, which uses one main Jeedom box in my apartment, 4 Jeedom “Bluetooth antennas” (repeaters) on my East terrace and 4 other antennas on my West terrace, so that I’m able to pilot 8 separate dripping systems depending on the moisture levels detected by 8 moisture detectors.

First we’ll check what hardware we need. Then, for this tutorial, I’ll deploy a fresh Raspbian image on a spare Raspberry Pi 3 (I first tried with a Pi Zero W but had troubles with the ZWave stick not being fully USB compliant), install Jeedom, and install the excellent BLEA plugin we will need. Next I’ll switch to my regular Jeedom installation to show the configuration and scenarios I’m using to automate the watering system.

One last thing in this introduction: I’m sorry the Jeedom screenshots at the beginning are in French … I changed the UI language later in this tutorial. But Jeedom is fully functional in English and other languages.

The hardware I use

First, a quick overview of the “internal” stuff needed. The Jeedom box gluing everything together should, for this first setup, use a Bluetooth receiver and, if needed, a high-gain antenna. My regular Jeedom installation is running on an Intel NUC with vSphere; it has Bluetooth, but I wanted to plug in an external antenna, so I bought a UD100 “long range” external adapter made by Sena. The external antenna is a 12 dBi omnidirectional one for the 2.4 GHz band.

Yes, on the picture above this is a Raspberry Pi Zero, not the Pi 3 used for this article. We will also need an Aeotec Gen5 ZWave stick to pilot the remote ZWave modules through the ZWave protocol. The whole “controller” setup is as follows (with the Pi 3 now :)):

Then, the tricky part, the “external” dripping system 🙂 Believe me this one is simple … You first need to deploy this configuration before thinking about a larger system with multiple Bluetooth antennas and separate dripping channels.

So the main components, besides the regular outdoor water supply & Gardena micro-dripping parts, are:

  • A Fibaro FGS-222 module, controlled over ZWave by our Jeedom box, which will be used to power the valves with the 24V delivered by the 220V-24V power supply (“dry contact” / “contact sec” mode of the module). Note that this is a dual relay, which means I’ll actually be able to pilot two water valves independently with this module. You could also use a single relay; this tutorial would remain the same except for the electrical wiring of the relay.
  • A water valve; I chose the Hunter PGV-100mmB 24V (AC)
  • A 220V to 24V (AC) power supply; of course, depending on where you live, you may need a 110V to 24V power supply instead. Double-check whether the water valve you order is AC or DC and make sure your power supply matches!
  • A Bluetooth Xiaomi-compatible Mi Plant Flower Care (Miflora) to be used as the moisture detector

Setting up the Jeedom Controller

I’ll simply deploy a new Raspbian image on a Raspberry. I wrote a small tutorial about this, and I’ll start here from the end of that tutorial.

Once Raspbian is installed, I’ll check whether the embedded Bluetooth interface is detected. As seen here, we should also add the pi user to the bluetooth group so that it can use the service through D-Bus.

root@raspberrypi:~# adduser pi bluetooth
Ajout de l'utilisateur « pi » au groupe « bluetooth »...
Adding user pi to group bluetooth
Fait.
root@raspberrypi:~# hciconfig
hci0:   Type: Primary  Bus: UART
BD Address: B8:27:EB:D3:D3:B8  ACL MTU: 1021:8  SCO MTU: 64:1
 UP RUNNING
RX bytes:731 acl:0 sco:0 events:44 errors:0
TX bytes:1755 acl:0 sco:0 commands:44 errors:0

And check if the bluetooth service is launched.

root@raspberrypi:~# systemctl status bluetooth*
● bluetooth.target - Bluetooth
   Loaded: loaded (/lib/systemd/system/bluetooth.target; static; vendor preset: enabled)
   Active: active since Mon 2020-04-27 20:17:22 CEST; 4 days ago
     Docs: man:systemd.special(7)
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
● bluetooth.service - Bluetooth service
   Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-04-27 20:17:22 CEST; 4 days ago
     Docs: man:bluetoothd(8)
 Main PID: 334 (bluetoothd)
   Status: Running
   Memory: 852.0K
   CGroup: /system.slice/bluetooth.service
           └─334 /usr/lib/bluetooth/bluetoothd
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

Perfect. We will try later to use the external adapter I bought, but it should not be a problem. For information, and as a memo for myself, you will find here and here some documentation if you want to play with Bluetooth on your Raspberry, using the bluetoothctl utility.

root@raspberrypi:~# bluetoothctl
Agent registered

[bluetooth]# list
Controller B8:27:EB:A3:43:77 raspberrypi [default]

[bluetooth]# show
Controller B8:27:EB:A3:43:77 (public)
        Name: raspberrypi
        Alias: raspberrypi
        Class: 0x00000000
        Powered: yes
        Discoverable: no
        Pairable: yes
        UUID: Generic Attribute Profile (00001801-0000-1000-8000-00805f9b34fb)
        UUID: A/V Remote Control        (0000110e-0000-1000-8000-00805f9b34fb)
        UUID: PnP Information           (00001200-0000-1000-8000-00805f9b34fb)
        UUID: A/V Remote Control Target (0000110c-0000-1000-8000-00805f9b34fb)
        UUID: Generic Access Profile    (00001800-0000-1000-8000-00805f9b34fb)
        Modalias: usb:v1D6Bp0246d0532
        Discovering: no
[bluetooth]# agent on
Agent is already registered
[bluetooth]# scan on
Discovery started
[CHG] Controller B8:27:EB:A3:43:77 Discovering: yes
[NEW] Device 75:A1:60:B0:33:BC 75-A1-60-B0-33-BC
[NEW] Device 40:CB:C0:E0:72:88 40-CB-C0-E0-72-88
[NEW] Device C4:7C:8D:62:8D:7C C4-7C-8D-62-8D-7C
[NEW] Device C4:7C:8D:62:88:2A C4-7C-8D-62-88-2A
[NEW] Device C4:7C:8D:62:8D:65 C4-7C-8D-62-8D-65
[NEW] Device C4:7C:8D:62:8D:DC C4-7C-8D-62-8D-DC
[NEW] Device C4:7C:8D:62:87:E6 C4-7C-8D-62-87-E6
[NEW] Device C8:0F:10:A4:B6:45 MI_SCALE
[CHG] Device C4:7C:8D:62:8D:DC Name: Flower care
[CHG] Device C4:7C:8D:62:8D:DC Alias: Flower care
[NEW] Device C4:7C:8D:64:44:E8 C4-7C-8D-64-44-E8
[NEW] Device 7C:71:8C:D4:D5:61 7C-71-8C-D4-D5-61
[NEW] Device 3F:D8:34:CE:3F:B1 3F-D8-34-CE-3F-B1
[CHG] Device C4:7C:8D:62:87:E6 Name: Flower care
[CHG] Device C4:7C:8D:62:87:E6 Alias: Flower care
[CHG] Device C4:7C:8D:62:8D:65 Name: Flower care
[CHG] Device C4:7C:8D:62:8D:65 Alias: Flower care
[NEW] Device C4:7C:8D:62:84:E2 C4-7C-8D-62-84-E2
[CHG] Device C4:7C:8D:64:44:E8 RSSI: -99
[CHG] Device C4:7C:8D:64:44:E8 Name: Flower care
[CHG] Device C4:7C:8D:64:44:E8 Alias: Flower care
[bluetooth]# scan off
Discovery stopped
[bluetooth]# exit
root@raspberrypi:~#

Now I’ll plug the Aeotec ZWave dongle into the Raspberry and check whether it is detected by Raspbian. First we need to make sure the cdc-acm kernel module is loaded (Aeotec says this is required), and this part depends on the Debian version you installed … To make sure it is loaded, we will ask Raspbian to load it at boot by adding it to /etc/modules.

root@raspberrypi:~# joe /etc/modules
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with # are ignored.

cdc-acm
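After the next reboot, you can confirm the module is loaded:

lsmod | grep cdc_acm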

I reboot. Then I’ll just monitor a few system log files and check which device node the OS assigns to the dongle, as we will need it later in Jeedom.

pi@raspberrypi:~ $ sudo su -

root@raspberrypi:~# tail -f /var/log/messages
May  2 09:37:02 Pi3-test kernel: [   11.835249] Bluetooth: HCI socket layer initialized
May  2 09:37:02 Pi3-test kernel: [   11.835263] Bluetooth: L2CAP socket layer initialized
May  2 09:37:02 Pi3-test kernel: [   11.835304] Bluetooth: SCO socket layer initialized
May  2 09:37:02 Pi3-test kernel: [   11.852229] Bluetooth: HCI UART driver ver 2.3
May  2 09:37:02 Pi3-test kernel: [   11.852245] Bluetooth: HCI UART protocol H4 registered
May  2 09:37:02 Pi3-test kernel: [   11.852332] Bluetooth: HCI UART protocol Three-wire (H5) registered
May  2 09:37:02 Pi3-test kernel: [   11.852552] Bluetooth: HCI UART protocol Broadcom registered
May  2 09:37:02 Pi3-test kernel: [   12.100250] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
May  2 09:37:02 Pi3-test kernel: [   12.100259] Bluetooth: BNEP filters: protocol multicast
May  2 09:37:02 Pi3-test kernel: [   12.100274] Bluetooth: BNEP socket layer initialized

May  2 09:40:22 Pi3-test kernel: [  110.185265] usb 1-1.5: new full-speed USB device number 4 using dwc_otg
May  2 09:40:22 Pi3-test kernel: [  110.318535] usb 1-1.5: New USB device found, idVendor=0658, idProduct=0200, bcdDevice= 0.00
May  2 09:40:22 Pi3-test kernel: [  110.318550] usb 1-1.5: New USB device strings: Mfr=0, Product=0, SerialNumber=0
May  2 09:40:22 Pi3-test kernel: [  110.328039] cdc_acm 1-1.5:1.0: ttyACM0: USB ACM device
May  2 09:40:22 Pi3-test mtp-probe: checking bus 1, device 4: /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5
May  2 09:40:22 Pi3-test mtp-probe: bus: 1, device: 4 was not an MTP device
May  2 09:40:22 Pi3-test mtp-probe: checking bus 1, device 4: /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.5
May  2 09:40:22 Pi3-test mtp-probe: bus: 1, device: 4 was not an MTP device

The last block of messages above (starting with “new full-speed USB device number 4”) is logged when I plug in the ZWave dongle; it looks good, as the system detects it and assigns it the ttyACM0 device.

Next i’ll plug my external bluetooth dongle.

May  2 09:42:25 Pi3-test kernel: [  233.426617] usb 1-1.2: new full-speed USB device number 5 using dwc_otg
May  2 09:42:25 Pi3-test kernel: [  233.566301] usb 1-1.2: New USB device found, idVendor=0a12, idProduct=0001, bcdDevice=82.41
May  2 09:42:25 Pi3-test kernel: [  233.566315] usb 1-1.2: New USB device strings: Mfr=0, Product=0, SerialNumber=0
May  2 09:42:26 Pi3-test kernel: [  233.620575] usbcore: registered new interface driver btusb
May  2 09:42:32 Pi3-test kernel: [  239.836725] Voltage normalised (0x00000000)

Same result, it seems to be detected and mounted. Let’s check it:

root@Pi3-test:~# lsusb
Bus 001 Device 004: ID 0658:0200 Sigma Designs, Inc. Aeotec Z-Stick Gen5 (ZW090) - UZB
Bus 001 Device 005: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Everything seems good, the Sigma Designs is the Aeotec ZWave dongle, and the Cambridge Silicon Radio is the Bluetooth dongle. This is now our functional hardware setup:

Now I install Jeedom. Here I’ll reuse some parts of the tutorial I wrote about restoring Jeedom from an old Debian 8.5 Virtual Machine to a new Debian 10.3 VM.

root@raspberrypi:~# wget https://raw.githubusercontent.com/jeedom/core/master/install/install.sh

root@raspberrypi:~# chmod +x install.sh

root@raspberrypi:~# ./install.sh

It will take a long time on a Raspberry. It should finish with the lines below. Of course, back up the MySQL root password it displays somewhere, it may be useful.

==================================================
|         TOUTES LES VERIFICATIONS SONT FAITES    |
==================================================
étape 11 vérification de jeedom réussie
/!\ IMPORTANT /!\ Le mot de passe root MySQL est XXXXXXXXXXXXXX
Installation finie. Un redémarrage devrait être effectué

root@raspberrypi:~# /sbin/reboot

Jeedom configuration

We connect to Jeedom’s UI (either you know its IP from your router/DHCP server, or you can find it by using /sbin/ifconfig after installing the net-tools package). The default login/password with a fresh Jeedom installation is admin/admin.
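The simplest way to get the IP directly on the Raspberry, without installing anything, is:

hostname -I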

We are asked to change the admin password.


And we need to configure our Market login/password so that we will be able to download plugins. Beware: if you are already using the free Jeedom version, you are only allowed to run 2 Jeedom instances at the same time on the same Market account. If Jeedom tells you that you have too many instances connected, then you will need to disable one through Jeedom’s website / Market link. It’s very easy to re-declare it afterwards.

Jeedom is operational, but all empty, as I have not yet created any “container” (or Object) to which components can be assigned for display. Let’s just create a “home” one. On my production Jeedom system, I defined one container per room and terrace. You can organize them in a hierarchical structure.

Now we need to install through the Marketplace, some plugins:

  • Red below:
    • ZWave: to handle the ZWave communication protocol (to interact with the modules which will open/close the water valves),
    • Bluetooth Advertisement (also called BLEA): to handle the bluetooth communication with the moisture detectors,
    • Weather: to check if it’s raining before we check the moisture level,
  • Orange below:
    • Agenda: you may also want this one to deploy a much simpler watering system based purely on a schedule (or you could also do this with a crontab job inside Jeedom). In both cases we won’t explain this in this article, as my system uses moisture detectors, as you already guessed.

Bluetooth Advertisement Plugin configuration

From the Market page, we install the BLEA plugin, this is very simple.

Once the plugin is installed, Jeedom will ask whether you want to switch to the configuration page; say yes.

Now we need to activate it, and I’ll also check the checkbox to display its standard webview on the panel. Click Save, then go back to the front Jeedom panel by clicking on the Jeedom logo on the top left.

Now you should see the same view but augmented with a few fields, about the plugin configuration. If not, refresh the page (F5) or use the “Plugins” menu on the top, then “Manage Plugins”, and then your BLEA plugin.

First, notice it is already installing all the required system dependencies in the background (red circle above). With the Debian installation I’m using, this step was successful, no need to check the logs and debug anything (most of the time, if you have a problem at this step it is related to Python packages or permissions). Once the dependencies are installed and OK, the daemon should start automagically and show a double “OK” (orange circle above, on the same line as the red one).

We will need to tell Jeedom which Bluetooth controller should be used, in the dropdown list (second orange circle in the picture above). Both the controller embedded in the Raspberry Pi 3 & our external dongle are listed; as the external one is the last one we added to the system, it is “hci1”. You should end up with this:

Now we will associate our Miflora, and for this you need to switch to the other configuration screen of the BLEA plugin by going to the menu “Plugins -> Home Automation Protocols -> Bluetooth Advertisement”.

Of course the equipment list is still empty. We need to:

  • Unpack our Miflora detector and remove the small plastic part under the battery, it will automatically power on the detector,
  • Place the Miflora near the Bluetooth antenna to ensure a good signal strength,
  • Launch a Bluetooth scan in Jeedom BLEA by using the top left button below.

When the scan is launched it will automagically create a new equipment for each device detected through the bluetooth controller. Note that you can restrict the type of device you are looking for in the scan launch window. In my case, a lot of devices will be found, as I actually already have 8 “production” Miflora, used to irrigate my terraces, and associated with my Jeedom V3 production system on a Debian Virtual Machine.

I just let the Scan automatically run, it will stop after 2 minutes on this screen:

When you click on the little left arrow just before the 3 tabs, you will go back to the list of equipments detected by Jeedom. Each of them can be clicked to enter its configuration.

As I need, for this article, to isolate the Miflora I unboxed for this tutorial, and because Xiaomi does not show the MAC address of the device on the box or on the device itself, I’ll have to play with one of the sensors to make it report an abnormal value. I chose to put my test sensor in a glass filled with salted water, so that its moisture reading should increase to something close to 100%, way above the other sensors which are actually in my flower pots.

I could also check all the MAC addresses on my production Jeedom and deduce which one is new (as it is not declared there), but I think that would take longer than just measuring the moisture.

First we need to assign each equipment to our Jeedom object we created a few steps above, in order to get them displayed on the Jeedom front page:

In the parameters tab, I will also modify the default refresh time set to 18,000 seconds; I don’t want to wait that long, so I’ll set it to 300s (5 minutes).

Do not forget to save the configuration with the top right green button ! And do that for each equipment you have.

Now when you go back on Jeedom’s frontpage this is what you should see:

Notice each Miflora equipment is gathering data and Jeedom will display it permanently: temperature, moisture, luminosity, fertility.

Now I fill a glass with salted water, and put my new sensor inside the glass, and I wait a few minutes to compare the moisture values.

Either you wait 5 minutes, or you can use the little “refresh” icon to force a refresh of the data. Jeedom will then query the device for an update. Here’s the change:

Woohoo, we got the candidate! Let’s remove it from its glass, as I think too much conductivity is not good for the battery … And I’ll rename this Miflora and remove the other devices so that I won’t get confused with the other Mifloras I use during this tutorial.

This part is finished: we have a working Jeedom with Bluetooth capabilities and a remote Miflora detected and gathering data for us. This device can now be used inside Jeedom to code some scenarios (we will do this later to start the water valve) or even PHP scripts. Let’s switch to the ZWave part.

ZWave plugin configuration

We go back to the Market inside Jeedom’s UI (note that I decided to switch Jeedom’s UI to English :p) and install the ZWave plugin; same process as before for the BLEA plugin, so I won’t show it again. Once installed, the plugin will display its configuration page.

Then we activate the plugin. And we refresh to display the extended UI.

The dependencies should install automatically after a few seconds. But first, I recommend changing the dongle port right now in the dropdown list in the Setup section of the page, as it is not clear from the documentation or the forum whether this parameter matters for building the dependencies. It should NOT change the required packages or the way they are compiled, but some users report failures when the port is not set, and it’s not a big deal to change it.

Reminder: it was displayed in the logs when we first connected the dongle:

May  2 09:40:22 Pi3-test kernel: [  110.328039] cdc_acm 1-1.5:1.0: ttyACM0: USB ACM device
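If in doubt, you can also check that the device node exists:

ls -l /dev/ttyACM*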

Don’t forget to save the configuration with the green “Save” button on the right of the Setup block. Now we can launch the dependencies build.

Dependencies installation will take a few minutes. If you think the webpage is stalled and have doubts about the installation, you can connect to your box and follow the log manually with “tail -f /var/www/html/log/openzwave_update”. This is the result on my box.

So we can see the dependencies are now OK (green circle), BUT the daemon did not start successfully (red circle). First we’ll have a look at the daemon’s log, by refreshing the page (top yellow circle) and opening the log (other yellow circle).

Obviously the dependency installation forgot to install a Python module … It’s strange; maybe that’s because I initially had Python 3 as the default Python version system-wide and changed it later, which was not shown in this tutorial. You may not have this problem. However, let’s fix it manually by opening a shell on the box and installing the tornado package.

pi@Pi3-test:~# sudo su -

root@Pi3-test:~# python -m pip install tornado
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting tornado
  Downloading tornado-5.1.1.tar.gz (516 kB)
     |████████████████████████████████| 516 kB 1.5 MB/s
Collecting backports_abc
  Downloading backports_abc-0.5-py2.py3-none-any.whl (5.2 kB)
Collecting futures
  Downloading futures-3.3.0-py2-none-any.whl (16 kB)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(HTTPSConnectionPool(host='www.piwheels.org', port=443): Read timed out. (read timeout=15),)': /simple/singledispatch/
Collecting singledispatch
  Downloading singledispatch-3.4.0.3-py2.py3-none-any.whl (12 kB)
Requirement already satisfied: six in /usr/lib/python2.7/dist-packages (from singledispatch->tornado) (1.12.0)
Building wheels for collected packages: tornado
  Building wheel for tornado (setup.py) ... done
  Created wheel for tornado: filename=tornado-5.1.1-cp27-cp27mu-linux_armv7l.whl size=461234 sha256=9bccf18e976de51e63e9ef3288d6902fad6cbc3c52286109cab357a0f8c07486
  Stored in directory: /root/.cache/pip/wheels/d8/83/af/e0dc6afbf3a2c51af8d6e3f9fbe790d0c581c2de05bc5d50f5
Successfully built tornado
Installing collected packages: backports-abc, futures, singledispatch, tornado
Successfully installed backports-abc-0.5 futures-3.3.0 singledispatch-3.4.0.3 tornado-5.1.1

root@Pi3-test:~# python3 -m pip install tornado
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Requirement already satisfied: tornado in /usr/local/lib/python3.7/dist-packages (6.0.4)
root@Pi3-test:~#

Now we restart the daemon on Jeedom’s plugin configuration page (yellow circle below).

That’s now ok, perfect. We have a functional ZWave plugin. Let’s just do a sanity check: reboot the box through Jeedom’s UI, and once it’s back online, check the “Health” page on Jeedom. Both plugins should be displayed in green.

So far so good ! Now it’s time to plug & detect our first ZWave module.

Fibaro FGS-222 “Dual Relay” ZWave module configuration

First we will study the target wiring of the module. Next we will power the module with 220V only, to get it detected in Jeedom. After this we will plug in the 24V power supply to test the water valve.

Target Wiring Diagram

Fibaro’s documentation for this module is here (or here in French); you MUST read it to understand how it works. From this documentation, the standard wiring diagram is as follows:

As you can see, there is no explanation in this diagram, or in the documentation, about the dry contact mode the module actually supports, which will be very useful in our case, as the water valves require a 24V AC power supply to switch from the CLOSED position to OPEN. Our target diagram is therefore this one.

The module is powered by the main 220V AC line. The water valve will be powered by the 24V AC line when we trigger the Q1 output of the module, using Jeedom and the ZWave communication protocol between Jeedom and the module.

Note 1: you may want to add a switch on the S1 input of the module, to be able to manually trigger the water valve, in case your Jeedom controller is down, or your ZWave network broken.

Note 2: this module is a double relay; we can add a second water valve on the Q2 output (and optionally a second switch on the S2 input). In Jeedom you will be able to trigger the Q2 output for the second water valve independently of the Q1 output.

Wiring the module & inclusion in Jeedom

Here’s the module wired to be included in Jeedom. I won’t plug the water valve for now as I just want to check if the inclusion of the module in Jeedom is OK.

Now we power it.

In Jeedom we then need to go to the Home Automation Protocol menu, then ZWave, and launch the Inclusion mode to discover ZWave devices.

Jeedom will ask if you want to add a secured device (ZWave+ protocol), or not secured (classical ZWave protocol). ZWave protocol security is out of scope of this tutorial. Anyway, the Fibaro FGS 222 is NOT a ZWave+ device, therefore we select the first option.

Once it’s done, we need to quickly press 3 times on the little “B” switch on the module to also launch its inclusion mode.

Jeedom should detect the module and quickly inform you.

And finally Jeedom will display the configuration page for the device.

I’ll change the parent object to assign it to the only object I created for this tutorial (and save the configuration), and then go back to the home page.

Woohoo! Now if you click on one of the lights on the UI, you should hear a very soft “clack” in the module; it’s the internal relay switching on or off. Also, the light will change on the UI.

Tweaking module configuration

While we are on this module, as we are dealing with a water system, I strongly recommend checking the parameters defining:

  1. An “auto off” for the relays after a number of seconds, so that if our upcoming scenario (or Jeedom, or the ZWave network) has a problem, the water valve will automatically close after this delay;
  2. Its state after a power outage: we probably want to ensure the module switches all of its outputs to OFF (water valve not open) when the power comes back.

For this, just click on the module name on the frontpage, it will bring you back to its ZWave page. Then click on the blue “Setup” button.

The parameters we want to check/change are { 3, 4, 5 } and { 16 }, for my two points above.

First, set parameter 3 to “Manual override enabled” in case you plug manual switches into the module. Then set parameters 4 & 5 to the delay you want. Take care, the documentation is not clear: it says the value is in milliseconds but it’s not. In practice the value appears to be in tenths of a second: for a 3-minute delay, I have to set the value to 1800 (3 minutes × 60 seconds × 10); for example, a 5-minute delay would be 3000. Test it with a watch: when you switch on the relay through Jeedom’s UI, you should hear the relay, and see Jeedom’s light on the UI switch off after the delay you set here.

Then, set the parameter 16 to “State NOT saved at power failure, all outputs are set to OFF upon power restore” if it was not by default.

We’re all set for the ZWave module. Now let’s plug the water valve and test it.

Wiring the water valve

Let me remind here the electrical diagram we shall use for this setup.

In real life, and still on the test bed, the wiring is very simple: one wire of the valve goes to the Fibaro module’s Q1 output, the other is plugged into one of the wires of the 24V AC converter, and the other wire of the 24V AC converter goes to the IN terminal of the Fibaro module.

When we trigger the Q1 output through a ZWave command (using Jeedom), the Fibaro module will “wire” its IN input to its Q1 output. As we connected the IN input to the 24V power supply, the valve will be powered with 24V as long as Q1 is not asked to switch off through another ZWave command.

It becomes this:

Testing it manually in Jeedom

I made a little video to review the setup, and test it live so that we can verify the valve can be remotely controlled by Jeedom. Once it works, we will be able to automate the setup.

Now that the hardware is all functional, we want to switch on the water valve to irrigate some flowers when the moisture is below a threshold and the weather not rainy.

Weather module configuration

As we want our system to also take the weather into account, so as not to irrigate if rain is forecast in the next hour or two, we will need to gather the weather forecast. In the Jeedom Market, look for the Weather plugin and install it. Once installed, activate it.

The plugin will now be reachable through the “Weather” section in the “Plugins” menu.

Note that on its configuration page, we need to provide an OpenWeatherMap API Key. You can get one by registering here to their free service, and create one in the API menu on their web site once you are logged in with your fresh new account. Just copy/paste the API key in Jeedom’s Weather plugin’s configuration page, and do not forget to save it.

Now we will jump on the Plugin main page, and add an equipment.

First we have to give it a name, then on the main configuration page we have to assign it to our first & only Jeedom object, activate the equipment, activate the widget to be displayed, and fill in the place for which we want to gather the weather forecast (city, country code).

Again, don’t forget to save it.

Note the various commands we will be able to use through Jeedom. You can check them on the “Commands” tab of your new equipment. The one we will want to check in our scenario is “Rain+1”, which returns a value indicating the risk of rain.

As explained on the equipment page, if you just created your API key, you may have to wait a bit before it can really be used. If it’s not yet usable, you will get this warning.

After ~10 minutes in my case, the error message was gone, I could save the configuration, and refresh Jeedom’s homepage to obtain this.

We can now use the various pieces of information from our two sources (Miflora and weather) in order to automatically activate the water valve, in a scenario.

Gluing everything together in Jeedom with a scenario

I think the simplest way to handle the watering system is to run a scenario every X hours. This scenario will:

  • Check if the weather (current or forecast) is rainy & above a threshold (this needs the Weather plugin or a personal weather station);
  • If yes, we do nothing; the next iteration will do the job unless it's still raining;
  • If no, then:
    • We check if the moisture is below a threshold;
    • If no (the soil is moist enough), we do nothing;
    • If yes (the soil is too dry), we switch on the water valve, add a small timer and switch off the valve after X minutes (see the sketch below).
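Before building it with Jeedom's graphical blocks, here is the same logic written as a small shell-style sketch; the helper command names and values are placeholders, only the structure matters (it mirrors what the scenario blocks will do):

# pseudocode sketch of the watering scenario (placeholder helper commands, illustrative values)
RAIN_THRESHOLD=50                           # arbitrary "risk of rain" threshold
MOISTURE_THRESHOLD=50                       # test moisture threshold used later in this tutorial
RAIN_FORECAST=$(get_weather_rain_plus_1)    # Weather plugin "Rain+1" command
MOISTURE=$(get_flowercare_moisture)         # Miflora "Moisture" command
if [ "$RAIN_FORECAST" -lt "$RAIN_THRESHOLD" ]; then      # no significant rain expected
    if [ "$MOISTURE" -lt "$MOISTURE_THRESHOLD" ]; then   # soil too dry
        switch_fibaro_q1 on     # open the water valve (Fibaro Q1 output)
        sleep 120               # watering duration, in seconds
        switch_fibaro_q1 off    # close the water valve
    fi
fi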

As a safety measure, we saw earlier that the water valve will be switched off automatically thanks to the module’s parameters 3 & 4/5, so we could actually avoid making our scenario explicitly switch off the valves and rely only on the device’s auto-switch-off parameter. But I see that only as a backup: we should switch the valves off explicitly in our global watering scenario. If for any reason the shut-down order from Jeedom fails, then the device’s auto-switch-off parameter will close the water valve.

First, let’s create a new periodic scenario through the well named “Scenarios” menu.

Then we add a new one and give it a name before arriving on its main configuration page.

Notice the second tab called “Scenario” at the top of the scenario main page. This is where we will implement the logic of our scenario. But first, let’s make this scenario periodic, every two hours. No need to modify the other parameters, and their explanation is out of the scope of this tutorial.

Now we add a line for the scheduling.

You can either manually enter a crontab-formatted string in the line which appears after clicking on the “Programming” button, or use the little “?” button on its right to be offered standard but less customizable options.

In my case I’ll use a formatted crontab line, to tell the scenario to execute every two hours.
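For example, a cron expression like this one runs the scenario at minute 0 of every second hour (any equivalent schedule works just as well):

0 */2 * * *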

You can save it and switch to the “Scenario” tab. At the top right of the page, click on the “Add block” button and select the “If / Then / Else” block.

A new block was added in our scenario and now we need to find the Rain+1 command which is provided by the Weather plugin.

Now we add the condition on this command. We could just validate and enter it manually in the block definition.

Now we define what we should do if the condition is met. Click the “Add” button just below the “THEN” line on the left of the block, and add another If/Then/Else block to test the Miflora humidity sensor, inside the first one which already checks the weather forecast.

Again, on the right of this new block, we have to find the right command.

And we define the condition. For this test I’ll use an arbitrary value of 50, as I’ll test it with a glass of water. This value should later be adapted to what is best for your trees/plants, and it may also be defined as a variable so that you can easily modify it without having to edit the whole scenario. That’s out of the scope of this tutorial.

Now we add a new “Action Block” inside this second block, to activate the valve.

And we add a first action, to trigger the Relay switch’s Output, which will activate and open the water valve.

Next, still inside this third block, we add a new action, which will be the pause to observe before we order the relay switch to shut down the valve. This is indeed the time during which your water valve will let your trees be watered!

Finally, we add an action to close the valve.

Our basic scenario is now almost complete.

For a “production” scenario, we should add some notifications sent by email, Pushover or other external services, to inform you in various ways that the system decided NOT to water because of the weather forecast, or decided to water because the humidity check was triggered.

Those external systems require plugins already available on Jeedom’s Market, and they are free. For now, without additional plugins, we can tell our scenario to display a small information window inside your browser if you are connected to Jeedom’s UI when the event is triggered, and add a line to the scenario’s specific log. You need to add two actions in the 3rd block we created, as shown below, and add 2 “ELSE” blocks to add actions if we did not water because the humidity was high enough, or because the weather is becoming rainy.

Now we save our complete scenario, and we test it.

Testing the system

To test the scenario, I’ll first put my Miflora in salted water to improve conductivity. I’ll also set up the Miflora to refresh its data every 10s (that is way too low for a production system, it will drain its battery very fast), and make sure my double relay’s default setting for automatic shutdown of the outputs is set to 3 minutes, so that we are sure the valve will close after our scenario’s last order, after the pause. We should of course hear the valve when it’s triggered on and off, and we should also see an information window appear in Jeedom, and a specific line in our scenario log file.

I’ll also show how to modify the scenario so that it is not triggered periodically, but instead triggered when the humidity sensor itself reports a humidity rate below 50. In that case the scenario should be modified to give the ground time to propagate the humidity and to let the sensor detect the new value. In our test setting here, as we configured the sensor to update its data every ten seconds, the valve will actually be opened far too often within a few minutes, as the humidity sensor will not have time to detect a new, higher humidity value in the ground.

Better than written words, here is another video of the test session.

Conclusion: our test bed is OK, we can now check what it becomes “In Real Life”.

Production system

We described in this article how to automagically open and close a water valve depending on the humidity sensor in the ground. We saw how to add the prerequisite plugins, configure them, and code a scenario to check values and launch actions on the valve depending on the values we detect. Those are all the basics to deploy it in your own garden.

Now it’s all up to you to wire it electrically outside (and protect it against environmental conditions which may alter the system!), plug in the water tubes, test that there are no leaks, etc. I cannot describe here how to do it in your own garden or terraces. But, on the computing side, you may also want to improve the hardware side a little, and the logic behind it.

For example, we learned how to add a safety level on the Relay switch, to automatically shut down the water valve after a few minutes, even if Jeedom fails to send the CLOSE order to the relay. In my case, I chose to back up this “software” safety feature with a hardware one: I added a master valve on each of my terraces.

My setup is as follows. First the two terraces overview. The diagram may seem complex (you can enlarge it) but it’s actually simple, it’s just full of wires… so many arrows to link the components on the diagram.

Here is what one of the electrical box looks like. You can see:

  • The 220V AC to 24V AC power supply, with its outputs split to the 5 valves and the Fibaro relay switches,
  • 3 Fibaro relay switches. 2 are double relay switches so that I can control 4 water valves, as they both have two outputs, and the third one is a single relay switch, to control the master valve. Their outputs are also wired to the water valves.
  • You can also see a little USB adapter; it’s used to power a Raspberry I use on the terrace as an external antenna for Jeedom, it will be explained in another tutorial.

Here is the overview of one of the two terraces. Everything is wired under the terrace floor so nothing is visible externally except the little water lines going to the flowers. The whole system must be carefully sealed, as of course everything will be flooded when it rains, it will have to withstand winter, etc. For example I chose to seal the water valves’ electrical wires with silicone.

On my Production Jeedom box, I have:

  • 9 Miflora configured: 8 for the 8 outdoor watering lines, and one for the interior (not used for automatic watering),
  • 6 ZWave relay switches, 4 of which are double relays.

I chose to split the watering scenario into different modules: 1 scenario to handle each watering line specifically, 1 master scenario to check the conditions and to call the specific scenarios to activate a watering line with the duration as a parameter, and 1 scenario to shut down all the valves as a safety measure. Also, I defined all the thresholds used as triggers in variables, so that it’s easier to modify them in one place, and not in the scenario code.

Improvements

Among the improvements I suggest you implement are notifications to inform you about what’s happening: Pushover, mails, etc. Nice plugins exist on Jeedom’s market to allow you to do that. They will expose new commands that you will be able to use inside your scenarios. On my system I created a notification system which can be called with a few parameters: text to notify, method of notification to use (Jeedom notification center, log, mail, text to speech, Pushover). Now my watering system sends me notifications on my phone when it activates a watering line, or when it won’t because of the weather forecast.

Also, I strongly suggest using the “Virtual” plugin, to add virtual components on Jeedom’s UI. Inside this virtual equipment, you will gather data from the sensors (to display humidity, fertility, temperature, etc.), but you can also compute new values, for example to display when the last data from a sensor arrived, and when the last watering took place. It’s very useful to check that all the Mifloras are actually working well. You can also plug in notifications in case you detect something going wrong.

Conclusion

We saw how to implement a simple but powerful automatic watering system, with on-demand irrigation for your flowers. The next article will describe how to deploy Bluetooth antennas for your system, so that you don’t rely only on the single antenna plugged into the main Raspberry. It’s important to do so as Bluetooth range is low, and a bad connection could easily ruin your Mifloras’ battery life through constant retransmissions. Even worse, depending on environmental conditions around the network, some sensors may not be seen by Jeedom at all.

Thanks !

Quick tutorial: installing Raspbian (Debian) on a Raspberry Pi (Zero)

I’ll explain here the basic configuration I use on my Debian Raspberry Pis when I use them with the Jeedom automation system. We just need a Raspberry Pi Zero, an SD Card (I use 16Gb ones, you can use a smaller one), a 1.5A USB power supply, and a Windows computer in order to download & burn the Raspbian image file onto our SD Card. Installing Jeedom is out of scope of this article, but we already discussed it in a previous article (not on a Raspberry though, but on a VM).

Get the raspbian image

We need to download & install Win32diskimager and then download the raspbian lite image (lighter, no graphical environment). Unzip the image somewhere you will easily find it.

Next, open Win32diskimager, tell it to use your unpacked Raspbian image file with the “Image file” button, and in “Device” select your SD card. Please take care to choose the right device, as it will overwrite everything! Then click on “Write” and have a coffee.

When the operation is done, a Windows popup asking you to format a drive will appear; this is normal, one of the two filesystems written on the SD Card is not recognized by Windows, as it is a Linux EXT partition. You should also see a new drive in your Explorer, called “boot”, this is also normal.

Configure the Wifi before we boot on the SD Card

We could configure the Wifi on the Raspberry after booting it, but it would require plugging it into an external display to configure it. Fortunately, we can set it up beforehand, just by putting a file on the “boot” partition; this file will be used by wpa_supplicant, the software used for Wifi connectivity.

Create a text file containing the code below, and save it as ‘wpa_supplicant.conf’ at the root of your “boot” drive, on the SD Card. You may need to adjust your country code on the first line.

country=FR
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
        ssid="My SSID"
        psk=My un-encrypted passphrase
        scan_ssid=1

With the latest Buster Raspbian release, you must ensure that the file contains the first three lines at the top.

Also, if you are using a hidden network, the extra option scan_ssid may help the connection.

About the passphrase: the password can be configured either as the ASCII representation, by using quotes as per the example above, or as a pre-encrypted 32 byte hexadecimal number. You can use the wpa_passphrase utility (on another Linux box) to generate an encrypted PSK. It takes the SSID and the passphrase as inputs, and generates the encrypted (actually hashed) PSK. With the example from above, you can generate the encrypted PSK with the following command:

wpa_passphrase your_SSID your_un-encrypted_SSID_passphrase

network={
        ssid="your_SSID"
        #psk="your_un-encrypted_SSID_passphrase"
        psk=28d0eca5ddde6c3f53331833547dfa68eb732030f5a48b8b499ab2600015d4be
}

Configure the SSH server before we boot on the SD Card

The same technique applies here: simply create an empty file called ‘ssh’ at the root of your “boot” drive, on the SD Card. No extension needed.

You can now remove the SD Card from your Windows computer and insert it in your Raspberry.

Boot the Raspberry and connect to it

Now, insert the SD Card in your Raspberry, and plug in the USB power. The green LED should blink a few times until it stabilizes. If your Wifi file is OK (network name & passphrase) you should see it connecting to your network. To find its IP, connect to your Wifi router and refresh the list of connected devices a few times until you spot the new device.

On your Wifi router, or whatever component on your network acts as a DHCP server, I strongly recommend assigning a static IP to your Raspberry: it will be much easier to administer later, and to configure in Jeedom.

Then use your favorite SSH client to connect to the raspberry’s IP. The default login and password after a fresh installation is ‘pi’ / ‘raspberry’.

First step is to change the default passwd:

passwd –> enter ‘raspberry’ (default passwd after a fresh installation –> enter your new password

Tweak the configuration

Change to the root user by using:

sudo su -

First, but this is a personal taste, I’ll install the joe text editor.

apt-get install joe

I like to make sure it will be the default editor:

update-alternatives --config editor

Then I install the locate package, to easily find files on the filesystem (by using first ‘updatedb’ to index the FS).

apt-get install locate
updatedb

Then I suggest changing the Timezone and extending the root partition to the available size on the SD Card:

raspi-config

Choose “4 Localisation Options”

Choose “I2 Change Timezone”

And select your country/town.

Then choose “7 Advanced Options”

Choose “A1 Expand FileSystem”

Choose YES when it asks to reboot.
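
If you prefer to script those two steps instead of clicking through the menus, recent raspi-config versions also expose a non-interactive mode (a hedged sketch; check that the nonint function names exist on your Raspbian release):

sudo raspi-config nonint do_change_timezone Europe/Paris    # use your own timezone
sudo raspi-config nonint do_expand_rootfs                   # expand the root partition
sudo reboot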

Then we will tweak the wifi interface a little. Reconnect by SSH, and edit the file /etc/network/interfaces to make sure it contains the lines below (if you want to use DHCP; otherwise change ‘dhcp’ to ‘static’ on the ‘iface wlan0’ line if you want to fix the IP, and ensure you have DNS servers properly configured in /etc/resolv.conf):

sudo su -
joe /etc/network/interfaces
source-directory /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
wireless-power off

Now we will make sure the default Python version used is the “good old” 2.7, while v3 stays installed and usable, system-wide:

update-alternatives --install /usr/bin/python python /usr/bin/python3.7 1
update-alternatives --install /usr/bin/python python /usr/bin/python2.7 2

As the highest number at the end of each line is the priority to be used, Python 2.7 will be used by default. Now we update a few things.

apt-get install python-pip python3-pip libglib2.0-dev
python2 -m pip install --upgrade --force-reinstall pip
python3 -m pip install --upgrade --force-reinstall pip
pip install bluepy
pip3 install bluepy
setcap cap_net_raw+e /usr/local/lib/python3.7/dist-packages/bluepy/bluepy-helper
setcap cap_net_admin+eip /usr/local/lib/python3.7/dist-packages/bluepy/bluepy-helper

Note that line 5 above (pip3 install bluepy) should report that the package is already installed:

Requirement already satisfied: bluepy in /usr/local/lib/python3.7/dist-packages (1.3.0)

I suggest you reboot with /sbin/reboot to check it’s reconnecting well – In my case it is !

Update the RPI & Raspian

sudo su -
apt-get update
apt-get full-upgrade

Then we will free some space on our root filesystem…

apt-get autoclean
apt-get autoremove
apt-get clean

Now I have a fully functional Raspberry Pi Zero running a specific Debian, connected to my wifi network, and reachable through SSH. This will be one of my bases for later articles.

Restoring Jeedom from a Debian 8.5 to a Debian 10.3 Virtual Machine

We will here install the excellent domotic/home automation system called Jeedom on a fresh new Debian 10.3 Virtual Machine, migrating from an old Debian 8.5. Actually in this post I will not only install it, but I’ll also restore a backup of my fully functional Jeedom V3 installation that used to run on Debian 8.5. I decided to upgrade by installing a completely new system, as I ran into more & more problems with Python dependencies each time I had to upgrade the packages or some Jeedom plugins.

VM Configuration

For my Virtual machines, I run vSphere 6.5 on an Intel NUC, this is perfectly supported (except the sound card but we don’t need it for this use).

Here is the configuration of my current Debian 8.5 VM:

First I’ll upload the netinstall ISO of Debian 10.3 to my vSphere datastore. Then I create a new VM with the same capacity as the old one; it ran smoothly.

Note that I’m not running the latest vSphere update… Debian 10 is not known in the dropdown list. It’s not a problem.

I’ll use my “datastore 1” datastore, it is an SSD drive.

Same settings as my Debian 8.5 VM. Note that you need to change the drop-down list selection inside the CD/DVD Drive, to use the Debian ISO file downloaded & uploaded to your vSphere datastore earlier.

Once created, here is the list of all my VMs. I made some cleanup before. Notice I’m also upgrading my gekko trading bot VM from a Debian 9.3 to a new 10.3 VM. Now let’s start our Jeedom 10.3 VM.

Debian Installation

I don’t need any graphical install… Let’s go for the text install. I won’t show every screen, but here are the basic options I chose:

  • French language, timezone, keyboard (nobody’s perfect),
  • DEBIAN10-3-JEEDOM as hostname
  • Domain: home
  • After setting the root password, I created a new ‘jeedom’ user
  • I chose LVM to partition the disk. I did not do it on my previous 8.5 install (I don’t remember if the choice existed), but it will be easier later if we need to add some disk space or change an existing one.
  • I chose to have /home, /var and /tmp on separate partitions. It is much better to prevent applications in the various home dirs from filling all your disk space, or logs in /var/log, or various stuff in /tmp
  • To be honest, after two attempts to restore Jeedom, I had to modify the LVM volumes inside the installer to delete the default home and var logical volumes, in order to recreate them and allocate 5Gb to var, as my old Jeedom database is huge and the first time it completely filled the /var partition
  • Then Next, next, next…
  • The installer will download & install the required packages
  • In my case I didn’t need a proxy server, and no desktop environment
  • BUT I DO want a web server and an SSH server

The install process will again download & install the required packages. I’ll choose to install GRUB on /dev/sda as it will be the only OS on this VM.

Now we reboot.

Then I’ll connect to the VM with the VMware console, and update the VM with:

Installing the ‘net-tools’ package will allow us to use ‘ifconfig’ to get our IP address.
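
The exact commands were only shown as a screenshot; on a fresh Debian 10 they boil down to something like:

apt-get update
apt-get full-upgrade
apt-get install net-tools       # provides ifconfig
ifconfig                        # note the VM's IP address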

Hint: this is the moment you want to define this IP address as a static one on your router so it won’t change in the future. This is a must for Jeedom, as many pieces of equipment or routines will refer to your box by its IP address.

We also want to install the VMWare tools. I won’t install the ones from my vSphere installation since it is not up to date. I’ll install the Open VM Tools.
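
They are packaged by Debian, so the installation is presumably just:

apt-get install open-vm-tools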

Notice the difference on your vSphere UI (maybe after the reboot we will do later):

We now see the IP address, and vSphere does see the VMWare tools installed. Perfect.

When prompted, press y to continue the installation; it will install a few dependencies. When it’s finished, as we installed a bunch of things, I suggest rebooting the VM by using:

Now you should be able to SSH to your VM. I’m using Putty from a W10 laptop.

Jeedom installation, Mysql tweaking, Apache2 configuration

What follows is extracted from various parts of the Debian installation process in Jeedom’s documentation (§1 & §4 + some links from it). I won’t follow everything, as the Jeedom documentation (as far as I remember) installs Jeedom in the default Debian /var/www directory, and I changed that on my previous installation: the whole Jeedom website will in fact be located in /home/jeedom (home of my jeedom user). Yes, it does require a few changes in the default httpd (Apache) conf file.
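
A sketch of those package-installation lines, reconstructed from the description that follows (verify against Jeedom’s documentation):

apt-get install sudo fail2ban ffmpeg
usermod -aG sudo jeedom        # add the jeedom user to the sudo group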

The lines above will install sudo (we then add our jeedom user, created during the VM installation, to the sudo group), fail2ban (to limit the rate of connections on open ports from the outside) and ffmpeg, as it will be needed later by Jeedom (replacement of libav-tools). Now, as I don’t want my sudoers to have to enter the root password, I’ll edit the /etc/sudoers file and change (or comment) this line:

With this one:
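
On a default Debian 10, the pair of lines in question presumably looks like this (a hedged sketch; adapt to your own security policy):

# default line, to comment out or replace:
# %sudo   ALL=(ALL:ALL) ALL
# replacement, so members of the sudo group are not prompted for a password:
%sudo   ALL=(ALL:ALL) NOPASSWD: ALL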

Now with the user jeedom you should be able to perform root actions by using (for example) ‘sudo ls -al /root’.

Then we will install Jeedom once, before restoring our backup.
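
The installation itself goes through Jeedom’s install.sh script; a hedged sketch (the download URL and branch may have changed, check Jeedom’s current documentation):

cd /tmp
wget https://raw.githubusercontent.com/jeedom/core/V4-stable/install/install.sh
chmod +x install.sh
./install.sh -w /home/jeedom/html -m <your_mysql_root_password>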

Note #1: in case you are installing Jeedom on an already existing system with a running version of mysql, you can pass arguments to the script (e.g. ./install.sh -w /home/jeedom/html -m Jeedom), as I did to directly deploy it in /home/jeedom/html and avoid a few operations to move it later.

Note #2: in case you want to reset the previous Jeedom installation steps and rerun it with a clean database & target html dir, perform the following steps:

  1. Remove the Mysql jeedom database & user
  2. Clean the /var/www/html directory OR /home/jeedom/html in case you modified WEBSERVER_HOME in the Jeedom install.sh file

Running the install script will install a bunch of packages: mysql, php, python dependencies, etc. At the end it should display a success message, and you need to note the mysql root password displayed, in case you didn’t define it by passing it as an argument to the install script as seen above.

We will now change the mysql root password, to restore the one used on our Debian 8.5 installation (hint: you can see it from your Debian 8.5 Jeedom UI, in the configuration menu, then the OS/DB tab), and give more privileges to the jeedom user (this is needed during the restore of the old Jeedom database).
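
A hedged sketch of these two operations from the mysql client (the password shown is a placeholder):

mysql -u root -p
# then, inside the mysql prompt:
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyOldDebian85Password');
GRANT ALL PRIVILEGES ON *.* TO 'jeedom'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EXIT;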

Here you need to use the root password for the Mysql Database you defined in your install script, or noted after the Jeedom installation.

Next, I’ll want to move the default mysql database files from the default dir /var/lib/mysql to my /home/jeedom HOMEDIR, where there is more space.

First we make sure about the default mysql datadir used for storing data.

We stop mysql and check it is well stopped (look for a line “Status: “MariaDB server is down”” in the systemctl status output).

We’ll copy the existing database directory to the new location with rsync. Using the -a flag preserves the permissions and other directory properties, while -v provides verbose output so you can follow the progress. And we rename the old database directory with an extension, to ensure mysql cannot use it anymore. We will remove it afterwards.
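
A sketch of the steps described in the two paragraphs above (the /home/jeedom/mysql target and the .bak suffix are my own choices for illustration):

systemctl stop mariadb
systemctl status mariadb            # look for "MariaDB server is down"
rsync -av /var/lib/mysql /home/jeedom
mv /var/lib/mysql /var/lib/mysql.bak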

Now I’ll edit the /etc/mysql/mariadb.conf.d/50-server.cnf file (Debian specific) to change the datadir directive and point it on my new location, and edit /etc/apparmor.d/tunables/alias to create a pointer between mysql old dir and new dir.
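
The two edits boil down to something like this (assuming /home/jeedom/mysql as the new location):

# /etc/mysql/mariadb.conf.d/50-server.cnf
datadir = /home/jeedom/mysql

# /etc/apparmor.d/tunables/alias
alias /var/lib/mysql/ -> /home/jeedom/mysql/,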

And we restart AppArmor.

Before restarting mysql, we need to create a basic directory structure in the old files directory, as it is checked by the default mysql startup scripts, and we need to authorize Mysql/MariaDB to run from the /home directories by adding an authorization.
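
A hedged sketch of both work-arounds (on Debian 10 the MariaDB unit ships with ProtectHome=true, which is what usually blocks access to /home; verify against your own unit file):

mkdir -p /var/lib/mysql/mysql       # minimal structure expected by the startup scripts
systemctl edit mariadb              # then add the two lines below in the override file:
# [Service]
# ProtectHome=false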

Then we need to reload systemctl configuration files.

And we restart Mysql, and check it is running well.
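
In commands, roughly:

systemctl daemon-reload
systemctl restart mariadb
systemctl status mariadb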

OK, there are some complaints about the mysql update, but it’s launched and working. Now a quick sanity check inside mysql to verify it is using the new datadir.

Perfect. We can now remove our old database directory.

Now we will connect to our fresh Jeedom UI (by using http://<our vm ip>), and go for the graphical installation (sorry my screenshots are in French). Default Jeedom login/pass is admin/admin.

Next we change the default password. I’ll reuse the one I’m using on my other Jeedom installation. And in the next screen I’ll tell Jeedom my login/pass for Jeedom’s market. If you don’t have one yet, there is a link to create one.

After validating the account, Jeedom will display its start page, very empty but that’s normal.

So, at this stage, we have a working standard Jeedom V4 installation, listening on port 80 of our VM. Fine but not enough.

Now, I’ll tweak the Apache2 configuration to use /home/jeedom/html as the default directory, enable HTTPS and use personal TLS certificates bought from Gandi.

Note that if we want to get more info in Apache2 log files, we will need to edit the /etc/apache2/apache2.conf “LogLevel warn” line to “LogLevel debug”.

In /etc/apache2/apache2.conf, I need to add the following lines:
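
They essentially authorize the new document root; a sketch matching my /home/jeedom/html choice:

<Directory /home/jeedom/html/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
</Directory>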

I check /etc/apache2/ports.conf to verify the default HTTPS listener port.

I’ll modify the ErrorLog directive in /etc/apache2/conf-available/security.conf so that it will log in /var/log/apache2, not in /var/www/html/logs.

Now I modify /etc/apache2/sites-available/000-default.conf to use /home/jeedom/html as root dir and modify the log files.

I modify /etc/apache2/sites-available/default-ssl.conf to configure our HTTPS listener with personal TLS certificates bought on Gandi for my fqdn.

In this file notice the use of a few specific files for my installation:

Those are the files I’ll later need to back up from my Debian 8.5 apache2 dir and restore into my Debian 10.3 apache2 dir.

We’ll also enable SSL and make the default-ssl.conf file loadable by Apache2 by linking it from sites-available to sites-enabled:
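
With the standard Apache helpers (a2ensite creates the symlink for you; you could also create it by hand with ln -s):

a2enmod ssl
a2ensite default-ssl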

I back up the SSL certificates used on my previous Jeedom installation, copy them to my new VM, and restore them. On the old Debian 8.5 VM:

On the new Debian10.3 VM:

Then I check Apache configuration, and I restart Apache.
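
For example:

apachectl configtest          # should answer "Syntax OK"
systemctl restart apache2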

Now I’ll check that my ports 80 and 443 are open:
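
One quick way to check:

ss -tlnp | grep -E ':(80|443) '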

The output is good:

Now let’s try to reach Jeedom UI (by using http://192.168.1.82):

And now by HTTPS (by using https://192.168.1.82):

This warning is normal as I’m reaching the UI by using the VM’s IP, not the FQDN defined in the TLS server certificate I told Apache2 to use in my default-ssl.conf file. When I accept the warning I can access Jeedom’s UI; this is perfect.

Now I’ll want to back up & restore the little script used to update my (unfortunately) dynamic public IP address on the Internet, so that I can permanently reach Jeedom from the outside, and other small stuff.

On the Debian 8.5 box:

On the Debian 10.3 box, I’ll copy it with scp, untar it, and install a required python package to make it work.

Then I need to add a crontab job to check my IP address & update it when needed:

Then add this line:

Zwave stick & Jeedom Backup

Now I will backup my Aeotec Zwave USB stick, and my Debian 8.5 Jeedom installation.

About the Aeotec ZWave stick (thanks to Nechry, an active Jeedom contributor, for his article):

  • Windows 10 will already have the driver or it will be downloadable automatically, for other OS or to get the inf files manually check here
  • you’ll need the Network Key for the stick. It will probably be the default key, but if in doubt, in my old Jeedom installation I’ll check it in resources/openzwaved/ozwave/manager_utils.py; it will be a series of 16 hex values « 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10 » which will need to be converted to « 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 » in the backup utility

On my W10 laptop, on which I never plugged this USB stick, Windows automatically configured the device, and then I launch the zwave_500.exe utility downloaded from Aeotec’s website.

I can check the stick is well detected because it displays a COM port in the title bar, and a “Load Zstick Success” in the status bar at the bottom of the window.

Then I need to check « Enable Security » and enter my network/security key and click “set”.

Then I click on “Read Eeprom” button and choose where to save the backup file. And get a coffee …

Jeedom backup

Next I need to save my whole functional Jeedom, through its integrated backup functionality.

Jeedom restoration

First, I’ll go back to the vSphere administration UI to reallocate my Aeotec Z-Stick and Bluetooth module to the new VM. This is really simple: just unallocate them from the old Debian 8.5 VM, and reallocate them as new USB devices to the new VM.

I copy the backup on my Debian 10.3 Box, in the standard backup directory of Jeedom.

Then I’ll restore it through my new Jeedom’s UI (sorry, this is a Jeedom v3 configuration > backups UI screenshot, I forgot to take the one from the fresh V4… but it’s almost the same).

It will be a long task, and the UI may not update well as files will be deleted and restored. You should be able to follow the restoration process through the internal Jeedom log file:

OK, so obviously there was at least 1 error, but the result is still OK… I don’t know if this is a big problem or not. Let’s try to reboot the VM and access Jeedom’s UI; first I’ll shut off the old Jeedom on my Debian 8.5.

After a sanity reboot, I reconnect to Jeedom’s UI and it is running, I can see my dashboard.

The problems I quickly found are:

  1. Check Jeedom > Health menu, it will probably complain about the external network configuration. This is normal, my backup restored the network config made to be reached from the outside, but it’s still pointing on my old VM. I’ll need to assign the IP address of my old VM to the new one on my DHCP server (in my case my internet box) OR modify the external & Internal IP in Jeedom.
  2. In Jeedom > Update Center, it complains that I have too much Jeedom’s declared on the Jeedom market. This is normal i’m using the “free” version, and the same login to access the market. I just connect on the Market website, check my declared boxes, and remove the old one.
  3. Some modules were disabled. Not important ones (forecast.io plugin to display weather, etc.) but they are disabled and can’t manage to get them back.
  4. You need to reinstall the dependancies for all your modules that need some in their configuration, this is VERY important: Bluetooth Advertisement, ZWave, KRoomba, etc. By using this Debnian 10.3 installation, I had absolutely NO problem with dependencies building, the modules were working well after.

Other than that, all my scenarios and settings were perfectly restored… I spent a few days making this tutorial and finding the right VM parameters & restoration process, but it was definitely needed for the health of my home automation.

Hope it can help some people.

Launching Gekkoga on a high-end EC2 Spot machine

So now that we know how to launch an EC2 instance from an Amazon EC2 AMI with batched gekko/gekkoga app/conf deployment, we want to learn how to use it on a machine with better CPU sizing, at a good price (Amazon EC2 Spot feature), so that we can -basically- bruteforce all possible parameters and inputs of a given trading strategy, using Gekkoga’s genetic algorithm.

The main documentation we will use:

As explained in first Amazon’s documentation, we first create a new role AWSServiceRoleForEC2Spot in our AWS web console. This is just a few clicks, please read their doc.

Handling Amazon’s VMs automatic shutdown for Spot instances

Next we need to take care of Amazon’s automatic shutdown of Spot instances, as it depends on the market price, or on the fixed duration of usage you specified in your instantiation request. When Amazon decides to shut down an instance, it sends a technical notification to the VM, which we can watch and query using a URL endpoint. Yes, it means that we will need to embed a new script on our EC2 VM & Reference AMI, to handle such a shutdown event and make it execute appropriate actions before the VM is stopped (Amazon announces a 2-minute delay between the notification and the effective shutdown, this is short).

The way I chose to do it (but there are others) is to launch a ‘backgrounded’, recurrent, and permanently deployed script at VM boot through our already modified rc.local; this script will poll the appropriate metadata every 5 seconds using a curl call. If the right metadata is provided by Amazon, we then execute a customized, specific shutdown script which needs to be embedded in the customized package our VM automatically downloads from our reference server at boot time.

So, as you saw in the previous article, we insert these 3 lines just before the last “exit 0” instruction in our /etc/rc.local file:
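
Something along these lines (a sketch; the log path under the ec2-user home is my own assumption):

# Spot termination watcher (keep these lines just before the final "exit 0")
mkdir -p /home/ec2-user/AWS/logs
nohup /etc/rc.termination.handling >> /home/ec2-user/AWS/logs/termination.log 2>&1 &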

Then we create our /etc/rc.termination.handling script, based on indications from Amazon’s documentation:
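
A hedged sketch of such a script: it polls the Spot interruption metadata every 5 seconds and, when the notice appears, runs whatever backup/upload actions you embedded in your package (the backup_results.sh name is purely illustrative):

#!/bin/bash
# /etc/rc.termination.handling -- sketch, adapt paths and the backup action to your setup
URL="http://169.254.169.254/latest/meta-data/spot/instance-action"
while true; do
    CODE=$(curl -s -o /dev/null -w '%{http_code}' "$URL")
    if [ "$CODE" = "200" ]; then
        echo "$(date) termination announced, uploading the latest results"
        su - ec2-user -c "/home/ec2-user/AWS/backup_results.sh"
        exit 0
    fi
    sleep 5
done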

We make it executable:

We will now test if this is working. The only thing we won’t be able to test right now is the real URL endpoint with terminations informations. First, we reboot our EC2 reference VM and we verify that our rc.termination.handling script is running in the background:

Now we will test its execution, but we need to slightly change its trigger as our VM is not yet a Spot instance and the URL we check won’t embed any termination information, therefore it will return a 404 error. I also disabled the loop so that it will just execute once.

We manually execute it, and we check the output log in $HOME/AWS/logs:

Now we check on our Reference server @home if the results were uploaded by the EC2 VM.

That’s perfect !

Don’t forget to cancel the modifications you made previously in rc.termination.handling to test everything.

Checking Spot Instances price

First we need to know what kind of VM we want, and then we need to check the Spot price trends to decide for a price.

For my first test, I will choose a c5.2xlarge, it embeds 8 vCPU and 16Gb of memory. Should be enough to launch Gekkoga with 7 concurrent threads.

Then we check the price trends and we see that the basic market price is -at the moment- around $0.14, this will be our base price in our request as we just want to test for now.

It is also interesting to look at the whole price trend over a few months, and we can see it actually increased a lot. Maybe we could get better machines for the same price.

Let’s check the price for the c5.4xlarge :

Conclusion: for $0.01 more, we can have a c5.4xlarge with 16 vCPU and 32Gb of RAM instead of a c5.2xlarge with 8 vCPU and 16Gb of RAM. Let’s go for it.

Requesting a “one-time” Spot Instance from AWS CLI

On our Reference server @home, we will next use this AWS CLI command (I’ve embedded it in a shell script called start_aws.sh). For details on the json file to provide with the request see Amazon’s documentation.
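
The request itself looks roughly like this (a sketch; --block-duration-minutes is the fixed-duration option used for this first test, check the current CLI documentation for its availability):

aws ec2 request-spot-instances \
    --dry-run \
    --instance-count 1 \
    --type "one-time" \
    --block-duration-minutes 60 \
    --launch-specification file://$HOME/AWS/spot_specification.json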

Note:

  • The --dry-run parameter: it asks the CLI to simulate the request and display any error, instead of really trying to launch a Spot instance.
  • This time, for my first test, I used a “one-time” VM with a fixed execution duration, to make sure I know how long it will run; therefore the price is not the same as the one we saw above! It is higher.
  • Once our test is successful, we will use “regular” VMs with prices we can “decide” depending on the market, but also with a run time we can’t anticipate (it may be stopped anytime by Amazon if our calling price becomes lower than the market price; otherwise you will have to stop it yourself).

Then we create a $HOME/AWS/spot_specification.json file and we use appropriate data, especially our latest AMI reference:
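
A hedged template (every ID below is a placeholder to replace with your own AMI, key pair, security group, subnet and IAM profile):

cat > $HOME/AWS/spot_specification.json <<'EOF'
{
    "ImageId": "ami-xxxxxxxxxxxxxxxxx",
    "KeyName": "gekko",
    "InstanceType": "c5.4xlarge",
    "SecurityGroupIds": [ "sg-xxxxxxxx" ],
    "SubnetId": "subnet-xxxxxxxx",
    "IamInstanceProfile": { "Name": "my-ec2-admin-role" }
}
EOF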

We try the above aws cli command line to simulate a request …

Seems all good. Let’s remove the --dry-run and launch it.

On the EC2 Web console we can see our request, and it is already active, it means the VM was launched !

Now in the main dashboard we can see it running, and we get its IP (we could do it via AWS CLI also):
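
For example, via the CLI (the instance ID is a placeholder):

aws ec2 describe-instances --instance-ids i-xxxxxxxxxxxxxxxxx \
    --query 'Reservations[].Instances[].PublicIpAddress' --output text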

Let’s SSH to it and check the running process:

It seems to be running… And actually you can’t see this, but it is running well, as I keep receiving emails with new “top results” found by Gekkoga, since I activated the email notifications. This is also a good way to make sure you back up the latest optimum parameters found (once again: this is a backtest and in no way does it mean those parameters are good for the live market).

Let’s check the logs:

It’s running well! Now I’ll just wait one hour to check that our termination detection process works well and backs up the results to our Reference server @home.

While we are waiting though… let’s have a look in the logs at the computation time:

Epoch #13: 87.718s. So, it depends on the epoch and computations requested by the Strategy, but last time we had a look at it, epoch #1 took 1280.646s to complete, for the same Strat we customized as an exercise.

One hour later, I managed to come back 2 minutes before the estimated termination time and I could manually check with curl that the termination metadata was sent by Amazon to the VM.

Our termination script performed well and uploaded the data to my Reference server @home, both in the ~/AWS/<timestamp>_results dir and in the gekkoga/results dir, so that it will be reused at Gekkoga’s next launch.

Using a Spot instance with a “called” market price

So we previously used a “one-time” spot instance with a fixed duration, which means we ensured it would run for the specified duration, but at a higher and fixed price. Because we managed to back up the data before its termination, and because Gekkoga knows how to reuse it, we will now use a lower-priced Spot instance, but with no guarantee it will run for long.

Let’s modify our AWS cli request .. We will test a c5.18xlarge (72 CPU, 144Gb of RAM) at a market price of $0.73 per hour.

Amazon started it almost immediately. Let’s SSH to it and check the number of processes launched.

So… This is quite interesting, because parallelqueries in gekkoga/config/config-MyMACD-backtester.js is indeed set to 71, BUT it seems the maximum number of node processes launched caps at 23. I don’t know why! But until we find out, it means there is no need to launch a 72 CPU VM. A 16 or 32 CPU one may be enough for now.

It seems it is the populationAmt which limits the number of threads launched. Setting it to 70, still with parallelqueries at 71, makes the number of threads increase and stabilize around 70, but with some periods down to 15 threads. It would be interesting to graph it & study it. Maybe there is also a bottleneck on Gekko’s UI/API, which has to handle a lot of connections from all Gekkoga’s backtesting threads.

Now, I’ll need to start reading a little bit more literature on this subject to find a good tweak… Anyway, right now I’m using 140 for populationAmt and I still have some “low peaks” down to 15 concurrent threads for nodejs.

Result

After 12 hours without any interruption (so, total cost = $0.73*12 = $8.76), those are all the mails I received for this first test with our previously customized MyMACD Strategy.

As you can see if the screenshot is not too small, on this data from the past, with this strategy, the profit is much better with long candles and long signals.

Again, this needs to be challenged on smaller datasets, especially “stalling” markets, or bullish markets as nowadays. This setup and automation of tests will not guarantee you, in any way, to earn money.

A few notes after more tests

Memory usage:

  • I tried several kinds of machines; right now I’m using a c4.8xlarge which still has good CPUs but less RAM than the c5 family. And I started to test another customized Strat. I encountered a few crashes.
    • I initially thought it was because of the CPU usage capping at 100% as I increased the number of parallelqueries and populationAmt. I had to cancel my spot requests to kill the VMs.
    • Using the EC2 Console, I checked the console logs, and I could clearly see some OOM (Out of Memory) errors just before the crash.
    • I went into my Strat code and tried to simplify everything I could, by using ‘let’ declarations instead of ‘var’ (to reduce the scope of some variables), and managed to remove one or two variables I could handle differently. I also commented out every condition displaying logs, as I like to have logs in my console when Gekko is trading live. But for backtesting, avoid it. No logs at all, and reduce every condition you can.
  • I also reduced the max_old_space_size parameter to 4096 in gekko/gekkoga/start_gekkoga.sh. It has a direct impact on Node’s Garbage Collector: it will make the GC collect the dust twice as often as with the 8096 I previously configured.
  • Since those two changes, I’m running a session on a c4.8xlarge for a few hours, using 34 parallelqueries vs 36 vCPUs. The CPUs are permanently 85% busy, which seems good to me.

Improvements: in the next article I will detail two small changes I made to my Reference AMI:

  • The VM will send me a mail when it starts, with its details (IP, hostname, etc.) or when a termination is announced
  • I added a file monitoring utility to detect “live” any change in Gekkoga’s result directory, and upload it immediately to my Reference Server @home. I had to do this because I noticed that when you ask Amazon to cancel a Spot request with a running VM associated to it, it immediately kills the VM without an announced termination, so previous results were not synced to my Home server (but I had the details of the Strat configuration by email).

Also, important things to remember:

  • Amazon EC2 will launch your Spot Instance when the maximum price you specified in your request exceeds the Spot price and capacity is available in Amazon’s cloud. The Spot Instance will run until it is interrupted or you terminate it yourself.  If your maximum price is exactly equal to the Spot price, there is a chance that your Spot Instance remains running, depending on demand.
  • You can’t change the parameters of your Spot Instance request, including your maximum price, after you’ve submitted the request. But you can cancel them if their status is either open or active.
  • Before you launch any request, you must decide on your maximum price, and what instance type to use. To review Spot price trends, see Amazon’s Spot Instance Pricing History
  • For our usage, you should request “one-time request” instances, not “persistent” requests (we only used it for testing), which means that you need to embed a way in your EC2 VM to give you feedback about the latest optimized parameters found for your Strat (by email for example, or by tweaking Gekkoga to send live results (note for later: TODO))

And remember: nothing is free, you will be charged for this service, and there is NO GUARANTEE that you will earn money after your tests.

v2 – How to create an Amazon EC2 “small” VM and automate Gekko’s deployment

Note (18/02/2019): this is an updated version of the initial post about automating the launch of an Amazon EC2 Instance.

We tried Gekkoga’s backtesting and noticed it is a CPU drainer. I never used Amazon EC2 and its ability to quickly deploy servers, but I was curious to test, as it could make a perfect fit for our needs: on-demand renting of high capacity servers, by using Amazon’s “Spot instance” feature. Beware, on EC2 only the smallest VM can be used for free (almost). The servers I would like to use are not free.

Our first step is to learn how to create an Amazon EC2 VM, and to deploy our basic software on it. Then we will manage the automatic deployment of all packages we need to make Gekko & Gekkoga run and automatically start with the Strat we want to test. We will test this on a small VM -the t2.micro- and using the standard AMI (Amazon Machine Image, the OS) “Amazon Linux 2”.

Once this step is complete, we will make a new AMI based on the one we deployed, including custom software and part of its configuration.

Next we will try to automate in a simple batch file the request, ordering, and execution of a new instance based on our customized AMI, with Gekkoga automatic launching & results gathering. This batch file would be used from my own personal/home gekko server that I use to modify and quickly test new Strats.

Launching a new free Amazon EC2 t2.micro test VM

I won’t explain everything here. First you need to create an account, and yes, you will need to enter some credit card info: most of the services can be used for free at the beginning, but some of them will charge a few cents when used (e.g. map an Elastic IP to a VM and release it: when it is not in use, you are charged; it’s cheap, but you will be charged. Also, you are only allowed your free small VM for a few hours, so you need to stop it as soon as you can and start it only when you need it, this is Amazon’s “on-demand” policy, like it or don’t use it :) ).

Then we choose the AMI and then the smallest VM available, as it is allowed in the “free” Amazon package.

At the bottom of the page, click “Next: configure instance details”. On the following page, you can use all default values, but check:

  • The Purchasing option: you can ask for a Spot Instance, this is Amazon’s market place to request your VM to run at a fixed price you provide, assuming Amazon has free resources and will allow your VM to run at that price (it needs to be higher than the demand)
  • The Advanced Details at the bottom.

The User data field is a place where we can provide a shell script which will be executed at boot by the VM. As the VM can sometimes be started when Amazon detects it should be (e.g. Spot instances), this is a very nice place to automatically make your instance download some specific configuration stuff when it boots, for example our Gekko strats and conf files, to automagically launch our backtests. We will try this later (I have not tried it yet myself at the moment I’m writing this, but it is well documented by Amazon).

Next we want to configure the storage, as Amazon allows us to use 30Gb on the free VMs instead of the default 8Gb.

Next, I will add a tag explaining the purpose of this VM and storage (not sure about its exact future utility yet but whatever …).

Next, we configure a security group. As I already played a little bit with another VM I created a customized Security Group which allows ports 22 (SSH), 80 (HTTP) and 443 (HTTPS). I choose it but you will be able to do that later and map your own security group to your VM.

Next screen is a global review before VM creation and launching by Amazon. I won’t copy/paste it, but click on Launch at the bottom.

Next is a CRITICAL STEP. Amazon will create some SSH keys that you need to store and use to connect to the VM through SSH. Do not lose them. You will be able to use the exact same key for other VMs you may want to create, so one key can match all your VMs.

As I already generated one for my other VM (called gekko), I reuse it.

And next is a simple status page explaining the instance is launching and linking to a few documentation pages, that you should of course read.

Now when we click on “View instance” we are redirected on EC2 console (you will use it a lot) and we can see that our new instance is launched, and its name is the tag we defined earlier during setup (you also see my other VM, stopped).

Next we will connect to the VM shell by SSH. On my laptop running W10 I’ll use putty. I assume you downloaded your private key. With putty the PEM file needs to be converted using PuttyGen to generate a .ppk file it will be able to use.

You’ll also need to grab the public IPv4 address from EC2 console, by clicking on your instance and copying the appropriate field.

Now in Putty you just have to save a session with your private .ppk key configured and ec2-user@<public IPv4 hostname grabbed from the console> as the host. Keep in mind that this hostname and the associated IP could change. If you can’t connect to your VM anymore, the first thing to do is to check its hostname on your EC2 console.

We launch the session. Putty will ask you if you want to trust the host, click Yes.

Woohoo ! we are connected ! This was fast and simple.

Updating the VM & deploying our software

OK, so now we need to deploy all the basic things we saw in previous posts, but also more, like Nginx to protect the access to Gekko’s UI. Later we will have to implement a way for the VM to automagically download updated Strats to run.

The goal is to deploy all we need to launch a functional Gekkoga VM, and then we will create a customized AMI to be reused on a better VM specialized in CPU computations. Note that EC2 can also supply VMs with specific hardware like GPUs if you need to run software able to offload computation to GPU cards; this is unfortunately not our case here, but it might be someday, as I would like to start experimenting with AI.

I won’t explain everything below, this can be put in a shell script, and you can use the links to my blog to download a few standard things not compromising security, but there are some private parts that you will need to tweak by yourself, especially the ssh connection to my home servers of course.

All the steps below do not require manual operations but some are customized for my own need, read the comments.

First we update the VM and deploy generic stuff.

Next we deploy NGinx which will act as a Reverse Proxy to authenticate requests made to Gekko’s UI.
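
A hedged sketch of this part on Amazon Linux 2 (package names, the FQDN, certificate paths and the Gekko UI port 3000 are assumptions to adapt):

sudo amazon-linux-extras install -y nginx1
sudo yum install -y httpd-tools                      # provides htpasswd
sudo htpasswd -c /etc/nginx/.htpasswd gekko          # create the UI user/password

# /etc/nginx/conf.d/gekko.conf -- minimal reverse proxy with basic auth
server {
    listen 443 ssl;
    server_name my.gekko.example.com;
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    location / {
        proxy_pass http://127.0.0.1:3000;            # Gekko UI default port
    }
}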

Now we need to define some very customized stuff. I won’t explain everything, as this article is not a complete how-to; you need some sysadmin knowledge.

  • Create a user/passwd to be used by Nginx reverse proxy
  • To automate downloading stuff from our home server using scp, or launching actions on our home server through ssh (to automatically make a tarball of our gekko strats, for example, before downloading them), we will need to import our home server user’s SSH key into /home/ec2-user/.ssh/, and don’t forget to change its permissions with chmod 600

This is an example of what you could do once your reference server’s ssh key was successfully imported on your EC2 instance:
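
For instance (hostname, user and paths are purely illustrative):

# ask the home server to build a tarball of the strategies, then fetch it
ssh myuser@myhome.example.com "tar czf /tmp/gekko_strats.tgz -C ~/gekko strategies"
scp myuser@myhome.example.com:/tmp/gekko_strats.tgz /home/ec2-user/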

Now we just need to launch nginx… and save the pm2 sessions so that they will be relaunched at boot.
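
Something like:

sudo systemctl enable --now nginx     # start Nginx now and at every boot
pm2 save                              # remember the current pm2 process list
pm2 startup                           # prints the command to run so pm2 restores it at boot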

Testing the VM

If everything was OK -And yes I know a lot of parts could have been wrong for you, but for me at the moment I was testing it it was OK- you should be able to launch your favorite web browser and target https://<Your VM FQDN> and see a login prompt. You need to enter the login/password you defined in /etc/nginx/.htpasswd

You should now see this …

My test dataset was correctly downloaded, it is well detected by Gekko. I will just give it a little update by asking gekko to download data from the 2019-01-07 22:30 to now and then upload it back on my reference server at home.

Next, let’s give a try to the strats we downloaded from our reference server at home …

All is running well …

We now have a good base to clone the AMI and make it a template for higher-end VMs. We will need to make it:

  • Able to download up-to-date data from markets
  • Able to download up to date strats from our reference server@home
  • Launch one particular Gekkoga startup script
  • Make it upload or send the data somewhere

Please remember to stop your VM either from command line or from Amazon EC2 console so that it won’t drain all your “free” uptime credits !

Playing with AWS CLI

First, we need to install AWS CLI (Amazon Command Line Interface). On my server I had to install pip for Python.

Now we can install AWS CLI using pip, as explained in Amazon’s documentation. The --user flag will install it in your $HOME.

We add the local AWS binary directory to our user PATH so that we can launch it without having to use its full path. I’m using Debian so I’ll add it in .profile
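
In practice (adapt to your own pip and shell setup):

pip install awscli --upgrade --user
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.profile
. ~/.profile
aws --version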

Now, we need to create some IAM Admin User & Groups from our EC2 console, to be able to use AWS CLI. Please follow Amazon’s documentation “Creating an Administrator IAM User and Group (Console)“. Basically you will create a Group, a Security Policy, and an Administrator user. At the end, you must obtain and use an Access Key ID and a Secret Access Key for your Administrator user. If you lose them, you won’t be able to retrieve those keys, but you will be able to create new ones for this user (and propagate the change to every system using them). So keep them safe.

Then we will use those keys on our VM, and on our home/reference server from which we want to control our instances. You can specify the region that Amazon attributed to you also if you want (hint: do not use the letter at the end of the region, eg. if your VM is running in us-east-2c, enter ‘us-east-2’).
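
The keys and region are entered once with:

aws configure
# AWS Access Key ID [None]: AKIA................
# AWS Secret Access Key [None]: ....................
# Default region name [None]: us-east-2
# Default output format [None]: json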

Let’s test it with a few examples I got in the docs (rough sketches follow the list below):

  • Fetch a JSON list of all our instances, with a few key/value requested:
  • Stopping an instance
  • Starting an instance
  • Ask the public IP of our running VM (we need to know its InstanceID):
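
Hedged sketches of those four calls (instance IDs are placeholders):

# list instances with a few selected fields
aws ec2 describe-instances \
    --query 'Reservations[].Instances[].{ID:InstanceId,State:State.Name,IP:PublicIpAddress}'

# stop / start a given instance
aws ec2 stop-instances  --instance-ids i-xxxxxxxxxxxxxxxxx
aws ec2 start-instances --instance-ids i-xxxxxxxxxxxxxxxxx

# public IP of our running VM
aws ec2 describe-instances --instance-ids i-xxxxxxxxxxxxxxxxx \
    --query 'Reservations[].Instances[].PublicIpAddress' --output text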

To send remote commands to be executed on a specific VM, you will need to create a new IAM role in your EC2 Console, and make your VM use it, so that your remote calls will be authorized.

Give your VM an IAM role with the Administrator Group you defined before, in which there is also the Administrator user whose keys we are using with AWS CLI. Now we should be able to access the VM, send it information and request data; rough sketches of these calls follow the list below.

  • To make the vm execute ‘ifconfig’:
  • To check the output we use the commandID in another request:
  • And … -took from the doc, I just added the jq at the end-, If we want to combine both queries:
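
Sketches of those calls (they rely on the SSM agent shipped with Amazon Linux 2; IDs are placeholders):

# execute 'ifconfig' remotely
aws ssm send-command --document-name "AWS-RunShellScript" \
    --instance-ids "i-xxxxxxxxxxxxxxxxx" --parameters 'commands=["ifconfig"]'

# fetch the output using the CommandId returned above
aws ssm get-command-invocation --command-id "<command-id>" --instance-id "i-xxxxxxxxxxxxxxxxx"

# combined version, extracting the CommandId with jq
CMDID=$(aws ssm send-command --document-name "AWS-RunShellScript" \
    --instance-ids "i-xxxxxxxxxxxxxxxxx" --parameters 'commands=["ifconfig"]' \
    | jq -r '.Command.CommandId')
aws ssm get-command-invocation --command-id "$CMDID" --instance-id "i-xxxxxxxxxxxxxxxxx"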

Making a new AMI from our base VM & instantiate it

Creating a new AMI

First we stop our VM.

Now in EC2 Console we will create a new AMI from our instance.

By default the images you create are private, you can change it if you want and share your AMI to the region you are using in Amazon’s cloud.

The real deployment scenario

We will request an instance creation, the launch and stop of our VM remotely, from a remote server or workstation.

When a new instance is created, we would like it to automatically execute a script at boot, using user-data, for example to download fresh data from our reference server. User-data is nothing more than a shell script which is executed once. As you can see by clicking on the previous links, this is pretty well documented by Amazon. User-data is only executed at the very first boot of your newly created instance, not at subsequent boots.

Therefore, we will also need to include something else to make our instance execute some stuff each time it boots: we will use a basic /etc/rc.local script which will use rsync to download our whole Gekko installation directory from our reference server, tweak it a little bit, and then launch Gekkoga.

We will also need a background script to carefully monitor Amazon’s indicators about the incoming shutdown of our instance. Spot instances are automatically shut down by Amazon, and there is at most a 2-minute delay after the announcement. This will be detailed in the next article.

The whole process is:

Instantiating & executing actions at first boot

We want to tell the new instance of this image to execute a shell script at its very first boot. This could be very useful later. First we will create this script on our local reference server and put a few commands in it, but also activate logging on the VM (outputs will be available both to the /var/log/user-data.log and to /dev/console).

I create a script called 0.user_data.sh in a $HOME/AWS directory on my reference server, and put this inside:
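
A sketch of what such a user-data script can contain; the exec line is the logging trick from Amazon’s documentation:

#!/bin/bash
# send all output both to /var/log/user-data.log and to the console
exec > >(tee /var/log/user-data.log | logger -t user-data -s 2>/dev/console) 2>&1
echo "First boot of this instance: $(date)"
# one-shot actions for the very first boot go here (e.g. fetch something from the reference server)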

We request the creation & launch of a new instance based on our Image ID. Note that I use the name of the key I defined earlier (gekko), I used the same subnet as my previous VM (don’t really know if that is mandatory, have to test), the security group ID can be checked on EC2 console “Security Groupes” menu, and we also specify what IAM role we want to allow to control the VM with AWS CLI (you created it earlier as it was mandatory for some CLI commands to run).

Also note that we pass our previously created 0.user_data.sh bash script as a parameter: its content will be transmitted to Amazon, which will have it executed at the first boot of the instance. If you want anything to be performed at the very first boot, just add it to this script.
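
The request looks roughly like this (all IDs are placeholders; key name, subnet, security group and IAM role are the ones discussed above):

aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --count 1 \
    --instance-type t2.micro \
    --key-name gekko \
    --subnet-id subnet-xxxxxxxx \
    --security-group-ids sg-xxxxxxxx \
    --iam-instance-profile Name=my-ec2-admin-role \
    --user-data file://$HOME/AWS/0.user_data.sh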

Our new InstanceId is i-0c6d1148adebf33c3. From the EC2 console I can see it is launched. I want to check if my user-data script was executed.

This is quite good ! I also double checked on my reference server if I could see incoming ssh connections by adding an ssh execution + scp downloading request command to the script, and it’s ok: 2 connections as expected (one for the ssh, the other one for the scp).

We have a working “first time script” that the VM will execute upon its instantiation, and that we could customize later on to perform one-shot specific actions. Now, we want our VM to connect to our reference server at each boot, to make it prepare a package, then download it, then untar it, and execute a start.sh script that may be embedded inside.

Automatically download a Gekko/Gekkoga installation, tweak it, launch it, at each boot

First, on our EC2 reference VM (the one from which we created a new AMI so yes either we will have to later on create a new AMI, or you can perform this step while you are still preparing the first AMI), we will perform this:

Then we will edit /etc/rc.local (which is a symlink to /etc/rc.d/rc.local) and add this:
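
The exact content is personal, but a hedged sketch of what it does, matching the comments below (hostnames and paths are placeholders):

# --- added to /etc/rc.local, before the final "exit 0" ---
LOG=/home/ec2-user/AWS/logs/$(date +%Y%m%d_%H%M%S)_package.log
{
  # pull the whole Gekko installation from the reference server @home
  su - ec2-user -c "rsync -az myuser@myhome.example.com:gekko/ /home/ec2-user/gekko/"
  # rebuild native dependencies (see the sqlite remark below)
  su - ec2-user -c "cd /home/ec2-user/gekko && npm rebuild"
  # the UIconfig files are adapted here for the Nginx reverse proxy, then Nginx is restarted
  systemctl restart nginx
  # launch Gekko's UI and Gekkoga
  su - ec2-user -c "/home/ec2-user/gekko/start_ui.sh"
  su - ec2-user -c "/home/ec2-user/gekko/gekkoga/start_gekkoga.sh"
} >> "$LOG" 2>&1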

A few comments:

  • During my tests I encountered a problem with sqlite, which seems linked to the type of platform used. To avoid this, I automatically rebuild the dependencies after the rsync synchronization
  • I update the UIConfig files as on the EC2 instance I use NGinx, which I don’t use on my reference server @home
  • I added a line to restart Nginx as I noticed that I had to relaunch it manually before I could access Gekko’s UI. I didn’t investigate further to understand why, maybe later.
  • As some of you may have noticed, we are syncing a remote Gekko installation into a local $HOME/gekko one. Therefore we need to delete the Gekko installation we previously made on our EC2 instance. It was just deployed to test it 🙂

On our Reference server @home:

  • We create a $HOME/gekko/start_ui.sh script which contains, if this is not already the case:
  • We create a $HOME/gekko/gekkoga/start_gekkoga.sh which contains (a sketch is given after the remarks below):

Remarks:

  • You shouldn’t have to modify the start_ui.sh script.
  • In the start_gekkoga.sh script:
    • You should only have to modify the TOLAUNCH variable, and pass it the name of the Gekkoga config file to be used, that’s all.
    • I wanted to keep one vCPU free to handle synchronization stuff, or other tasks required by the OS, so I dynamically check the number of CPU on the machine, reduce it by 1, and change the appropriate line in Gekkoga’s config file.
    • This has a side effect: on a 1 CPU machine, and this is the case of the smaller EC2s VMs, it will become “0”, and Gekkoga will fail to start, but pm2 will keep trying to relaunch it. This is why I added the pm2 “–no-autorestart” option to this script on the last line.

We reboot our EC2 reference instance:

After a few seconds, we check the rc.local log on our EC2 instance to confirm that our reference package was downloaded from our reference server. In rc.local, we redirected the logs to $HOME/AWS/logs/<date>_package.log:
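For example (the exact file name depends on the date format used in rc.local):

ls -lrt $HOME/AWS/logs/
tail -n 50 $HOME/AWS/logs/<date>_package.log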

Seems all good. Let’s check pm2’s status:
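The pm2 list command gives an overview of the managed processes and their state:

pm2 list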

Gekkoga’s error is probably normal, as we requested it to run with 0 parallel queries… Let’s check its logs:
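For example, assuming the pm2 process is named gekkoga:

pm2 logs gekkoga --lines 30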



And yes, after a quick test with parallelqueries set to 0 on my reference server @home, I can confirm that this error is raised in that case. Good!

One more thing, let’s check if Gekko’s UI is remotely reachable:
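A quick way to test it from any machine, assuming Nginx serves the UI on port 80:

curl -I http://<public IPv4 hostname of the EC2 instance>/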

Seems perfect!

Now, before we make a new version of our reference AMI (it won’t be the last one :)), I will:

  • Do some cleanup in AWS/logs and AWS/logs/old, but also (on my reference server @home) in gekko/history, gekko/strategies, gekko/gekkoga/config and gekko/gekkoga/results, as I made a lot of tests.
  • Add a shell script to automatically update the dynamic DNS entry handling my Gekko EC2 FQDN. As it is 99% personal, I won’t detail it here. What it does is check the external IP of the machine, compare it with the last known one, and if it has changed, update the A record of the FQDN on the DNS server.

To create a new AMI from your reference VM, you know the procedure: we already did it above, as well as instantiating it through the AWS CLI installed somewhere.
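As a reminder, the CLI equivalent is roughly the following (the instance ID and the names are placeholders):

aws ec2 create-image \
    --instance-id i-xxxxxxxxxxxxxxxxx \
    --name "gekko-reference-v2" \
    --description "Gekko/Gekkoga reference AMI"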

The next step will be to try to launch a Gekkoga backtesting session by instantiating our AMI on a much more powerful VM in terms of CPU and memory.

But be warned, this will be charged by Amazon!

This will be next article’s topic.

How to create an Amazon EC2 “small” VM and automate Gekko’s deployment

Note (18/02/2019): a simpler deployment process is currently being written up.

We tried Gekkoga’s backtesting and noticed it is a CPU drainer. I had never used Amazon EC2 and its ability to quickly deploy servers, but I was curious to try it, as it could be a perfect fit for our needs: on-demand renting of high-capacity servers, using Amazon’s “Spot Instance” feature. Beware: on EC2, only the smallest VM can be used (almost) for free. The servers I would like to use are not free.

Our first step is to manage the automatic deployment of all the packages we need to make Gekko & Gekkoga run and automatically start with the Strat we want to test. I want a one-command process. We will test this on a small VM, the t2.micro, using the standard AMI (Amazon Machine Image, i.e. the OS image) “Amazon Linux 2”.

Once this step is complete, we will make a new AMI based on the one we deployed, including custom software and part of its configuration.

Next we will try to automate, in a simple batch file, the request, ordering and execution of a new instance based on our customized AMI, with automatic Gekkoga launching and results gathering. This batch file would be run from my own personal/home Gekko server, which I use to modify and quickly test new Strats.

Launching a new free Amazon EC2 t2.micro test VM

I won’t explain everything here. First you need to create an account, and yes, you will need to enter some credit card info: most of the services can be used for free at the beginning, but some of them will charge a few cents when used (e.g. if you map an Elastic IP to a VM and then release it, you are charged while it is not in use; it’s cheap, but you will be charged). Also, your free small VM is only allowed for a limited number of hours, so you need to stop it as soon as you can and run it only when you need it; this is Amazon’s “on-demand” policy, like it or don’t use it :)

Then we choose the AMI, and then the smallest VM available, as it is the one allowed in Amazon’s “free” package.

At the bottom of the page, click “Next: configure instance details”. On the following page, you can use all default values, but check:

  • The Purchasing option: you can ask for a Spot Instance. This is Amazon’s marketplace where you request your VM to run at a fixed price you provide, assuming Amazon has free resources and allows your VM to run at that price (your price needs to be higher than the current demand).
  • The Advanced Details at the bottom.

The User data field is a place where we can provide a shell script which will be executed at boot by the VM. As the VM can sometimes be started whenever Amazon decides it should be (e.g. Spot Instances), this is a very handy place to make your instance automatically download some specific configuration when it boots, for example our Gekko strats and config files, in order to automagically launch our backtests. We will try this later (I have not tried it myself yet at the time of writing, but it is well documented by Amazon).

Next we want to configure the storage, as Amazon allows us to use 30GB on the free VMs instead of the default 8GB.

Next, I will add a tag explaining the purpose of this VM and storage (not sure about its exact future utility yet but whatever …).

Next, we configure a security group. As I had already played a little bit with another VM, I had created a customized Security Group which allows ports 22 (SSH), 80 (HTTP) and 443 (HTTPS). I choose it, but you will be able to do that later and map your own security group to your VM.

The next screen is a global review before the VM is created and launched by Amazon. I won’t copy/paste it; just click Launch at the bottom.

Next is a CRITICAL STEP. Amazon will create an SSH key pair that you need to store and use to connect to the VM through SSH. Do not lose it. You will be able to use the exact same key for other VMs you may want to create, so one key can serve all your VMs.

As I already generated one for my other VM (called gekko), I reuse it.

Next comes a simple status page explaining that the instance is launching, with links to a few documentation pages that you should of course read.

Now, when we click on “View instance”, we are redirected to the EC2 console (you will use it a lot) and we can see that our new instance is launched, and that its name is the tag we defined earlier during setup (you can also see my other VM, stopped).

Next we will connect to the VM’s shell over SSH. On my laptop running W10 I’ll use PuTTY. I assume you downloaded your private key. With PuTTY, the PEM file needs to be converted with PuTTYgen to generate a .ppk file that PuTTY can use.
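If you have access to a command-line puttygen (on Linux or under WSL for example), the conversion can also be scripted; the file names here are just examples:

puttygen gekko.pem -o gekko.ppk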

You’ll also need to grab the public IPv4 address from the EC2 console, by clicking on your instance and copying the appropriate field.

Now, in PuTTY, you just have to save a session with your private .ppk key configured and ec2-user@<public IPv4 hostname grabbed from the console> as the host. Keep in mind that this hostname and the associated IP can change. If you can no longer connect to your VM, the first thing to do is to check its hostname in the EC2 console.

We launch the session. PuTTY will ask whether you want to trust the host; click Yes.