Launching Gekkoga on a high-end EC2 Spot machine

So now that we know how to launch an EC2 instance from an Amazon EC2 AMI with batched gekko/gekkoga app/conf deployment, we want to learn how to run it on a machine with more CPU power, at a good price (using Amazon EC2’s Spot feature), so that we can, basically, brute-force all possible parameters and inputs of a given trading strategy using Gekkoga’s genetic algorithm.

The main documentation we will use:

As explained in Amazon’s first documentation page, we start by creating a new AWSServiceRoleForEC2Spot role in our AWS web console. This is just a few clicks; please read their doc.

Handling Amazon’s automatic shutdown of Spot instances

Next we need to take care of Amazon’s automatic shutdown of Spot instances, which depends on the market price, or on the fixed usage duration you specified in your instantiation request. When Amazon decides to shut down an instance, it sends a technical notification to the VM, which we can watch and query through a URL endpoint. Yes, it means that we will need to embed a new script on our EC2 VM & reference AMI to handle such a shutdown event and make it execute the appropriate actions before the VM is stopped (Amazon announces a two-minute delay between the notification and the effective shutdown, which is short).

The way I chose to do it (but there are others) is to launch a ‘backgrounded’, recurrent, and permanently deployed script at VM boot through our already modified rc.local; this script will poll the appropriate metadata every 5 seconds using a curl call. If the right metadata is provided by Amazon, we then execute a customized, specific shutdown script, which needs to be embedded in the customized package our VM automatically downloads from our reference server at boot time.

So, as you saw in the previous article, we insert these 3 lines just before the last “exit 0” instruction in our /etc/rc.local file:
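The original lines are not reproduced here, but a minimal sketch of what they can look like, assuming the watcher script and log locations used throughout this article, is:

# /etc/rc.local -- launch the Spot termination watcher in the background (sketch)
mkdir -p /home/ec2-user/AWS/logs
/etc/rc.termination.handling >> /home/ec2-user/AWS/logs/termination_handling.log 2>&1 &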

Then we create our /etc/rc.termination.handling script, based on the indications from Amazon’s documentation:
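Again, the original script is not reproduced; here is a minimal sketch, assuming Amazon’s documented spot/termination-time metadata endpoint (which returns a 404 until a termination is scheduled) and a hypothetical backup script name:

#!/bin/bash
# poll the instance metadata every 5 seconds for a Spot termination notice
while true; do
  CODE=$(curl -s -o /dev/null -w '%{http_code}' http://169.254.169.254/latest/meta-data/spot/termination-time)
  if [ "$CODE" -eq 200 ]; then
    # termination announced: back up Gekkoga's results before the VM is stopped
    /home/ec2-user/AWS/upload_results.sh   # hypothetical script name
    break
  fi
  sleep 5
done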

We make it executable:
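With the usual chmod:

sudo chmod +x /etc/rc.termination.handling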

We will now test if this is working. The only thing we won’t be able to test right now is the real URL endpoint with termination information. First, we reboot our EC2 reference VM and verify that our rc.termination.handling script is running in the background:
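For instance, with ps (the bracket trick keeps grep itself out of the output):

ps -ef | grep [r]c.termination.handling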

Now we will test its execution, but we need to slightly change its trigger, as our VM is not yet a Spot instance: the URL we check won’t embed any termination information and will therefore return a 404 error. I also disabled the loop so that it executes just once.

We manually execute it, and we check the output log in $HOME/AWS/logs:

Now we check on our Reference server @home whether the results were uploaded by the EC2 VM.

That’s perfect!

Don’t forget to revert the modifications you made to rc.termination.handling for testing.

Checking Spot instance prices

First we need to know what kind of VM we want, and then we need to check the Spot price trends to decide on a price.

For my first test, I will choose a c5.2xlarge: it embeds 8 vCPUs and 16 GB of memory. That should be enough to launch Gekkoga with 7 concurrent threads.

Then we check the price trends and see that the basic market price is, at the moment, around $0.14. This will be our base price in our request, as we just want to test for now.

It is also interesting to look at the whole price trend over a few months: we can see it actually increased a lot. Maybe we could get a better machine for almost the same price.

Let’s check the price for the c5.4xlarge:

Conclusion: for $0.01 more, we can have a c5.4xlarge with 16 vCPUs and 32 GB of RAM instead of a c5.2xlarge with 8 vCPUs and 16 GB of RAM. Let’s go for it.

Requesting a “one-time” Spot Instance from AWS CLI

On our Reference server @home, we will next use the following AWS CLI command (I’ve embedded it in a shell script called start_aws.sh). For details on the JSON file to provide with the request, see Amazon’s documentation.
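The exact command is not reproduced here, but a hedged sketch of what a fixed-duration, one-time request can look like with the request-spot-instances API (the duration and file path are assumptions) is:

aws ec2 request-spot-instances --dry-run --instance-count 1 --type "one-time" \
  --block-duration-minutes 60 \
  --launch-specification file://$HOME/AWS/spot_specification.json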

Note:

  • The --dry-run parameter: it asks the CLI to simulate the request and display any error, instead of really trying to launch a Spot instance.
  • This time, for my first test, I used a “one-time” VM with a fixed execution duration, to make sure I know how long it will run. The price is therefore not the same as the one we saw above! It is higher.
  • Once our test is successful, we will use “regular” VMs with prices we can “decide” depending on the market, but also with a run time we can’t anticipate: the VM may be stopped anytime by Amazon if our calling price drops below the market price; otherwise, you will have to stop it yourself.

Then we create a $HOME/AWS/spot_specification.json file with the appropriate data, especially our latest AMI reference:
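The file itself is not reproduced; a minimal sketch following the launch-specification format (every value below is a placeholder) could be:

{
  "ImageId": "ami-0xxxxxxxxxxxxxxxx",
  "InstanceType": "c5.4xlarge",
  "KeyName": "gekko",
  "SecurityGroupIds": [ "sg-xxxxxxxx" ],
  "SubnetId": "subnet-xxxxxxxx",
  "IamInstanceProfile": { "Name": "gekko-role" }
}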

We try the above AWS CLI command line to simulate a request…

Seems all good. Let’s remove the --dry-run and launch it.

On the EC2 web console we can see our request, and it is already active: it means the VM was launched!

Now in the main dashboard we can see it running, and we get its IP (we could also do this via the AWS CLI):

Let’s SSH to it and check the running processes:

It seems to be running… And actually, you can’t see this, but it is running well, as I keep receiving emails with each new “top result” found by Gekkoga, since I activated the email notifications. This is also a good way to make sure you back up the latest optimum parameters found (once again: this is a backtest, and in no way does it mean those parameters are good for the live market).

Let’s check the logs:

It’s running well! Now I’ll just wait one hour to check whether our termination detection process runs properly and backs up the results to our Reference server @home.

While we are waiting, though… Let’s have a look in the logs at the computation time:

Epoch #13: 87.718s. So, it depends on the epoch and on the computations requested by the Strategy, but the last time we looked at it, epoch #1 took 1280.646s to complete, for the same Strat we customized as an exercise.

One hour later, I managed to come back two minutes before the estimated termination time, and I could manually check with curl that the termination metadata was sent by Amazon to the VM.

Our termination script performed well and uploaded the data to my Reference server @home, both in the ~/AWS/<timestamp>_results directory and in the gekkoga/results directory, so that it will be reused at Gekkoga’s next launch.

Using a Spot instance with a “called” market price

So we previously used a “one-time” Spot instance with a fixed duration, which meant we were sure it would run for the specified time, but at a higher, fixed price. Because we managed to back up the data before its termination, and because Gekkoga knows how to reuse it, we will now use a lower-priced Spot instance, with no guarantee of how long it will run.

Let’s modify our AWS CLI request… We will test a c5.18xlarge (72 vCPUs, 144 GB of RAM) at a market price of $0.73 per hour.

Amazon started it almost immediately. Let’s SSH to it and check the number of processes launched.
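For instance, counting Gekkoga’s node workers (the bracket trick keeps grep itself out of the count):

ps -ef | grep -c "[n]ode"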

So… this is quite interesting, because parallelqueries in gekkoga/config/config-MyMACD-backtester.js is indeed set to 71, BUT the number of node processes launched seems to cap at 23. I don’t know why yet! But until we find out, there is no need to launch a 72-vCPU VM. A 16- or 32-vCPU one may be enough for now.

It seems it is populationAmt that limits the number of threads launched. Setting it to 70, still with parallelqueries at 71, makes the number of threads increase and stabilize around 70, but with some periods down to 15 threads. It would be interesting to graph and study this. Maybe there is also a bottleneck on Gekko’s UI/API, which has to handle a lot of connections from all of Gekkoga’s backtesting threads.

Now I’ll need to start reading a little more literature on this subject to find a good tweak… Anyway, right now I’m using 140 for populationAmt and I still have some “low peaks” down to 15 concurrent nodejs threads.

Result

After 12 hours without any interruption (so, total cost = $0.73 * 12 = $8.76), these are all the emails I received for this first test with our previously customized MyMACD Strategy.

As you can see, if the screenshot is not too small, on this data from the past, with this strategy, the profit is much better with long candles and long signals.

Again, this needs to be challenged on smaller datasets, especially “stalling” markets, or bullish markets like the current one. This setup and automation of tests will not guarantee, in any way, that you earn money.

A few notes after more tests

Memory usage:

  • I tried several kinds of machines; right now I’m using a c4.8xlarge, which still has good CPUs but less RAM than the c5 family. And I started to test another customized Strat. I encountered a few crashes.
    • I initially thought it was because CPU usage capped at 100% as I increased parallelqueries and populationAmt. I had to cancel my Spot requests to kill the VMs.
    • Using the EC2 console, I checked the console logs, and I could clearly see some OOM (Out Of Memory) errors just before the crash.
    • I went into my Strat code and tried to simplify everything I could, by using ‘let’ declarations instead of ‘var’ (to reduce the scope of some variables), and managed to remove one or two variables I could handle differently. I also commented out every condition displaying logs, as I like to have logs in my console when Gekko is trading live. But for backtesting, avoid it: no logs at all, and reduce every condition you can.
  • I also reduced the max_old_space_size parameter to 4096 in gekko/gekkoga/start_gekkoga.sh (see the sketch after this list). It has a direct impact on Node’s garbage collector: it makes the GC collect the dust twice as often as with the 8192 I previously configured.
  • Since those two changes, I’m running a session on a c4.8xlarge for a few hours, using 34 parallelqueries vs 36 vCPUs. The CPUs are constantly about 85% busy, which seems good to me.
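A hedged sketch of the relevant pm2 line in start_gekkoga.sh (pm2’s --node-args is the documented way to pass V8 flags; the process name is an assumption):

pm2 start run.js --name gekkoga --node-args="--max-old-space-size=4096" -- -c config/config-MyMACD-backtester.js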

Improvements: in a next article I will detail two small changes I made to my Reference AMI:

  • The VM will send me an email when it starts, with its details (IP, hostname, etc.), or when a termination is announced.
  • I added a file-monitoring utility to “live”-detect any change in Gekkoga’s result directory and upload it immediately to my Reference server @home. I had to do this because I noticed that when you ask Amazon to cancel a Spot request with a running VM associated to it, it immediately kills the VM, without an announced termination, so previous results were not synced to my home server (but I had the details of the Strat configuration by email).

Also, important things to remember:

  • Amazon EC2 will launch your Spot Instance when the maximum price you specified in your request exceeds the Spot price and capacity is available in Amazon’s cloud. The Spot Instance will run until it is interrupted or you terminate it yourself. If your maximum price is exactly equal to the Spot price, there is a chance that your Spot Instance remains running, depending on demand.
  • You can’t change the parameters of your Spot Instance request, including your maximum price, after you’ve submitted the request. But you can cancel a request if its status is either open or active.
  • Before you launch any request, you must decide on your maximum price and on what instance type to use. To review Spot price trends, see Amazon’s Spot Instance Pricing History.
  • For our usage, you should request “one-time” instances, not “persistent” requests (we only used those for testing), which means that you need to embed a way for your EC2 VM to give you feedback about the latest optimized parameters found for your Strat (by email for example, or by tweaking Gekkoga to send live results (note for later: TODO)).

And remember: nothing is free. You will be charged for this service, and there is NO GUARANTEE that you will earn money after your tests.

v2 – How to create an Amazon EC2 “small” VM and automate Gekko’s deployment

Note (18/02/2019): this is an updated version of the initial post about automating the launch of an Amazon EC2 Instance.

We tried Gekkoga’s backtesting and noticed it is a CPU drainer. I had never used Amazon EC2 and its ability to quickly deploy servers, but I was curious to test it, as it could be a perfect fit for our needs: on-demand renting of high-capacity servers, using Amazon’s “Spot instance” feature. Beware: on EC2, only the smallest VM can be used for free (almost). The servers I would like to use are not free.

Our first step is to learn how to create an Amazon EC2 VM and to deploy our basic software on it. Then we will manage the automatic deployment of all the packages we need to make Gekko & Gekkoga run and automatically start with the Strat we want to test. We will test this on a small VM -the t2.micro- using the standard AMI (Amazon Machine Image, the OS) “Amazon Linux 2”.

Once this step is complete, we will make a new AMI based on the one we deployed, including custom software and part of its configuration.

Next we will try to automate, in a simple batch file, the request, ordering, and execution of a new instance based on our customized AMI, with automatic Gekkoga launching & results gathering. This batch file will be used from my own personal/home gekko server, which I use to modify and quickly test new Strats.

Launching a new free Amazon EC2 t2.micro test VM

I won’t explain everything here. First you need to create an account, and yes, you will need to enter some credit card info: most of the services can be used for free at the beginning, but some of them will charge a few cents when used (e.g. map an Elastic IP to a VM and release it: when it is not in use, you are charged; it’s cheap, but you will be charged. Also, you are allowed your free small VM only a few hours, so you need to stop it as soon as you can and make it available only when you need it; this is Amazon’s “on-demand” policy, like it or don’t use it :) ).

Then we choose the AMI and then the smallest VM available, as it is allowed in the “free” Amazon package.

At the bottom of the page, click “Next: configure instance details”. On the following page, you can use all default values, but check:

  • The Purchasing option: you can ask for a Spot Instance. This is Amazon’s marketplace for requesting that your VM run at a fixed price you provide, assuming Amazon has free resources and allows your VM to run at that price (it needs to be higher than the demand-driven market price).
  • The Advanced Details at the bottom.

The User data field is a place where we can provide a shell script which will be executed at boot by the VM. As the VM can sometimes be started when Amazon decides it should be (e.g. Spot instances), this is a very nice place to make your instance automatically download some specific configuration stuff when it boots, for example our Gekko strats and conf files, to automagically launch our backtests. We will try this later (I have not tried it myself yet at the moment I’m writing this, but it is well documented by Amazon).

Next we want to configure the storage, as Amazon allows us to use 30 GB on the free VMs instead of the default 8 GB.

Next, I will add a tag explaining the purpose of this VM and storage (not sure about its exact future utility yet, but whatever…).

Next, we configure a security group. As I had already played a little with another VM, I created a customized Security Group which allows ports 22 (SSH), 80 (HTTP) and 443 (HTTPS). I choose it, but you will be able to do that later and map your own security group to your VM.

Next screen is a global review before VM creation and launching by Amazon. I won’t copy/paste it, but click on Launch at the bottom.

Next is a CRITICAL STEP. Amazon will create some SSH keys that you need to store and use to connect to the VM through SSH. Do not lose them. You will be able to use the exact same key for other VMs you may want to create, so one key can match all your VMs.

As I already generated one for my other VM (called gekko), I reuse it.

And next is a simple status page explaining the instance is launching and linking to a few documentation pages, that you should of course read.

Now when we click on “View instance” we are redirected to the EC2 console (you will use it a lot), and we can see that our new instance is launched; its name is the tag we defined earlier during setup (you can also see my other VM, stopped).

Next we will connect to the VM shell by SSH. On my laptop running W10 I’ll use PuTTY. I assume you downloaded your private key. With PuTTY, the PEM file needs to be converted using PuTTYgen to generate a .ppk file it will be able to use.

You’ll also need to grab the public IPv4 address from EC2 console, by clicking on your instance and copying the appropriate field.

Now in PuTTY you just have to save a session with your private .ppk key configured and ec2-user@<public IPv4 hostname grabbed from the console> as the host. Keep in mind that this hostname and its associated IP can change. If you can’t connect to your VM anymore, the first thing to do is to check its hostname in your EC2 console.

We launch the session. PuTTY will ask you whether you want to trust the host; click Yes.

Woohoo! We are connected! This was fast and simple.

Updating the VM & deploying our software

OK, so now we need to deploy all the basic things we saw in previous posts, but also more things, like Nginx to protect access to Gekko’s UI. Later we will have to implement a way for the VM to automagically download updated Strats to run.

The goal is to deploy everything we need to launch a functional Gekkoga VM; then we will create a customized AMI to be reused on a better VM specialized in CPU computations. Note that EC2 can also supply VMs with specific hardware like GPUs, if you need to run software able to offload computation to GPU cards. This is not our case here, unfortunately, but it might be someday, as I would like to start experimenting with AI.

I won’t explain everything below; this can be put in a shell script, and you can use the links to my blog to download a few standard things that don’t compromise security, but there are some private parts that you will need to tweak by yourself, especially the SSH connection to my home servers, of course.

None of the steps below require manual operations, but some are customized for my own needs; read the comments.

First we update the VM and deploy generic stuff.
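The original script is not reproduced here; a minimal sketch of the generic part, assuming Amazon Linux 2’s yum and the Node.js version Gekko targeted at the time, could be:

sudo yum update -y
sudo yum install -y git gcc-c++ make
# install Node.js (the exact method -- nvm, yum package, etc. -- is up to you)
# then the process manager used throughout this series:
npm install -g pm2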

Next we deploy NGinx which will act as a Reverse Proxy to authenticate requests made to Gekko’s UI.
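The Nginx setup itself is not reproduced; a minimal sketch of a reverse-proxy server block with basic auth in front of Gekko’s UI (port 3000 is Gekko’s default UI port; the host and file names are assumptions) could be:

sudo amazon-linux-extras install -y nginx1

# /etc/nginx/conf.d/gekko.conf (sketch)
server {
    listen 443 ssl;
    server_name my-gekko-vm.example.com;      # placeholder
    # ssl_certificate / ssl_certificate_key lines omitted
    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:3000;
    }
}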

Now we need to define some very customized stuff. I won’t explain it all, as this article is not a complete how-to; you need some sysadmin knowledge.

  • Create a user/password to be used by the Nginx reverse proxy.
  • To automate downloading stuff from our home server using scp, or launching actions on our home server through ssh (to automatically make a tarball of our gekko strats, for example, before downloading them), we will need to import our home server & user SSH key into /home/ec2-user/.ssh/. Don’t forget to change its permissions with chmod 600.

This is an example of what you could do once your reference server’s ssh key was successfully imported on your EC2 instance:
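A hedged sketch (host name, user and paths below are made up for the example):

# ask the home server to build a tarball of the strats, then fetch it
ssh gekko@myhomeserver 'tar czf /tmp/strats.tgz -C ~/gekko strategies'
scp gekko@myhomeserver:/tmp/strats.tgz ~/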

Now we just need to launch nginx… and optionally save the pm2 sessions so that they are relaunched at boot.
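For instance:

sudo systemctl enable nginx
sudo systemctl start nginx
pm2 save       # persist the current pm2 process list
pm2 startup    # generate the boot-time init script, if not already done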

Testing the VM

If everything was OK (and yes, I know a lot of parts could have gone wrong for you, but for me, at the moment I was testing it, it was OK), you should be able to launch your favorite web browser, target https://<Your VM FQDN>, and see a login prompt. You need to enter the login/password you defined in /etc/nginx/.htpasswd.

You should now see this …

My test dataset was correctly downloaded and is well detected by Gekko. I will just give it a little update by asking Gekko to download data from 2019-01-07 22:30 to now, and then upload it back to my reference server at home.

Next, let’s give a try to the strats we downloaded from our reference server at home …

All is running well …

We now have a good base to clone the AMI and make it a template for higher-end VMs. We will need to make it:

  • Able to download up-to-date data from markets
  • Able to download up-to-date strats from our reference server @home
  • Launch one particular Gekkoga startup script
  • Upload or send the data somewhere

Please remember to stop your VM, either from the command line or from the Amazon EC2 console, so that it won’t drain all your “free” uptime credits!

Playing with AWS CLI

First, we need to install AWS CLI (Amazon Command Line Interface). On my server I had to install pip for Python.

Now we can install AWS CLI using pip, as explained in Amazon’s documentation. The --user flag will install it in your $HOME.
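As documented by Amazon:

pip install awscli --user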

We add the local AWS binary directory to our user PATH so that we can launch it without having to use its full path. I’m using a Debian, so I’ll add it in .profile:
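For a pip --user install, that is typically:

# appended to ~/.profile
export PATH="$HOME/.local/bin:$PATH"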

Now, we need to create some IAM Admin User & Groups from our EC2 console to be able to use AWS CLI. Please follow Amazon’s documentation “Creating an Administrator IAM User and Group (Console)“. Basically, you will create a Group, a Security Policy, and an Administrator user. At the end, you must obtain and use an Access Key ID and a Secret Access Key for your Administrator user. If you lose them, you won’t be able to retrieve those keys, but you will be able to create new ones for this user (and propagate the change to every system using them). So keep them safe.

Then we will use those keys on our VM, and on the home/reference server from which we want to control our instances. You can also specify the region that Amazon attributed to you, if you want (hint: do not use the letter at the end of the region; e.g. if your VM is running in us-east-2c, enter ‘us-east-2’).
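This is done with aws configure, which prompts for the keys, region and output format:

aws configure
  AWS Access Key ID [None]: AKIA................
  AWS Secret Access Key [None]: ....................
  Default region name [None]: us-east-2
  Default output format [None]: json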

Let’s test it with a few examples I got in the docs (hedged sketches follow the list):

  • Fetch a JSON list of all our instances, with a few key/value requested:
  • Stopping an instance
  • Starting an instance
  • Ask the public IP of our running VM (we need to know its InstanceID):
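Hedged sketches of those four calls (instance IDs and the selected fields are placeholders; --query uses the CLI’s built-in JMESPath filtering):

aws ec2 describe-instances \
  --query 'Reservations[].Instances[].{ID:InstanceId,State:State.Name,IP:PublicIpAddress}'

aws ec2 stop-instances --instance-ids i-0xxxxxxxxxxxxxxxx

aws ec2 start-instances --instance-ids i-0xxxxxxxxxxxxxxxx

aws ec2 describe-instances --instance-ids i-0xxxxxxxxxxxxxxxx \
  --query 'Reservations[].Instances[].PublicIpAddress' --output text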

To send remote commands to be executed on a specific VM, you will need to create a new IAM role in your EC2 console and make your VM use it, so that your remote calls are authorized.

Give your VM an IAM role with the Administrator Group you defined before, in which there is also the Administrator user whose keys we are using with AWS CLI. Now we should be able to access the VM, send it information, and request data.

  • To make the VM execute ‘ifconfig’:
  • To check the output, we use the CommandId in another request:
  • And, taken from the doc (I just added the jq at the end), if we want to combine both queries (sketches below):
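These remote commands go through AWS Systems Manager (SSM); hedged sketches with placeholder IDs:

# make the VM execute 'ifconfig'
aws ssm send-command --instance-ids "i-0xxxxxxxxxxxxxxxx" \
  --document-name "AWS-RunShellScript" --parameters commands=ifconfig

# check the output using the CommandId returned above
aws ssm get-command-invocation --command-id "<CommandId>" \
  --instance-id "i-0xxxxxxxxxxxxxxxx"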

Making a new AMI from our base VM & instantiating it

Creating a new AMI

First we stop our VM.

Now in EC2 Console we will create a new AMI from our instance.

By default, the images you create are private; you can change that if you want and share your AMI within the region you are using in Amazon’s cloud.

The real deployment scenario

We will request the creation of an instance, and control the launch and stop of our VM remotely, from a remote server or workstation.

When a new instance is created, we would like it to automatically execute a script at boot, using user-data, to eventually download fresh data from our reference server. User-data is nothing more than a shell script which is executed once. As you can see by clicking on the previous links, this is pretty well documented by Amazon. User-data is only executed at the very first boot of your newly created instance, not at subsequent boots.

Therefore, we will also need to include something else to make our instance execute some stuff each time it boots: we will use a basic /etc/rc.local script which will use rsync to download our whole Gekko installation directory from our reference server, tweak it a little, and then launch Gekkoga.

We will also need a background script to carefully monitor Amazon’s indicators about the incoming shutdown of our instance: Spot instances are automatically shut down by Amazon, with at most a two-minute delay after the announcement. This will be detailed in the next article.

The whole process is: create the customized AMI; request a new instance based on it with our user-data script attached; let user-data perform the one-shot setup at first boot; then let rc.local download and launch everything at each boot, while the termination watcher stands by.

Instantiating & executing actions at first boot

We want to tell new instances of this image to execute a shell script at their very first boot. This could be very useful later. First we create this script on our local reference server and put a few commands in it, but also activate logging on the VM (outputs will be available both in /var/log/user-data.log and on /dev/console).

I create a script called 0.user_data.sh in a $HOME/AWS directory on my reference server, and put this inside:
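The script body is not reproduced here; a minimal sketch, whose logging line comes from Amazon’s user-data documentation, could be:

#!/bin/bash
# send all output both to /var/log/user-data.log and to the console
exec > >(tee /var/log/user-data.log | logger -t user-data -s 2>/dev/console) 2>&1
echo "First boot: $(date)"
# one-shot customization commands go here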

We request the creation & launch of a new instance based on our image ID. Note that I use the name of the key I defined earlier (gekko), and I used the same subnet as my previous VM (I don’t really know whether that is mandatory; I have to test). The security group ID can be checked in the EC2 console’s “Security Groups” menu, and we also specify which IAM role we want to allow to control the VM with AWS CLI (you created it earlier, as it was mandatory for some CLI commands to run).

Also note that we pass our previously created 0.user_data.sh bash script as a parameter: its content will be transmitted to Amazon, which will have it executed at the first boot of the instance. If you want anything to be performed at the very first boot, just think to add it to this script.
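A hedged sketch of such a run-instances call (every ID below is a placeholder):

aws ec2 run-instances --image-id ami-0xxxxxxxxxxxxxxxx --count 1 \
  --instance-type t2.micro --key-name gekko \
  --subnet-id subnet-xxxxxxxx --security-group-ids sg-xxxxxxxx \
  --iam-instance-profile Name=gekko-role \
  --user-data file://$HOME/AWS/0.user_data.sh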

Our new InstanceId is i-0c6d1148adebf33c3. From the EC2 console I can see it is launched. I want to check whether my user-data script was executed.

This is quite good! I also double-checked on my reference server that I could see incoming SSH connections, by adding an ssh execution + scp download command to the script, and it’s OK: 2 connections, as expected (one for the ssh, the other for the scp).

We have a working “first time script” that the VM will execute upon its instantiation, and that we can customize later on to perform one-shot specific actions. Now, we want our VM to connect to our reference server at each boot, make it prepare a package, download it, untar it, and execute a start.sh script that may be embedded inside.

Automatically download a Gekko/Gekkoga installation, tweak it, launch it, at each boot

First, on our EC2 reference VM (the one from which we created a new AMI; so yes, either we will have to create a new AMI later on, or you can perform this step while you are still preparing the first AMI), we will perform this:

Then we will edit /etc/rc.local (which is a symlink to /etc/rc.d/rc.local) and add this:
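The exact lines are not reproduced here; a hedged reconstruction from the comments below (server name, user and paths are assumptions) could be:

# /etc/rc.local additions (sketch)
su - ec2-user -c '
  rsync -az gekko@myhomeserver:gekko/ $HOME/gekko/
  cd $HOME/gekko && npm rebuild        # avoids the sqlite platform issue
  # update the UIConfig files for the Nginx setup (details omitted)
  $HOME/gekko/start_ui.sh
  $HOME/gekko/gekkoga/start_gekkoga.sh
' >> /home/ec2-user/AWS/logs/$(date +%Y%m%d%H%M%S)_package.log 2>&1
systemctl restart nginx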

A few comments:

  • During my tests I encountered a problem with sqlite; it seems linked to the type of platform used. To avoid this, I automatically rebuild the dependencies after the rsync synchronization.
  • I update the UIConfig files, as on the EC2 instance I use Nginx, which I don’t use on my reference server @home.
  • I added a line to restart Nginx, as I noticed that I had to relaunch it manually before I could access Gekko’s UI. I didn’t investigate further to understand why; maybe later.
  • As some of you may have noticed, we are syncing a remote Gekko installation into a local $HOME/gekko one. Therefore we need to delete the Gekko installation we previously made on our EC2 instance. It was just deployed to test it 🙂

On our Reference server @home:

  • We create a $HOME/gekko/start_ui.sh script, if this is not already the case;
  • We create a $HOME/gekko/gekkoga/start_gekkoga.sh script (both are sketched after the remarks below).

Remarks:

  • You shouldn’t have to modify the start_ui.sh script.
  • In the start_gekkoga.sh script:
    • You should only have to modify the TOLAUNCH variable and pass it the name of your Gekkoga config file to be used, that’s all.
    • I wanted to keep one vCPU free to handle synchronization stuff, or other tasks required by the OS, so I dynamically check the number of CPUs on the machine, reduce it by 1, and change the appropriate line in Gekkoga’s config file.
    • This has a side effect: on a 1-CPU machine, and this is the case for the smaller EC2 VMs, the value becomes “0”, and Gekkoga will fail to start, but pm2 will keep trying to relaunch it. This is why I added the pm2 “--no-autorestart” option to the last line of this script.
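Hedged sketches of both scripts; the pm2 process names, the sed target and the config filename are assumptions consistent with this series:

# $HOME/gekko/start_ui.sh -- launch Gekko's UI under pm2
pm2 start gekko.js --name gekkoUI -- --ui

# $HOME/gekko/gekkoga/start_gekkoga.sh
#!/bin/bash
TOLAUNCH=config-MyMACD-backtester.js
# keep one vCPU free for the OS and the sync tasks
NBQUERIES=$(( $(nproc) - 1 ))
sed -i "s/parallelqueries:.*/parallelqueries: ${NBQUERIES},/" config/${TOLAUNCH}
pm2 start run.js --name gekkoga --no-autorestart -- -c config/${TOLAUNCH}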

We reboot our EC2 reference instance:

After a few seconds, we check the rc.local log on our EC2 instance for the download of our reference package from our reference server. In rc.local, we redirected the logs to $HOME/AWS/logs/<date>_package.log:

Seems all good. Let’s check pm2’s status:

Gekkoga’s error is probably normal, as we requested it to run with 0 parallel queries… Let’s check its logs:



And yes, I can confirm after a quick test with 0 parallelqueries on my Reference server @home that this error is raised in this case. Good!

One more thing: let’s check that Gekko’s UI is remotely reachable:

Seems perfect!

Now, before we make a new version of our reference AMI (it won’t be the last one :)), I will:

  • Do some cleaning in AWS/logs and AWS/logs/old, but also (on my Reference server @home) in gekko/history, gekko/strategies, gekko/gekkoga/config and gekko/gekkoga/results, as I made a lot of tests.
  • Add a shell script to automatically update the Dynamic DNS handling my Gekko EC2 FQDN. As it is 99% personal, I won’t detail it here. What it does is check the external IP of the machine, check whether it differs from the last known one, and if so, update the A record of the FQDN on the DNS server.

To create a new AMI from your reference VM, you know the procedure; we already did it above, as well as instantiating it through the AWS CLI installed somewhere.

Next step will be to try to launch a Gekkoga backtesting session by instantiating our AMI on a much better VM in terms of CPU and memory.

But be warned, this will be charged by Amazon !

This will be next article’s topic.

How to create an Amazon EC2 “small” VM and automate Gekko’s deployment

Note (18/02/2019): a simpler deployment process is being written up.

We tried Gekkoga’s backtesting and noticed it is a CPU drainer. I had never used Amazon EC2 and its ability to quickly deploy servers, but I was curious to test it, as it could be a perfect fit for our needs: on-demand renting of high-capacity servers, using Amazon’s “Spot instance” feature. Beware: on EC2, only the smallest VM can be used for free (almost). The servers I would like to use are not free.

Our first step is to manage the automatic deployment of all the packages we need to make Gekko & Gekkoga run and automatically start with the Strat we want to test. I want a one-command process. We will test this on a small VM -the t2.micro- using the standard AMI (Amazon Machine Image, the OS) “Amazon Linux 2”.

Once this step is complete, we will make a new AMI based on the one we deployed, including custom software and part of its configuration.

Next we will try to automate, in a simple batch file, the request, ordering, and execution of a new instance based on our customized AMI, with automatic Gekkoga launching & results gathering. This batch file will be used from my own personal/home gekko server, which I use to modify and quickly test new Strats.

Launching a new free Amazon EC2 t2.micro test VM

I won’t explain everything here. First you need to create an account, and yes, you will need to enter some credit card info: most of the services can be used for free at the beginning, but some of them will charge a few cents when used (e.g. map an Elastic IP to a VM and release it: when it is not in use, you are charged; it’s cheap, but you will be charged. Also, you are allowed your free small VM only a few hours, so you need to stop it as soon as you can and make it available only when you need it; this is Amazon’s “on-demand” policy, like it or don’t use it :) ).

Then we choose the AMI and then the smallest VM available, as it is allowed in the “free” Amazon package.

At the bottom of the page, click “Next: configure instance details”. On the following page, you can use all default values, but check:

  • The Purchasing option: you can ask for a Spot Instance. This is Amazon’s marketplace for requesting that your VM run at a fixed price you provide, assuming Amazon has free resources and allows your VM to run at that price (it needs to be higher than the demand-driven market price).
  • The Advanced Details at the bottom.

The User data field is a place where we can provide a shell script which will be executed at boot by the VM. As the VM can sometimes be started when Amazon decides it should be (e.g. Spot instances), this is a very nice place to make your instance automatically download some specific configuration stuff when it boots, for example our Gekko strats and conf files, to automagically launch our backtests. We will try this later (I have not tried it myself yet at the moment I’m writing this, but it is well documented by Amazon).

Next we want to configure the storage, as Amazon allows us to use 30 GB on the free VMs instead of the default 8 GB.

Next, I will add a tag explaining the purpose of this VM and storage (not sure about its exact future utility yet, but whatever…).

Next, we configure a security group. As I had already played a little with another VM, I created a customized Security Group which allows ports 22 (SSH), 80 (HTTP) and 443 (HTTPS). I choose it, but you will be able to do that later and map your own security group to your VM.

Next screen is a global review before VM creation and launching by Amazon. I won’t copy/paste it, but click on Launch at the bottom.

Next is a CRITICAL STEP. Amazon will create some SSH keys that you need to store and use to connect to the VM through SSH. Do not lose them. You will be able to use the exact same key for other VMs you may want to create, so one key can match all your VMs.

As I already generated one for my other VM (called gekko), I reuse it.

And next is a simple status page explaining the instance is launching and linking to a few documentation pages, that you should of course read.

Now when we click on “View instance” we are redirected to the EC2 console (you will use it a lot), and we can see that our new instance is launched; its name is the tag we defined earlier during setup (you can also see my other VM, stopped).

Next we will connect to the VM shell by SSH. On my laptop running W10 I’ll use PuTTY. I assume you downloaded your private key. With PuTTY, the PEM file needs to be converted using PuTTYgen to generate a .ppk file it will be able to use.

You’ll also need to grab the public IPv4 address from EC2 console, by clicking on your instance and copying the appropriate field.

Now in PuTTY you just have to save a session with your private .ppk key configured and ec2-user@<public IPv4 hostname grabbed from the console> as the host. Keep in mind that this hostname and its associated IP can change. If you can’t connect to your VM anymore, the first thing to do is to check its hostname in your EC2 console.

We launch the session. PuTTY will ask you whether you want to trust the host; click Yes.

Woohoo! We are connected! This was fast and simple.

Updating the VM & deploying our software

OK, so now we need to deploy all the basic things we saw in previous posts, but also more things, like Nginx to protect access to Gekko’s UI; later we will have to implement a way for the VM to automagically download updated Strats to run. First, let’s make sure we can run a simple backtest with Gekko.

The goal is to deploy everything we need to launch a functional Gekkoga VM; then we will create a customized AMI to be reused on a better VM specialized in CPU computations. Note that EC2 can also supply VMs with specific hardware like GPUs, if you need to run software able to offload computation to GPU cards. This is not our case here, unfortunately, but it might be someday, as I would like to start experimenting with AI.

I won’t explain everything below; this can be put in a shell script, and you can use the links to my blog to download a few standard things that don’t compromise security, but there are some private parts that you will need to tweak by yourself, especially the SSH connection to my home servers, of course.

None of the steps below require manual operations, but some are customized for my own needs; read the comments.

First we update the VM and deploy generic stuff.

Next we deploy NGinx which will act as a Reverse Proxy to authenticate requests made to Gekko’s UI.

Now we need to define some very customized stuff. I won’t explain it all, as this article is not a complete how-to; you need some sysadmin knowledge.

  • Create a user/password to be used by the Nginx reverse proxy.
  • To automate downloading stuff from our home server using scp, or launching actions on our home server through ssh (to automatically make a tarball of our gekko strats, for example, before downloading them), we will need to import our home server & user SSH key into /home/ec2-user/.ssh/. Don’t forget to change its permissions with chmod 600.

This is an example of what you could do once your reference server’s ssh key was successfully imported on your EC2 instance:

Now we just need to launch nginx… and optionally save the pm2 sessions so that they are relaunched at boot.

Testing the VM

If everything was OK (and yes, I know a lot of parts could have gone wrong for you, but for me, at the moment I was testing it, it was OK), you should be able to launch your favorite web browser, target https://<Your VM FQDN>, and see a login prompt. You need to enter the login/password you defined in /etc/nginx/.htpasswd.

You should now see this …

My test dataset was correctly downloaded and is well detected by Gekko. I will just give it a little update by asking Gekko to download data from 2019-01-07 22:30 to now, and then upload it back to my reference server at home.

Next, let’s give a try to the strats we downloaded from our reference server at home …

All is running well …

We now have a good base to clone the AMI and make it a template for higher-end VMs. We will need to make it:

  • Able to download up-to-date data from markets
  • Able to download up-to-date strats from our reference server @home
  • Launch one particular Gekkoga startup script
  • Upload or send the data somewhere

Please remember to stop your VM, either from the command line or from the Amazon EC2 console, so that it won’t drain all your “free” uptime credits!

Playing with AWS CLI

Now we want to control the launch and stop of our VM remotely, from a remote server or workstation, and we would like it to automatically execute a script at boot, using user-data, to download fresh data from our reference server. As you can see by clicking on the previous links, this is pretty well documented by Amazon.

First, we need to install AWS CLI (Amazon Command Line Interface). On my server I had to install pip for Python.

Now we can install AWS CLI using pip, as explained in Amazon’s documentation. The --user flag will install it in your $HOME.

We add the local AWS binary directory to our user PATH so that we can launch it without having to use its full path. I’m using a Debian, so I’ll add it in .profile:

Now, we need to create some IAM Admin User & Groups from our EC2 console to be able to use AWS CLI. Please follow Amazon’s documentation “Creating an Administrator IAM User and Group (Console)“. Basically, you will create a Group, a Security Policy, and an Administrator user. At the end, you must obtain and use an Access Key ID and a Secret Access Key for your Administrator user. If you lose them, you won’t be able to retrieve those keys, but you will be able to create new ones for this user (and propagate the change to every system using them). So keep them safe.

Then we will use those keys on our VM, and on the home/reference server from which we want to control our instances. You can also specify the region that Amazon attributed to you, if you want (hint: do not use the letter at the end of the region; e.g. if your VM is running in us-east-2c, enter ‘us-east-2’).

Let’s test it with a few examples I got in the docs:

  • Fetch a JSON list of all our instances, with a few key/value requested:
  • Stopping an instance
  • Starting an instance
  • Ask the public IP of our running VM (we need to know its InstanceID):

To send remote commands to be executed on a specific VM, you will need to create a new IAM role in your EC2 console and make your VM use it, so that your remote calls are authorized.

Give your VM an IAM role with the Administrator Group you defined before, in which there is also the Administrator user whose keys we are using with AWS CLI. Now we should be able to access the VM, send it information, and request data.

  • To make the VM execute ‘ifconfig’:
  • To check the output, we use the CommandId in another request:
  • And, taken from the doc (I just added the jq at the end), if we want to combine both queries:

Making a new AMI from our base VM & instantiating it

Creating a new AMI

First we stop our VM.

Now in EC2 Console we will create a new AMI from our instance.

By default, the images you create are private; you can change that if you want and share your AMI within the region you are using in Amazon’s cloud.

Instantiating & executing actions at first boot

We want to tell new instances of this image to execute a shell script at their very first boot. This could be very useful later. First we create this script on our local reference server and put a few commands in it, but also activate logging on the VM (outputs will be available both in /var/log/user-data.log and on /dev/console).

I create a script called 0.user_data.sh in a $HOME/AWS directory on my reference server, and put this inside:

We request the creation & launch of a new instance based on our image ID. Note that I use the name of the key I defined earlier (gekko), and I used the same subnet as my previous VM (I don’t really know whether that is mandatory; I have to test). The security group ID can be checked in the EC2 console’s “Security Groups” menu, and we also specify which IAM role we want to allow to control the VM with AWS CLI (you created it earlier, as it was mandatory for some CLI commands to run).

Our new InstanceId is i-0c6d1148adebf33c3. From the EC2 console I can see it is launched. I want to check whether my user-data script was executed.

This is quite good! I also double-checked on my reference server that I could see incoming SSH connections, by adding an ssh execution + scp download command to the script, and it’s OK: 2 connections, as expected (one for the ssh, the other for the scp).

We have a working “first time script” that the VM will execute upon its instantiation, and that we can customize later on to perform one-shot specific actions. Now, we want our VM to connect to our reference server at each boot, make it prepare a package, download it, untar it, and execute a start.sh script that may be embedded inside.

Automatically download an updated Gekko customized package & deploy it, at each boot

First, on our EC2 reference VM (the one from which we created a new AMI; so yes, either we will have to create a new AMI later on, or you can perform this step while you are still preparing the first AMI), we will perform this:

Then we will edit /etc/rc.local (which is a symlink to /etc/rc.d/rc.local) and add this:
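The exact lines are not reproduced here; a hedged reconstruction of the package flow described in this section (server name, user, paths and package name are assumptions) could be:

# /etc/rc.local additions (sketch)
su - ec2-user -c '
  TS=$(date +%Y%m%d%H%M%S)
  # ask the reference server to build the package, then fetch and unpack it
  ssh gekko@myhomeserver "~/AWS/1.make_package.sh $TS"
  scp gekko@myhomeserver:AWS/package/latest.tgz $HOME/AWS/package/
  tar xzf $HOME/AWS/package/latest.tgz -C $HOME/AWS/package/
  $HOME/AWS/package/3.package_start.sh >> $HOME/AWS/logs/${TS}_package.log 2>&1
'
systemctl restart nginx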

Note that I also added a line to restart Nginx, as I noticed that I had to relaunch it manually before I could access Gekko’s UI. I didn’t investigate further to understand why; maybe later.

On our reference server:

  • We also have to create the same directories.
  • We create the $HOME/AWS/1.make_package.sh shell script; this is the script called by our EC2 instance at each boot (a sketch follows this list).
  • We create a $HOME/AWS/3.package_start.sh shell script; this is the script which will be embedded in the package built by 1.make_package.sh, and which the EC2 instance will execute locally after downloading & unpacking the package.
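Neither script is reproduced here; a hedged sketch of 1.make_package.sh under the assumptions above (the file list and package name are made up) could be:

#!/bin/bash
# 1.make_package.sh <timestamp> -- build the package the EC2 VM downloads at boot
TS=$1
PKG=~/AWS/package/latest.tgz
tar czvf $PKG -C ~ \
  gekko/history gekko/strategies gekko/gekkoga/config gekko/gekkoga/results \
  AWS/3.package_start.sh \
  > ~/AWS/logs/${TS}_make_package.log 2>&1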

Note that we could improve this: 3.package_start.sh is the only one of our scripts which does not produce a log file prefixed with the same timestamp as the others. It takes the local hour of execution, instead of receiving the timestamp as a parameter as 1.make_package.sh does.

We reboot our EC2 reference instance:

After a few seconds, we check the rc.local log on our EC2 instance for the download of our reference package from our reference server. In rc.local, we redirected the logs to $HOME/AWS/logs/<date>_package.log:

Seems all good. Now, still on our EC2 VM, we check the 3.package_start.sh logfile:

Seems also good, no error. Of course, we will have to tweak this batch file later, so that the files we want to include in the downloaded package are deployed in the right place before launching Gekko or other actions.

Now on our reference server at home:

Seems also good, no error. Same thing here: our 1.make_package.sh script on our reference server will need to be tweaked so that we include in the locally created package all the files we need, including the 3.package_start.sh script which will be executed by the EC2 instance after download.

Now, before we make a new version of our reference AMI (it won’t be the last one :)), I will:

  • Do some cleaning in the AWS/logs, AWS/logs/old, AWS/package and AWS/package/old dirs, but also in gekko/history, gekko/strategies, gekko/gekkoga/config and gekko/gekkoga/results, as I made a lot of tests of the various shell scripts you found above, and also because I plan to deploy through my reference server only what is necessary for Gekko to perform a backtest. That includes: the history database, its conf file, the strategy and indicators it needs, and anything else needed; same for Gekkoga. But now it’s up to you to decide what you want to do on your reference AMI!
  • Add a shell script to automatically update the Dynamic DNS handling my Gekko EC2 FQDN. As it is 99% personal, I won’t detail it here. What it does is check the external IP of the machine, check whether it differs from the last known one, and if so, update the A record of the FQDN on the DNS server.

To create a new AMI from your reference VM, you know the procedure; we already did it above, as well as instantiating it through the AWS CLI installed somewhere.

Next step will be to try to launch a Gekkoga backtesting session by instantiating our AMI on a much better VM in terms of CPU and memory.

But be warned, this will be charged by Amazon !

This will be next article’s topic.

Automate Gekko’s Strats parameters backtesting (with Gekkoga)

We saw in previous posts how to install gekko, use it, and customize our first strategy.

But, as we figured out, every strategy, whether it is your own custom one or any Strat you find on the Internet with excellent backtest results shown by its creator, also needs to be tweaked for a specific market, currency and asset. That means we need to find the good parameters to be used with this specific Strat. And you will need a lot of backtesting, then new tests on the live market with simulated orders (paperTrader mode), before going “live”.

Note that finding the perfect parameters for a backtest (the ones which will give you the best profit and best sharpe ratio) does not mean that it will perform well on a live market, as trends and volumes simply cannot be known in advance. It would be too easy. Therefore, the tools we will use here have a strong limitation: they will help you find the best parameters for a Strat using data from the past, but in no way does that mean it will perform well in the future (see overfitting or curve fitting).

So, first of all, we need to define a good backtest strategy, whatever the way (automated or not) we find and test parameters. IMO a good testing strategy -this is what is done in AI learning & testing phases- is to split your backtest dataset into several parts: one long dataset to make a general backtest and reach good profit & sharpe; then test on smaller datasets, of course still from the same market/currency/asset, but with different kinds of trends. This way we will be able to understand how well the Strat performs with parameters X or Y on this or that kind of trend.

We could also run Gekkoga sessions on datasets “specialized” in one kind of trend, and check whether, and how much, the optimized parameters found change between datasets.
With that kind of results and knowledge, we could imagine implementing a strat which would dynamically change and auto-adapt its parameters to the current trend, if it is a long-term one. Remember the parameters won’t only depend on the trend; depending on the indicators used, they could also depend on the market prices or other things.

In any case, each “good” test should be followed by a stronger, manual analysis from you: you will need to study the trades (when they were made). Are they accurate or not? Were the large losses controlled by a stop-loss implementation or not? If you change one parameter a little, won’t it make your Strat less profitable on your past dataset, but also less risky and more profitable for the future? The main key is probably to control large market losses. Then to add some bonuses to the Strat.

Let’s come back to this post: I wanted to complement my theoretical studies of various indicators -to better understand them and eventually find an appropriate way to mix them- with technical tools to improve the backtesting phase. When I test a Strat, I need to test it a lot of times; therefore I naturally searched for tools which would allow me to automate that, and I found -among others- Gekkoga.

Gekkoga is advertised as a Genetic Algorithm (GA) trainer, meaning it will:

  1. Automagically test random parameters using controlled backtests it launches through your Gekko installation,
  2. Automagically try to mix some of the parameters which seemed to perform well, launch new tests, study the results, and/or mutate them or others,
  3. Log the best result it finds in terms of profit (which may not be the most accurate target!) together with the parameters used during the test,
  4. Until… I don’t know yet whether it actually ever ends! I did not check the code to understand that yet; I simply used it.

Gekkoga Installation

Enough talk… Let’s install it. It’s quite simple, BUT you need a fully functional Gekko. Do not try to use Gekkoga if you don’t have a working Gekko and don’t master its use yet.

cd <gekko_installdir>

git clone https://github.com/gekkowarez/gekkoga.git && cd gekkoga

Now we need to deploy a fix to make Gekkoga compatible with the latest Gekko v0.6x we installed previously, as some changes were made in its API.

git fetch origin pull/49/head:49
git checkout 49

We manually download a fixed index.js which supports nested Gekko parameters and fixes something in the mutations:

mv index.js index.js.orig
curl -L -O https://raw.githubusercontent.com/gekkowarez/gekkoga/stable/index.js

We manually download a fixed package.json which supports nested config parameters:

mv package.json package.json.orig
curl -L -O https://raw.githubusercontent.com/gekkowarez/gekkoga/stable/package.json

Then we install it. Once again, beware: don’t run ‘npm audit fix’ as suggested at the end of the npm install command below; it would break things.

npm install

Note: Gekkoga needs either Gekko’s full UI mode to be launched (use the PM2 startup script start_ui.sh we created in Gekko’s installation post), or the API server found in <gekko_installdir>/web, and it will make intensive use of it. This is why, in Gekko’s installation post, I recommended raising the stock timeouts in <gekko_installdir>/web/vue/dist/UIconfig.js and in <gekko_installdir>/web/vue/public/UIconfig.js to 600000.

Gekkoga Configuration

Gekkoga’s configuration files live in <gekko_installdir>/gekkoga/config/. We will copy the original one to a new one dedicated to our previously customized strategy (MyMACD), and symlink it to the config file name we defined in the start.sh script. This will make our life easier later when we have new strats to backtest: we will just need to copy into gekkoga/config one config file whose filename contains the name of the strat used, and update the symbolic link config/config-backtester.js to point to this specific config file.

cp <gekko_installdir>/gekkoga/config/sample-config.js <gekko_installdir>/gekkoga/config/config-MyMACD-backtester.js

ln -s <gekko_installdir>/gekkoga/config/config-MyMACD-backtester.js <gekko_installdir>/gekkoga/config/config-backtester.js

Now we will edit <gekko_installdir>/gekkoga/config/config-MyMACD-backtester.js. It is not complicated, BUT we need to define EXACTLY the same parameters as in your gekko config file or toml file; otherwise Gekkoga will start, but with no trades if anything is wrong. Beware of typos; beware of the kind of data you use and its type (integer vs float) & eventual decimals.

Hint: try to use as many integers as possible in your Strat parameters, and avoid floats when you can. This is why, in our customized MACD Strat, I defined the stop-loss percentage as an integer, and then in MyMACD.js, when we need to use it, we divide it by 100. If we had used a float to allow a very accurate stop-loss, it would have forced us to tell Gekkoga to generate randomized floats, and even if we can try to fix the number of decimals used, the number of possible combinations and subsequent backtests to perform would be exponential. Also, the .toFixed(2) we will sometimes use in the Gekkoga conf file is an artefact: the library used to generate the random numbers will actually generate floats with a much higher precision than 2 decimals, and we artificially truncate or round them to 2 digits. It means that Gekkoga will indeed perform a lot of backtests with the same float rounded to 2 digits, because the floats it actually generated in the backend were not equal.

First we change the config section, once again we want it to reflect EXACTLY our gekko config file. Same parameters, same values.

const config = {
  stratName: 'MyMACD',
  gekkoConfig: {
    watch: {
      exchange: 'kraken',
      currency: 'EUR',
      asset: 'ETH'
    },

We use the scan functionality to automatically detect the daterange of the dataset to use, as we only have one dataset for kraken, and for now we want to test Gekkoga on the whole dataset. Later on, once we know Gekkoga works, you will be able to change that in order to reduce the dataset and reflect the testing strategy I explained before.

daterange: 'scan',

/*
daterange: {
  from: '2018-01-01 00:00',
  to: '2018-02-01 00:00'
},
*/

Now we update our balance and fees.

simulationBalance: {
  'asset': 0,
  'currency': 100 // note that I changed this since the initial confs in other posts
},

slippage: 0.05,
feeTaker: 0.16,
feeMaker: 0.26,
feeUsing: 'taker', // maker || taker

The apiUrl should be OK:

apiUrl: 'http://localhost:3000',

We won’t change the standard populationAmt, variation, mutateElements, minSharpe or mainObjective.

parallelqueries needs to be updated to reflect your CPU configuration, as it is the number of parallel backtests Gekkoga will be able to launch. The more CPUs you have, the better. But align this with your number of CPUs: if you have 4 CPUs or vCPUs, use 3 or 4. With 4, your whole CPU capacity will be filled by Gekkoga; it could make your computer almost unusable for other tasks while Gekkoga is running (and your CPU fan will start to make noise). If you have a dedicated Gekkoga computer, this is fine; if you don’t, this may be a problem, so consider a lower value. It’s up to you.

In my case,

  • On my regular laptop, I have 2 CPUs, but 4 seen by the OS thanks to hyper-threading, so I’ll use 3, as I want one CPU to be available for other tasks;
  • At the time I’m writing this article, I tried to run Gekkoga on an Amazon EC2 t2.micro with this setting at 1; I lost control of the VM and had to restart it;
  • For this test I will launch it on my Intel NUC VM, powered with 2 vCPUs, but I’ll keep the setting at 1 to not stress it too much, as a NUC is not designed for intensive CPU computations (I’m afraid the fan won’t cool the case & CPU enough).

parallelqueries: 1,

I don’t use email notifications for now, so I leave it set to false.

Now we enter the interesting part. We will tell Gekkoga all the parameters our Strat needs to be filled with, and their values. If you enter a fixed value, Gekkoga will use that value all the time, in every backtest; it won’t change it. But we can define:

  • Some ranges, by using arrays, eg. [5,10,15,30,60,120,240]
  • Randomized values, by using functions as randomExt.integer(max, min) or randomExt.float(max, min).toFixed(2)

Did you notice the .toFixed(2)? It forces the randomized float to be rounded to 2 decimals, and this rounded value is what will be used in Gekko's backtest. But keep in mind that the float is still generated with a higher number of digits, and 2.0003 or 2.0004 will in both cases be rounded to 2.00. This will lead to duplicate backtests, which is why I recommended using as many integers as possible instead of floats.
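
A quick illustration of the duplicates problem (the two float values below are made up for the example):

// Two distinct floats, as the random library could generate them...
const a = 2.0003;
const b = 2.0004;

// ...collapse to the same rounded parameter, so Gekkoga ends up running
// two backtests with effectively identical settings:
console.log(a.toFixed(2), b.toFixed(2)); // "2.00" "2.00"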

First, the candleValues. I’m not really confident about short candles, but as the tool will test it for us, why not. Let’s extend it a little bit and remove a few values.

candleValues: [5,15,30,60,120,240,480,600,720],

Now the Strat parameters …

getProperties: () => ({

  historySize: randomExt.integer(50, 0),

  short: randomExt.integer(30, 5),
  long: randomExt.integer(100, 15),
  signal: randomExt.integer(20, 6),

  thresholds: {
    //up: randomExt.float(20,0).toFixed(2),
    //down: randomExt.float(0,-20).toFixed(2),
    up: randomExt.integer(400, 0) / 100,
    down: randomExt.integer(0, -400) / 100,
    persistence: randomExt.integer(9, 0),
    stoploss: randomExt.integer(50, 0),
  },

Let’s give it a try …. we will need to carefully check the console information.

gekko@bitbot:~/gekkoga/gekkoga$ node run.js -c config/config-MyMACD-backtester.js
No previous run data, starting from scratch!
Starting GA with epoch populations of 20, running 1 units at a time! node run -c config/config-MyMACD-backtester.js

Woohoo ! It started. Now I stop it and will create a nice PM2 startup script as I want to easily make it run in the background and easily get information on it.

echo '#!/bin/bash' > start.sh
echo "rm logs/*" >> start.sh
echo "pm2 start run.js --name gekkogaMyMACD --log-date-format=\"YYYY-MM-DD HH:mm Z\" -e logs/err.log -o logs/out.log -- -c config/config-MyMACD-backtester.js max_old_space_size=8192" >> start.sh

chmod 755 start.sh

We restart it using this script …

./start.sh

Let’s check its logs …

It’s running in the background, good. Now let’s have a look at Gekko’s UI logs as it is supposed to receive API calls from Gekkoga:

We can see some calls to the backtest API, perfect. Now let's check other information while we are waiting for the first results:

Conclusion: Gekkoga is pushing hard on one of the 2 CPUs, which conforms to what we defined in the conf. Memory consumption is low.

And finally, 21 minutes later, the first epoch completed:

You can find an explanation of an epoch here. What we see here is the winner of this epoch, and Gekkoga keeps running to compute more and compare them. It logs the best combination found in <gekkoga_installdir>/results in a JSON format, so to display it we will use jq (run 'apt-get install jq' as root if you don't have it yet):

So we see here that the winner for now used a long value of 65, short 29, signal 9, a 5mn candle size, 15 candles of history size, an up threshold of 2.44, a down threshold of -2.13, a stoploss of 16% and a persistence of 0. Our Gekkoga config file using only integers is well formatted !

Now, the Sharpe ratio is very high; but in terms of estimated profits, on this whole dataset we actually performed better (1101%) than the market (817%), with 68 trades.

Now the problem is that it will take a very, very… very… long time to run. So, for now, I don't have much more to say; we have to wait. In the meantime we can continue to work on improving our knowledge about indicators and how they work, and imagine improvements to our Strats.

The only way to optimize the runtime seems to be to make it run on a higher number of CPUs and adapt the parallelqueries setting. I've never done that before, but it gave me the idea to try to make it run on an Amazon EC2 machine. This will be detailed in another article.

Gekko Strategy customization

Now that we’ve briefly seen how to install gekko and how to use its main functionalities, we will try to customize a first strategy.

As a first step, you need to review Gekko’s excellent standard documentation:

I won't explain here all the functions used, since they are explained in the standard documentation linked above. Neither will I explain how to code. Testing your strategy was explained before, but here is a quick recap:

From the UI or from the CLI you can backtest a strategy or use a live paperTrader bot:

  • With the UI, the strategies in <gekko_installdir>/strategies will be proposed in the strategies dropdown list in the backtest page in the UI; also you need a .toml file with the parameters for your strategy in <gekko_installdir>/config/strategies
  • With UI and CLI, if you want to backtest, you need to have populated your database with datasets downloaded from markets
  • You may also want to run a livebot with the paperTrader activated to simulate a real trading bot on the live market; but I strongly suggest backtesting your strategy first, as it is much faster to test it on a huge amount of data, ie. a large timeframe of market variations.

As we saw, strategies are stored in <gekko_installdir>/strategies, those are .js files. As we started previously by testing the MACD strategy, let’s take a look at it.
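
As a reminder of their general shape, here is a minimal, hypothetical skeleton of such a file (the real strategies implement much more logic in each function):

// Hypothetical minimal Gekko strategy skeleton
var method = {};

method.init = function() {
  // runs once before the first candle: initialize indicators and state
}

method.update = function(candle) {
  // runs on every new candle
}

method.log = function(candle) {
  // optional: write debug information to the logs
}

method.check = function(candle) {
  // trading rules: call this.advice('long'), this.advice('short') or this.advice()
}

module.exports = method;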

The out of the box MACD Gekko’s strategy

First we will work on a copy of the MACD, but we won’t modify it for now.

cp <gekko_installdir>/strategies/MACD.js <gekko_installdir>/strategies/MyMACD.js

In the init function, we can see a few variables being initialized:

  • A trend structure, which will be used by the strat to “remember” the past
  • The requiredHistory which is read from the configuration file config.js in the tradingAdvisor section
  • A call to the MACD indicator, which is by default stored in
    <gekko_installdir>/strategies/indicators

Now, let’s have a look at the MACD indicator: open
<gekko_installdir>/strategies/indicators/MACD.js

  • As we can see, it uses the EMA.js indicator
  • It defines three EMAs: one with the short value defined in the configuration file config.js, another one with the long value, and the last one as the signal
  • The MACD indicator will (see the sketch below):
    • Update the short and long EMAs with the market price (the last candle closure value, provided by the MACD strategy calling the MACD indicator)
    • Calculate the difference between them
    • Update the signal EMA by using the diff between the short and long EMAs as input
    • Return the difference between that short/long diff and the signal EMA to our strategy
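
Put as a short sketch (simplified from the real <gekko_installdir>/strategies/indicators/MACD.js, so take the exact property names with a grain of salt):

// Simplified sketch of the MACD indicator update logic
Indicator.prototype.update = function(price) {
  this.short.update(price); // short EMA follows the price
  this.long.update(price); // long EMA follows the price
  this.diff = this.short.result - this.long.result; // the "macd" line
  this.signal.update(this.diff); // the signal EMA smooths the diff
  this.result = this.diff - this.signal.result; // what the strategy reads
}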

Please, check some MACD documentation to better understand this logic.

Now that we know the basics of the MACD indicator (it takes 3 inputs and returns one value) let’s come back to our MyMACD strategy.

The log function will display some information in Gekko’s log at each candle update.

We can see that it creates a macd variable which "points" to our macd indicator defined in the init function, and it displays the various results for the 3 EMAs by using macd.short, macd.long, macd.signal, macd.diff, and macd.result, which is the real output of the MACD indicator.
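
It looks roughly like this (paraphrased from the stock MACD.js; the labels match the log lines we will see in the console later):

method.log = function() {
  var digits = 8;
  var macd = this.indicators.macd;

  log.debug('calculated MACD properties for candle:');
  log.debug('\t', 'short:', macd.short.result.toFixed(digits));
  log.debug('\t', 'long:', macd.long.result.toFixed(digits));
  log.debug('\t', 'macd:', macd.diff.toFixed(digits));
  log.debug('\t', 'signal:', macd.signal.result.toFixed(digits));
  log.debug('\t', 'macdiff:', macd.result.toFixed(digits));
}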

Now, the check function. This is the function where all the trading rules are implemented. What it does, basically (sketched after the list below):

  • If the MACD result is above the up trend threshold in config.js, then (in short):
    • It checks if the number of iterations of the up trend is greater than the persistence setting (which comes from config.js)
    • If yes, and if we haven't yet advised to BUY, it tries to BUY (long order)
  • Else, if the MACD result is below the down trend threshold in config.js, then:
    • It checks if the number of iterations of the down trend is greater than the persistence setting (which also comes from config.js)
    • If yes, and if we haven't yet advised to SELL, it tries to SELL (short order)
  • Else, meaning the MACD result is between the down threshold and the up threshold, it does NOTHING.
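
In sketch form (paraphrased and shortened from the stock MACD.js):

method.check = function() {
  var macddiff = this.indicators.macd.result;

  if(macddiff > this.settings.thresholds.up) {
    // uptrend: reset the state if the trend is new, then count candles
    if(this.trend.direction !== 'up')
      this.trend = { duration: 0, persisted: false, direction: 'up', adviced: false };

    this.trend.duration++;

    if(this.trend.duration >= this.settings.thresholds.persistence)
      this.trend.persisted = true;

    // BUY once per persisted trend
    if(this.trend.persisted && !this.trend.adviced) {
      this.trend.adviced = true;
      this.advice('long');
    } else
      this.advice();

  } else if(macddiff < this.settings.thresholds.down) {
    // downtrend: same pattern, mirrored, ending in this.advice('short')
  } else {
    // no trend: do nothing
    this.advice();
  }
}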

This is how it performs on a long-term Kraken/EUR/ETH dataset, with the standard parameters. I still use MACD and not MyMACD, as we didn't modify MyMACD yet, nor did we modify config.js to use it.

Backtest Parameters

Results

As we can see, with some quick estimation of the parameters (6-hour candles + higher thresholds than for shorter candles), if we had made no trades at all, we would have gained 984% straight from the market evolution; but with our 168 trades we earned less: 835% of gain.

Now let’s try to tweak it a little bit …

Parameters

Results

Quick analysis

Just by adjusting two or three parameters we gained in profit. Remember that this can't be considered a guarantee that you will indeed earn assets or currency in "real life" on a live market, as here we tweak the model with data from the past: we have all the time we need to tweak it "until we see the result we want". But we don't know if the same parameters will perform well in the future.

If we zoom in a little bit on the timeline, we can see that the strategy doesn't perform so badly during strong uptrends or downtrends. We can also see that it could be tweaked, as we often see some LONG (buy) orders appearing just before a loss on the market, followed by a SHORT (sell) order. Or SELL orders just before a BUY, despite an uptrend market.

Conclusion: I'm not an expert, and I don't know if we can rely on only one indicator. Literature tends to prove that every market analysis needs to be double- or triple-checked with other indicators.

What is sure is that every indicator needs to be tweaked, and you need to analyse the market with other, more accurate tools, with a visual display of the indicators. I personally use Kraken's new Trading & Charting tool as well as other sites such as TradingView.com. I first play globally with one indicator at a time, adjust the short/long parameters, study their crosses, and then study the difference between both, so that I can report what I think are good triggers in Gekko's config.js thresholds. Then I backtest, as we will see at much greater length in another article.

Cloning the MACD strategy

So we copied <gekko_installdir>/strategies/MACD.js to <gekko_installdir>/strategies/MyMACD.js

For now we will just modify it a little bit, to later check we are using this strategy and not the stock MACD strat. Edit MyMACD.js and jump to line 51:

log.debug('calculated MyMACD properties for candle:');
log.debug('\t', 'short:', macd.short.result.toFixed(digits));
log.debug('\t', 'long:', macd.long.result.toFixed(digits));

Save it, and that's all.

We also need to copy
<gekko_installdir>/config/strategies/MACD.toml to
<gekko_installdir>/config/strategies/MyMACD.toml so that the backtest UI can display customized parameters for the MyMACD settings.

Now we need to modify
<gekko_installdir>/config.js so that Gekko will use it.

First the trading.advisor plugin needs to be modified to call the right strategy. It’s just the name of your strategy.js file, without the .js extension.

config.tradingAdvisor = {
  enabled: true,
  method: 'MyMACD',
  candleSize: 360, //to reflect the 6 hours candle I used in previous backtest
  historySize: 6, //to reflect the 6 candles of warmup period in backtest UI
}

Then we need to provide the tradingAdvisor with some parameters:

// MACD settings:
config.MyMACD = {
  // EMA weight (α)
  // the higher the weight, the more smooth (and delayed) the line
  short: 10,
  long: 21,
  signal: 9,
  // the difference between the EMAs (to act as triggers)
  thresholds: {
    down: -0.7,
    up: 0.3,
    // How many candle intervals should a trend persist
    // before we consider it real?
    persistence: 0
  }
};

Those 2 modifications will allow us to use our strat with Gekko's CLI.

Next, in case we want to use the UI to backtest or run a livebot, we also edit <gekko_installdir>/config/strategies/MyMACD.toml

short = 10
long = 21
signal = 9

[thresholds]

down = -0.7
up = 0.3
persistence = 0

Let’s test it on the UI.

Woohoo, same result :] Just to be absolutely sure we now use MyMACD and not MACD, let’s run it through the CLI and check the console or logs:

gekko@HP850G3:~/gekko$ node gekko --config config.js --backtest



2019-01-09 14:07:52 (DEBUG): calculated MyMACD properties for candle:
2019-01-09 14:07:52 (DEBUG): short: 247.91113481
2019-01-09 14:07:52 (DEBUG): long: 253.43444971
2019-01-09 14:07:52 (DEBUG): macd: -5.52331490
2019-01-09 14:07:52 (DEBUG): signal: -4.87686289
2019-01-09 14:07:52 (DEBUG): macdiff: -0.64645201
2019-01-09 14:07:52 (DEBUG): In no trend
2019-01-09 14:07:52 (DEBUG): calculated MyMACD properties for candle:
2019-01-09 14:07:52 (DEBUG): short: 249.96729212
2019-01-09 14:07:52 (DEBUG): long: 253.96040883
2019-01-09 14:07:52 (DEBUG): macd: -3.99311671
2019-01-09 14:07:52 (DEBUG): signal: -4.70011365
2019-01-09 14:07:52 (DEBUG): macdiff: 0.70699694
2019-01-09 14:07:52 (DEBUG): In uptrend since 1 candle(s)
2019-01-09 14:07:52 (INFO): 2017-10-24 07:29:00: Paper trader simulated a BUY 0.00000000 EUR => 3.39807550 ETH

As you already figured out, we can see the modification we made in MyMACD.js, so YES, we are using the right strategy. Now we can try to modify it, as a quick example.

Customizing our strategy

One thing which is not covered in this strategy is the "notrend" possibility. When the MACD result stalls for a long time between our down and up thresholds, it doesn't mean that the market and currency/asset rates are stable. It means that the market is not moving up or down enough to be used by our strategy.

But the market can still decrease (or increase), and maybe after a few candles we should sell, to keep our gains by converting our assets back into currency. This is called a stoploss.

So basically we should and will:

  1. Introduce a new parameter in the conf, called stoploss
  2. Add the handling of the stoploss in the "notrend" section of the strategy: if the closing price of the candle we are reviewing is below our computed stoploss price (derived from the "last buy price"), then we order Gekko to advise a short position (sell).
  3. So we will also need to modify the uptrend section, to update the stoploss price if the market is uptrend.

Note that there are other ways to do that; we could also add an OR condition to the downtrend section, to check if (MACD diff is below down_threshold OR candle.price < stoploss_price), as sketched just below.
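
That alternative could look roughly like this (a sketch only; we will implement the "stall trend" variant instead):

// Hypothetical alternative: fold the stoploss check into the downtrend condition
} else if(macddiff < this.settings.thresholds.down
          || (this.stoploss_price != "" && candle.close < this.stoploss_price)) {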

Adding a new parameter in conf

Edit <gekko_installdir>/config.js

config.MyMACD = {
  // EMA weight (α)
  // the higher the weight, the more smooth (and delayed) the line
  short: 10,
  long: 21,
  signal: 9,
  // the difference between the EMAs (to act as triggers)
  thresholds: {
    down: -0.7,
    up: 0.3,
    stoploss: 10,
    // How many candle intervals should a trend persist
    // before we consider it real?
    persistence: 0
  }
};

Now edit
<gekko_installdir>/strategies/MyMACD.js

In the init block, we add this. Note that if you define a variable or const in the init() block, it won't be persisted into the check() block, so we use the "this" object to store data (illustrated right after the snippet below).

//get the stoploss rate from conf
this.stoploss_rate = this.settings.thresholds.stoploss;
//reset the stoploss_price
this.stoploss_price = "";
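
To illustrate that note about persistence between init() and check(), a tiny hypothetical example:

method.init = function() {
  var warmup = 0; // local variable: gone once init() returns
  this.counter = 0; // stored on the strategy object: still there in check()
}

method.check = function(candle) {
  this.counter++; // works
  // warmup is NOT accessible here
}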

We need to modify the check function so that it receives the candle description as input, as we will need the candle close price both when we buy and when we update the stoploss while the market is in an uptrend.

method.check = function(candle) {

In the uptrend section, we add this, to set the stoploss price each time we buy, and to update it each time the market rises, if the new price is higher than our current stoploss_price:

log.debug('In uptrend since', this.trend.duration, 'candle(s)');

if (this.stoploss_price != "" && candle.close > this.stoploss_price)
{
  this.stoploss_price = candle.close - candle.close * this.stoploss_rate / 100;
  log.info('===> New computed stoploss price is:', this.stoploss_price, '<===');
}

if(this.trend.persisted && !this.trend.adviced) {
  this.trend.adviced = true;
  this.advice('long');
  this.stoploss_price = candle.close - candle.close * this.stoploss_rate / 100;

  log.info('===> We BOUGHT at ~', candle.close, ' and computed stoploss price is:', this.stoploss_price, '<===');
} else
  this.advice();

Now in the downtrend section, if we SELL, we should reset the stoploss_price as we don’t own any assets anymore.

if(this.trend.persisted && !this.trend.adviced) {
  this.trend.adviced = true;
  this.advice('short');
  this.stoploss_price = "";
  log.info('===> We SOLD at ~', candle.close, ' and reset the stoploss price. <===');
} else
  this.advice();

Let's give it a try, just to make sure we didn't break anything yet, even if we actually did not change anything in the buy/sell decision tree.

node gekko --config config.js --backtest

So what do we see ?

  • First, that I disabled the debug mode at the beginning of the config.js file and I forgot to write it here 🙂
  • The first BUY displayed has initialized the stoploss_price by computing the stoploss_rate we defined in conf
  • Then the stoploss_price keeps rising, as we took care to not update it if the market price was lower than our stoploss_price
  • When we SELL, the stoploss_price is reset
  • Seems all good so far !

Now let’s modify the “notrend” block and create a “stall” trend with a SELL if we detect a persistence and a market price lower than our stoploss_price.

} else {
  //log.debug('In no trend');

  // new trend detected
  if(this.trend.direction !== 'stall')
    // reset the state for the new trend
    this.trend = {
      duration: 0,
      persisted: false,
      direction: 'stall',
      adviced: false
    };

  this.trend.duration++;

  log.debug('In stalled trend since', this.trend.duration, 'candle(s)');

  if(this.trend.duration >= this.settings.thresholds.persistence)
    this.trend.persisted = true;

  if (this.trend.persisted && !this.trend.adviced)
  {
    if (this.stoploss_price != "" && candle.close < this.stoploss_price) {
      this.trend.adviced = true;
      this.advice('short');
      this.stoploss_price = "";
      log.info('===> We \'Stalled\' SOLD at ~', candle.close, ' and reset the stoploss price. <===');
    }

    log.info('===> Market not low enough to \'stall\' sell <===');
  }

  this.advice();
}

Beware that we also need to modify the downtrend block condition, as we don't want it to SELL a second time when the market really goes downtrend after we already sold earlier thanks to our new "stall trend" code:

} else if(macddiff < this.settings.thresholds.down && this.stoploss_price != "") {

And we test it (note that I commented out the last log.info line in the Stall block to reduce verbosity):

It seems to work. Don't forget to copy the MACD.toml file to a new MyMACD.toml file and add the stoploss parameter, so you can use it in the backtest UI.

Conclusion and other possible improvements

Conclusion

So how does it perform ? Well, not so good 🙂 I will write another post about backtesting, and how to try to tune the parameters. Because, as we already noticed, tweaking the parameters does require a lot of analysis, and a lot of tests.

Here we just introduced a new parameter, and here is the result, knowing that I kept the previous test parameters.

So "out of the box" our new strategy is less efficient than the stock MACD. But we gained control over a new kind of trend on the market watched by the Strat. We need to tune it and find better values for the parameters. Maybe the code also needs to be optimized; I don't pretend AT ALL to have a good knowledge of coding best practices.

In a next article, I will explain how I managed to launch a massive test with randomized (or not) parameters, in order to track the best results (on past data, again), using public tools from contributors.

For information, one backtest with this strategy on my i7 "mid-class office" laptop takes approximately 41 seconds, after I modified all the log.info() calls we used above to use log.debug() instead, and with debug mode deactivated in config.js.

This was just an example to explain how to instantiate and modify a strategy. I don't claim this modification to be a gain. Again, this is at your own risk, from running a non-optimized standard Gekko Strat to running a customized strategy. You will probably lose your investment, as there is no magic possible: it is not possible to anticipate the markets without a huge risk, or a very high confidence in your knowledge of the markets.

Ideas for more improvements

Other possible improvements you could work on are:

  • Add a parameter in the conf to provide your strategy with the last type of order (buy or sell) and its price, so that your strategy will be able to use it for its first decision
  • Calculate the stoploss in the init section if the last action in the conf was a buy and you know its price, so that you won't miss the sell if the very first trend your strategy has to deal with is a long stall-but-decreasing market
  • Deal with more indicators than just the MACD
  • Port the stoploss principle to buy actions if we are on a long but slow persisted market raise
  • What is currently not possible in Gekko is to handle multiple candle sizes at the same time, so that you could have one indicator working on short-term trends and another one on long-term trends, and take decisions based on the two (or N candle sizes). There are some forks allowing it, but I didn't give them a try yet.

Gekko trading bot usage

Now that we have installed Gekko, we need to feed it with data and make it useful. Please keep in mind that a trading bot is in NO WAY a guarantee of earning money, as everything depends on the logic of your strategy and on the markets (currency & asset) you will use.

We will follow 4 simple steps :

  1. Feeding your local Gekko database with market’s data so that we will be able to locally make some tests with strategies
  2. Run a backtest, which means testing a strategy on the local Gekko database we previously populated
  3. Run a livebot with the paperTrader plugin, to simulate how a strategy would perform on a live market
  4. Launch the strategy, live, which means we will allow Gekko to really make buy/sell (long/short) orders

Feeding Gekko with market’s data

This part is quite simple and we will first use the UI to do it, click on “Local Data” on the top menu.

If you try to click on “Scan available data”, Gekko will complain, this is normal as we didn’t provision anything yet.

Now click on “Go to the Importer” button below.

For my test I will choose Kraken, the Euro currency, and Ethereum asset, and use the default daterange but you can change it.

Beware: downloading data from markets such as Kraken using (for now) anonymous access will often trigger some kind of bandwidth and access limitations, so the download rate will be low, and if you request a huge period of data, it may take a long time. You may also have to relaunch it several times, reducing the timeframe each time to complement the previously downloaded one.

Now, click on Import.

Gekko starts to "automagically" download the data from Kraken and records it in its local database.

Doing the same thing with Gekko's CLI (Command Line Interface, ie. without the UI) is quite easy:

  • Modify Gekko’s config file

cd <gekko_installdir>

cp sample-config.js config.js

edit config.js with your favorite editor

  • Now, as explained in the official documentation, make sure the candleWriter plugin is enabled (around line 263 in the out-of-the-box config.js file), so that Gekko will be able to record the data

config.candleWriter = {
  enabled: true
}

  • Now check the conf around line 345, and configure what daterange you need to import

config.importer = {
  daterange: {
    // NOTE: these dates are in UTC
    from: "2017-11-01 00:00:00",
    to: "2017-11-20 00:00:00"
  }
}

  • Now run gekko and tell it to import data

node gekko --config config.js --import


Now, if we take a look back at Gekko's UI, we can see that it detects 2 imported datasets:

The first one is the one we imported using the UI, from Kraken. The second one is from Binance (I stopped it before it reached 2017-11-20). This is because we didn't modify the config.watch section in config.js:

config.watch = {

  // see https://gekko.wizb.it/docs/introduction/supported_exchanges.html
  exchange: 'binance',
  currency: 'USDT',
  asset: 'BTC',

  // You can set your own tickrate (refresh rate).
  // If you don't set it, the defaults are 2 sec for
  // okcoin and 20 sec for all other exchanges.
  // tickrate: 20
}

You are free to modify it; just make sure that you use an allowed market/currency/asset combination. Checking the console logs when you launch the CLI will help you analyze any error Gekko could encounter.

Backtesting a strategy on local data

Now that we have a populated database, let's try out a strategy. The goal here is not to explain what the best strategy is or how to tune it; we just want to check how it works.

First, we use the UI: in the Backtest menu, click “Scan available data” button.

We can once again see the two series of data we imported previously, good. Now, below on the same screen, let’s choose the MACD strategy. This is a quite standard indicator (as well as the other ones out-of-the-gekko-box) and the strategy implementing it is also quite simple as we will see in a next article.

Note that what is displayed on this screen can be changed:

  1. The strategies in the drop-down list are the ones in
    <gekko_installdir>/strategies/*.js files (eg. MACD.js for the MACD strategy)
  2. The parameters on the right are from the <strategy_name>.toml files in
    <gekko_installdir>/config/strategies
  3. The paperTrader settings we will review below are stored in the
    <gekko_installdir>/config/plugins/paperTrader.toml file

We change the candle size to 3 hours, and use 4 candles of history to "warmup" the strategy and provide the indicator with some historical data.

Now let’s have a look at the paperTrader settings and change it to reflect Kraken fees:
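
The screenshot is not reproduced here, but the values mirror the CLI configuration we will see further below (a sketch; the UI field names may differ slightly):

// Kraken-like fees and slippage, as set in the paperTrader settings
simulationBalance: { asset: 1, currency: 100 },
feeMaker: 0.16, // maker fee in %
feeTaker: 0.26, // taker fee in %
feeUsing: 'taker',
slippage: 0.05, // assumed slippage/spread per trade, in %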

And we launch the backtest ! Wait .. no Launch button ? Oh, we forgot to select a dataset. Go back to the top of the page and click on the Kraken box. Scroll down and click "Backtest".

After a very short time, because of a simple strategy and a very short timeframe in the dataset, you will see 3 sections of results:


Those are the general statistics. The simulated profit here may sound bad (-24%), but it is actually better than the standard performance of the market if you had done nothing with your 1 asset and 100 currency configured in the paperTrader settings (-41%). We can see that the strategy actually performed 38 trades.

Here we can see the whole timeframe with BUY/LONG orders (green dots) and SELL/SHORT orders (red dots). You can zoom in. As you can guess, when you see a green dot just before a big market fall, this is not so good.

Finally you can see here a detailed view of each roundtrip (a buy+sell or sell+buy, so that a PnL can be calculated).

If you want to use the CLI this is not so complicated.

Edit your config file (remember ?
<gekko_installdir>/config.js) and:

  • Make sure the config.tradingAdvisor section (tip: around line 33) is configured as you expect. Basically, this is where you provide the same parameters as we saw in the UI to the strategy you chose. Here we still use the MACD.

config.tradingAdvisor = {
  enabled: true,
  method: 'MACD',
  candleSize: 180, //unit is minutes
  historySize: 4, //this is the historySize in the UI
}

// MACD settings:
config.MACD = {
  // EMA weight (α)
  // the higher the weight, the more smooth (and delayed) the line
  short: 10,
  long: 21,
  signal: 9,
  // the difference between the EMAs (to act as triggers)
  thresholds: {
    down: -0.025,
    up: 0.025,
    // How many candle intervals should a trend persist
    // before we consider it real?
    persistence: 1
  }
};

Make sure the paperTrader is enabled, to actually simulate orders (tip: around line 65).

config.paperTrader = {
  enabled: true,
  // report the profit in the currency or the asset?
  reportInCurrency: true,
  // start balance, on what the current balance is compared with
  simulationBalance: {
    // these are in the unit types configured in the watcher.
    asset: 1,
    currency: 100,
  },
  // how much fee in % does each trade cost?
  feeMaker: 0.16,
  feeTaker: 0.26,
  feeUsing: 'taker',
  // how much slippage/spread should Gekko assume per trade?
  slippage: 0.05,
}

And the performanceAnalyzer should also be enabled, around line 100.

config.performanceAnalyzer = {
  enabled: true,
  riskFreeReturn: 5
}

Remember ? Gekko will use the market, currency and asset defined at the beginning of the file, as we saw when we imported data using the CLI. I want to use the dataset from Kraken, so I’ll edit it:

config.watch = {

  // see https://gekko.wizb.it/docs/introduction/supported_exchanges.html
  exchange: 'kraken',
  currency: 'EUR',
  asset: 'ETH',

  // You can set your own tickrate (refresh rate).
  // If you don't set it, the defaults are 2 sec for
  // okcoin and 20 sec for all other exchanges.
  // tickrate: 20
}

Now scroll down to line 332 to edit the daterange we want to use. I'll stick with the 'scan' option: as I have only one dataset for Kraken, there is no conflict or choice to make, and Gekko will use the whole timeframe available. If I want to reduce the timeframe, I just have to comment out the daterange: 'scan' line, and uncomment and modify the 4 next lines.

config.backtest = {
  daterange: 'scan',
  // daterange: {
  //   from: "2018-03-01",
  //   to: "2018-04-28"
  // },
  batchSize: 50
}

Now, we launch it with

node gekko --config config.js --backtest

As you can see we get the same feedback as the one we had on the UI, without the graphical timeline of course.

Testing your strategy on a live market with a Live “paperTrader” Gekko

On the UI, click on the "Live Gekkos" top menu. The Market watchers are modules launched to gather live data from the markets you configure for each Live Gekko. The Strat runners are the paperTrader modules: they analyze the market, apply the STRATegy you defined to those data, and provide you feedback about the orders that were simulated and their results.

Here I chose the MACD strategy, and configured it with 5-minute candles for this example, as I didn't want to wait for hours to take screenshots 🙂

Note that on the right we choose the Paper Trader option, as we want to simulate the strategy on the live market.

Now let's configure Kraken's fees in the Paper Trader options below.

And we start it. A lot of things will change on the screen, but during the first 10 minutes we won't see anything interesting, as Gekko is gathering data; then it needs 1 candle of 5 minutes of history before it can eventually start taking decisions, depending on the market, your strategy and your strategy's settings.

While we are waiting for the first information to appear, let's try to come back to the "Live Gekkos" page.

So here I think we can see a little bug in the UI, as it says the duration since we launched the Market Watcher (which was automatically launched by Gekko) is 2h32mn, which is wrong. I think there is a mismatch between UTC time and local time when calculating the duration. Not a big issue.

The Strat runner we launched is running, and we can see that it did nothing yet: 0 trades, PnL (Profit and Loss) is 0, etc. You can click on its line to come back to the detailed view of this Strat runner.

After a few warmup minutes …

But we still can’t see any roundtrip, as Gekko only performed one trade for now (Amount of trades: 1 in the Runtime section). While we are waiting, please note that you can also manually check Gekko’s logs on the filesystem:

cd <gekko_installdir>/logs

ls -al

tail -f *

Now, I will stop this test (just click on the big red button inside the Strat runner section), and switch to the CLI to launch it manually, as I prefer not to depend on the UI to perform operations.

Using the CLI is easy, as we already configured the appropriate sections in config.js and especially enabled config.paperTrader. If we want to launch a real live Gekko on a market, the paperTrader will need to be disabled in the conf.

We just have to launch:

node gekko –config config.js

As running a livebot is supposed to be a very long activity, I don’t want to depend on my terminal (which could be unfortunately closed) and want it to run in the background. I’ll still be able to check and analyze the logs to understand what happened.

So we will create a small shell script to launch our livebot through PM2 as explained in the Gekko Installation page, and in PM2 documentation.

gekko@bitbot:~/gekko$ cat start_livetestbot-DEMA_PERSO_SL.sh

#!/bin/bash

rm logs/*
pm2 start gekko.js --name TESTBOT_MACD -e logs/err.log -o logs/out.log -- --config config-livetestbot.js

gekko@bitbot:~/gekko$

Now when we use start_livetestbot-DEMA_PERSO_SL.sh to launch our livebot, it will run in the background until you stop it with pm2 stop <id>, and you can check its logs in <gekko_installdir>/logs

Running a “real live” tradebot

In the UI, let's try to run a real Gekko on Kraken. Please note that we chose "tradebot" on the right, and that there are no more paperTrader settings, with your amount of currency, assets, etc.

This is normal as a live tradebot will gather your amount of assets and available currency directly from your trading portal (Kraken, Binance, etc.). Yes … we want real trading with real money !

But please remember our quick test with the stock MACD strategy was not so good ! So you need a lot of tests, and to understand the strategies and how to improve them, if you want to really go “live”. And even with good backtesting results, those results will only be good on PAST data, not incoming data from the markets. Therefore, even with a very good strategy on past data, you could have a very bad strategy with new data (this is called overfitting).

After this gentle reminder, let's try to click on the blue "Start" button.

Yes, this is normal, we want to go live, so we need to configure our trading platform’s credentials somewhere.

You can do it through the UI once again:

Let’s try to add junk data, and see where it ends. I won’t explain how to create an account on a trading platform, I suppose you already know how to do it and already have one. Otherwise, search a little bit.

gekko@bitbot:~/gekko_test/gekko$ cat SECRET-api-keys.json
{"kraken":{"key":"AAABBBBCCCC","secret":"DDDEEEFFFF"}}
gekko@bitbot:~/gekko_test/gekko$

Okay, so the fake data I used for the test ended up in <gekko_installdir>/SECRET-api-keys.json. This is a file you can edit without the UI to add several keys, for several accounts you may have on several platforms. The keys are then used by Gekko to connect to the platforms under your account and perform potentially real buy/sell operations. Once again, take care and be warned ! You will most probably lose money !

Another needed step is to review our config.js file a little bit:

  • config.paperTrader needs to be disabled for the real trading plugin to work.

config.paperTrader = {
  enabled: false,

  • You will also need to enable the config.trader section (I need to check if the keys in SECRET-api-keys.json are still mandatory if they are also put here):

config.trader = {
  enabled: true,
  key: 'AAAABBBBCCCCC',
  secret: 'DDDDDDDEEEEEFFFFFFF',
  username: '', // your username, only required for specific exchanges.
  passphrase: '' // GDAX, requires a passphrase.
}

Now you can launch your trading bot through the UI, or via the CLI, exactly the same way we did for the paperTrader: there is no extra option on the command line, we just modified the configuration to disable one module and enable another one:

node gekko –config your-config-file.js

Or with PM2 as we saw before.

Please, remember this is a very risky game. You will most probably lose money or assets. Standard strategies are provided by Gekko's creator as examples; they need to be tweaked, and this is not easy. Also, at startup, Gekko may directly try to sell or buy, even if the market is not good. This needs to be controlled and fixed by coding in the strategies. You need to understand what you do, and when to do it. I CAN NOT BE RESPONSIBLE FOR YOUR LOSSES.

Gekko Trading bot installation

Gekko installation (Linux)

Installing gekko is quite simple, everything is described on its homepage, thanks to Mike von Rossum, Gekko’s creator & maintainer.

On a standard Linux distro (I use a small Debian VM on an Intel NUC running vSphere acting as a personal server at home):

Install nodejs:

sudo apt-get install nodejs

Install git:

sudo apt-get install git

Install Pm2 (not mandatory but I like to use it to keep track of my various Gekko’s instances running):

sudo npm install pm2 -g 

Download Gekko:

git clone git://github.com/askmike/gekko.git -b stable
cd gekko

Install Gekko's dependencies (note: after installation, do NOT run npm audit with --force as suggested by npm; I can confirm it will break Gekko and you will have to redo the previous operations):

npm install --only=production

Install Gekko's broker module dependencies:

cd exchange
npm install --only=production && cd ..

Now, as I want to access Gekko’s UI from other computers in my home network, I need to configure a few things. Edit & change:

  • <your_gekko_install_dir>/web/vue/dist/UIconfig.js

api: {
  host: '127.0.0.1',
  port: 3000,
  timeout: 120000 // 2 minutes
},

to

api: {
  host: '0.0.0.0',
  port: 3000,
  timeout: 600000 // 10 minutes
},

and

ui: {
  ssl: false,
  host: 'localhost',
  port: 3000,
  path: '/'
},

to

ui: {
  ssl: false,
  host: 'x.x.x.x', // Set this to the IP of the machine that will run Gekko
  port: 3000,
  path: '/'
},

  • <your_gekko_install_dir>/web/vue/public/UIconfig.js and only change:

api: {
  host: '127.0.0.1',
  port: 3000,
  timeout: 120000 // 2 minutes
},

to

api: {
  host: '127.0.0.1',
  port: 3000,
  timeout: 600000 // 10 minutes
},

Hint: modifying the timeout is not in standard Gekko’s documentation. But it will be useful later.

As explained in standard Gekko’s configuration, this will allow you to access the Gekko UI by going to http://x.x.x.x:3000 in a web browser (of course change x.x.x.x with the IP of the machine that will run Gekko and that you defined in
<your_gekko_install_dir>/web/vue/dist/UIconfig.js as seen above).

Beware: Gekko is absolutely NOT secured. There are no authentication mechanisms in Gekko. This configuration should ONLY be activated on a trusted internal network. If you launch it this way on a computer/server reachable from the Internet, especially on port 3000, then ANYONE can access your Gekko.

Now launch gekko UI:

node gekko --ui

On http://x.x.x.x:3000 you should see something like this. Congratulations, you've just achieved the easiest part !

Now we will make it launchable through pm2:

  • In <your_gekko_install_dir>, create a new file called start_ui.sh
  • Put this inside and save it
#!/bin/bash 
rm ui_logs/*
pm2 start gekko.js --name gekko_ui --log-date-format="YYYY-MM-DD HH:mm Z" -e ui_logs/ui_err.log -o ui_logs/ui_out.log -- --ui max_old_space_size=8096

Make it executable:

chmod 755 start_ui.sh

Now

  • When you want to launch Gekko’s UI you have to use
<your_gekko_install_dir>/start_ui.sh
  • If you want to check Gekko’s UI state you can use: pm2 ls

  • If you want to check Gekko's UI logs you can use the command below (note the instance's id in the second column of pm2 ls; it also displays the name, gekko_ui, that we gave this instance in the start_ui.sh shell script):

pm2 log <id>
  • If you want to stop it you can use:
pm2 stop <id>
  • If you want to remove a stopped instance from pm2 list of process, you can use:
pm2 delete <id>

There are plenty more things available with pm2, please check its documentation.

Gekko installation (Windows 10)

Well, this part is quite simple as it consists of two short steps:

  1. Use Windows 10's ability to run a Linux subsystem to deploy a Debian or Ubuntu distro directly "runnable" inside Windows, using this documentation and then this one
  2. Once it is installed and running, you can come back to the beginning of this page and follow the exact same instructions !
