MongoDB is a NoSQL database with fully flexible index support and rich query capabilities.
This document explains how to install MongoDB (as root/sudo).
First of all, import the GPG key for the MongoDB APT repository on your system using the following command. This is required to verify packages before installation.
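A hedged example; the key below is the one published for the MongoDB 4.0 series, so use the key matching the version you plan to install:

```
wget -qO - https://www.mongodb.org/static/pgp/server-4.0.asc | sudo apt-key add -
```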
Then add the MongoDB APT repository URL to /etc/apt/sources.list.d/mongodb.list.
Ubuntu 18.04 LTS:
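For example, assuming MongoDB 4.0 on Ubuntu 18.04 (bionic); adjust the version path if you target a different release:

```
echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb.list
```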
After adding the required APT repositories, use the following commands to install MongoDB on your system. This will also install all dependent packages required by MongoDB.
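For example (the mongodb-org meta-package pulls in the server, shell, and tools):

```
sudo apt update
sudo apt install -y mongodb-org
```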
If you want to install a specific version of MongoDB, define the version number as follows:
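For example, pin every package to one release; 4.0.28 below is only a placeholder, substitute the version you actually need:

```
sudo apt install -y mongodb-org=4.0.28 mongodb-org-server=4.0.28 \
    mongodb-org-shell=4.0.28 mongodb-org-mongos=4.0.28 mongodb-org-tools=4.0.28
```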
After installation, MongoDB will start automatically. To start or stop MongoDB use an init script. For example:
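For example, on a systemd-based system (the service is usually called mongod; see the note below):

```
sudo systemctl start mongod
```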
And use the following commands to stop or restart the MongoDB service.
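For example, again assuming the mongod service name:

```
sudo systemctl stop mongod
sudo systemctl restart mongod
```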
Finally, use the below command to check the installed MongoDB version on your system.
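For example:

```
mongod --version
```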
And check the status with:
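For example, using the service name mongod (see the note below):

```
sudo service mongod status
```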
Important: Some versions have the service name as mongod and some have mongodb. If you get an error with the above command, use sudo service mongodb status instead.
Also, connect to MongoDB using the command line and execute some test commands to check that it is working properly.
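For example, a quick ping from the mongo shell should confirm the server responds:

```
# Run a trivial command against the server; a healthy instance returns { "ok" : 1 }
mongo --eval 'db.runCommand({ ping: 1 })'
```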
Warning: At this point it's crucial to set the default witness node to your own server (ideally running on localhost, see the config.yaml below) using peerplays set node ws://ip:port. If this step is skipped, the setup will not work, or at best will work with very high latency.
Since your Witness account is going to create and approve proposals automatically, you need to ensure that the Witness account is funded with PPY.
We now need to configure bos-auto:
The variables are described below:
The following options need to be set:
node: ws://localhost:8090. If not running a local installation, change this to any Testnet (Beatrice) API node.
network: beatrice. Only change this if you're not using this Testnet.
Important: Make sure you set a Redis password during the Redis installation.
Redis is an open source, in-memory data structure store, used as a database, cache and message broker.
This document explains how to install Redis (as root/sudo).
To install Redis run the following commands:
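For example, on Ubuntu (remember to set a password, as noted above):

```
sudo apt update
sudo apt install -y redis-server
# Set a password: add "requirepass <your-password>" to /etc/redis/redis.conf,
# then restart the service
sudo systemctl restart redis-server
```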
In this first step, we'll install everything we'll need going forward.
Note: Dependencies must be installed as root/sudo
Tip: Using virtualenv is a best practice for Python, but installation can also be done at a user or global level.
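For example, a virtual environment named env can be created and activated as follows; the env/bin path referenced later in this document assumes this name:

```
virtualenv -p python3 env
source env/bin/activate
```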
MongoDB is used for persistent storage within BOS.
For additional information on how to use MongoDB refer to tutorials on your distribution.
Important: Make sure that MongoDB is running reliably with automatic restart on failure.
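A minimal sketch of how this can be done with systemd, assuming the service name mongod (on some distributions it is mongodb):

```
# Start MongoDB at boot
sudo systemctl enable mongod
# Add a drop-in so the service is restarted automatically on failure
sudo mkdir -p /etc/systemd/system/mongod.service.d
printf '[Service]\nRestart=on-failure\nRestartSec=5\n' | \
    sudo tee /etc/systemd/system/mongod.service.d/restart.conf
sudo systemctl daemon-reload
sudo systemctl restart mongod
```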
Redis is used as an asynchronous queue for the python processes in BOS.
For additional information on how to install Redis, refer to your Linux distribution's documentation.
Important: Make sure that Redis is running reliably with automatic restart on failure, and that it runs without any disk persistence.
It is highly recommended that both daemons are started on start-up.
To start the daemons, execute:
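For example, assuming systemd and the service names mongod and redis-server:

```
# Enable both services at boot and start them immediately
sudo systemctl enable --now mongod redis-server
```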
Important: Common Issues:
Exception: Can’t save in background: fork or MISCONF Redis is configured to save RDB snapshots.
This indicates that either your queue is very full and the RAM is insufficient, or that your disk is full and the snapshot can’t be persisted.
Create your own Redis configuration file (https://redis.io/topics/config) and use it to deactivate RDB snapshotting and activate memory overcommit. See:
https://redis.io/topics/faq#background-saving-fails-with-a-fork-error-under-linux-even-if-i-have-a-lot-of-free-ram or https://stackoverflow.com/questions/19581059/misconf-redis-is-configured-to-save-rdb-snapshots/49839193#49839193
https://gist.github.com/kapkaev/4619127
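A minimal sketch of such a change, assuming the stock /etc/redis/redis.conf and the redis-server service name:

```
# Disable RDB snapshotting so Redis never tries to persist to disk
sudo tee -a /etc/redis/redis.conf <<'EOF'
save ""
stop-writes-on-bgsave-error no
EOF
# Allow the kernel to overcommit memory so background save forks cannot fail
sudo sysctl vm.overcommit_memory=1
echo "vm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf
sudo systemctl restart redis-server
```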
Exception: IncidentStorageLostException: localhost:27017: [Errno 111] Connection refused or similar.
This indicates that your MongoDB is not running properly. Check your MongoDB installation.
Note: bos-auto must be installed as a regular user (not root).
You can either install bos-auto via PyPI / pip3 (production installation) or via git clone (debug installation).
For production use, installing bos-auto via pip3 is recommended, but the git master branch is always the latest release as well, making both installations equivalent. A separate user is recommended.
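A production installation from PyPI, inside the activated virtual environment, looks roughly like this (assuming the package name bos-auto):

```
pip3 install bos-auto
```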
For debug use, check out the master branch from GitHub and install the dependencies manually.
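A sketch of the debug installation, assuming the repository URL and that it ships a requirements.txt:

```
git clone https://github.com/pbsa/bos-auto
cd bos-auto
pip3 install -r requirements.txt
```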
bos-auto is supposed to run inside the virtual environment. Either activate it beforehand, as above, or run the executables directly from the env/bin folder.
Important: If bos-auto is installed as root and not as a user, you'll likely get errors similar to the following:
For a production installation, upgrade to the latest version, including all dependencies, using:
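For example (again assuming the bos-auto package name on PyPI):

```
pip3 install --upgrade bos-auto
```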
For a debug installation, pull the latest master branch and upgrade the dependencies manually:
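For example, from within the cloned repository:

```
git pull
pip3 install -r requirements.txt --upgrade
```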
Next we need to go through the steps required to set up bos-auto properly.
After configuring bos-auto we need to spin it up to see if it works properly.
Bos-mint is a web-based manual intervention module that allows you to perform all sorts of manual interactions with the blockchain.
For more information see:
The isalive call should be used for monitoring. The scheduler must be running, and the default queue should have a low count (< 10). Here is an example of a positive isalive check:
The default configuration looks like the following and is (by default) stored in config.yaml:
Both the API and the worker make use of the same configuration file.
We need to provide the wallet passphrase so that the worker can propose changes to the blockchain objects according to the messages received from the data feed.
The messages sent to the API need to follow a particular message schema, which is defined in endpointschema.py.
Now that bos-auto has been configured we want to make sure it works correctly. To do this, we need to start two processes:
An endpoint that takes incident reports from the data proxy and stores them in MongoDB as well as issues work for the worker via Redis.
The worker then takes those incidents and processes them.
Note: It is recommended to run both via system services.
The commands shown are for the production installation; for the debug installation, replace “bos-auto” with “python3 cli.py”.
Note: Former installations also required running the scheduler as a separate process. This is no longer necessary; it is now spawned as a subprocess.
This is a basic setup that uses the Flask built-in development server; see Production Deployment below.
Important: Before executing the next command make sure that your node is set to the correct environment. For example, if the installation is for Testnet (Beatrice) run:
peerplays set node <Beatrice Node>
where <Beatrice Node> is any Beatrice API node.
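A minimal way to start the endpoint, assuming the api subcommand (check bos-auto --help for the exact options on your version):

```
bos-auto api
```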
After this, if it's set up correctly you'll see the following messages:
INFO | Opening Redis connection (redis://localhost/6379)
* Running on http://0.0.0.0:8010/ (Press CTRL+C to quit)
This means that you can now send incidents to http://0.0.0.0:8010/.
You can test that the endpoint is properly running with the following command:
If the endpoint is running, the API daemon will print the following line:
At this point, we are done with setting up the endpoint and can go on to setting up the actual worker.
Data proxies are interested in this particular endpoint as they will push incidents to it. This means that you need to provide them with your IP address as well as the port that you opened above.
For more information on Data Proxies see:
The endpoint has an isalive call that should be used for monitoring:
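For example, assuming the endpoint listens on localhost:8010 as shown above:

```
curl http://localhost:8010/isalive
```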
which produces an output like:
Of interest here are the listed versions and queue.status.default.count.
The count should be zero most of the time; it reflects how many unhandled incidents are currently in the cache.
Going into production mode, a Witness may want to deploy the endpoint via uWSGI, create a local socket, and hide it behind an SSL-enabled nginx that serves a simple domain, like https://dataproxy.mywitness.com/trigger, instead of an ip:port pair.
Important: At this point it's crucial to set the default Witness node to your own server (ideally running on localhost) using peerplays set node ws://ip:port. If this step is missed, the setup will not work or, at best, will work with very high latency.
Start the worker with the following commands:
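A minimal sketch, assuming the worker subcommand (check bos-auto --help) and the virtual environment created earlier:

```
source env/bin/activate   # skip if the environment is already active
bos-auto worker
```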
It will already try to use the provided password to unlock the wallet and, if successful, return the following text:
Nothing else needs to be done at this point.
Important: For testing, we highly recommend that you set the nobroadcast flag in config.yaml to True.
For testing, we need to throw a properly formatted incident at the endpoint. The following is an example of the file format:
Note: Because the incident data changes all the time and quickly goes out of date, the actual contents of this file are unlikely to work. At the time of testing, reach out to PBSA for up-to-date incident data.
Store them in a file called replay.txt and run the following call:
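For example, assuming the endpoint from above is listening on localhost:8010:

```
curl -X POST -H "Content-Type: application/json" \
     --data @replay.txt http://localhost:8010/trigger
```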
Note the trigger at the end of the endpoint URL.
This will show you the incident and a load indicator at 100% once the incident has been successfully sent to the endpoint.
Your endpoint should return the following:
And your worker should return something along the lines of (once for each incident above):
Tip: Each incident results in two work items, namely a bookied.work.process() as well as a bookied.work.approve() call.
The former does the heavy lifting and may produce a proposal, while the latter approves proposals that we have created on our own.
With the command line tool, we can connect to MongoDB and inspect the incidents that we inserted above:
Where [Begin Date] and [End Date] specify the date range to pull incident data from.
The output should look like:
It tells you that two incidents came in for that particular match, both proposing to create it. The status tells us that the incidents have been processed.
We can now read the actual incidents with:
And replay any of the two incidents by using:
Tip: For more information on BOS supported commands run:
bos-auto --help
or bos-incidents --help
Your worker should now be started.