This is the place for those who are interested in operating a Peerplays node. Discover detailed installation guides, node-specific information, guides for performing maintenance, and much more.
What are the Infrastructure Documents for?
The infrastructure documents describe Peerplays nodes and the process of working with a node. The concepts are organized into the categories listed below.
The Basics category describes node types, hardware requirements, the CLI Wallet, and first tokens. This section focuses on the essential details you need before starting to work with a node. Click the link below to learn more about the basic concepts.
The topics in this section explain the test environment, the steps involved in a manual installation, and enabling Elasticsearch on a node. Click the link below to learn more.
A Witness node can be installed in several ways: manually, with Docker, or from a GitLab artifact. Click the link below to learn more about the steps involved in each installation.
Click the link below to explore the installation guides for operating SONs.
The Bookie Oracle System, or BOS, is a unique decentralized sports feed oracle system originally designed for the BookiePro dApp. Click the link below to learn more about BOS.
The section below explains BOS installation, covering MongoDB, Redis, and the configuration of bos-auto, along with detailed instructions for installing BookieSports and MINT.
The Data Proxy serves as a middle man between the Data Feed Providers (DFPs) and the Bookie Oracle System (BOS) operated by the Witnesses.
The Data Proxy is the process through which all the data received by BOS is normalized and parsed into the required format.
Click the section below to learn more about:
Data Proxy introduction
How the Data Proxy works
How to set up a Data Proxy
How to create your own Data Proxy
BookiePro requires real-time data feeds in order to create the various sports, events, markets, etc. that are the basis of the sporting exchange. This creates a challenge: is the data reliable and accurate? It led to the introduction of the Couch Potato concept, in which a person inputs data directly into a portal/API which gets posted directly to BOS. The person here is the "data feed provider" and the API/portal is the "Data Proxy".
Click the section below to learn more about Couch Potato:
Introduction
Functional Requirement
User guide to create an account
Use of Database and API
Proxy payment consideration
There are different types of documents available to save you time and help you land on a specific topic.
Peerplays Public Docs Portal - the public documentation portal.
The Peerplays Witnesses bundle transactions into blocks and sign them with their signing key. Witnesses keep the blockchain alive by producing one block every three seconds. For example, if there are 20 Witnesses, each would produce one block every minute. To learn more, see the witness documentation.
Sidechain Operator Nodes - SONs facilitate the transfer of off-chain assets (like Bitcoin, Hive, or Ethereum tokens) between the Peerplays chain and the asset's native chain. These nodes often run the Peerplays node software and the node software of other chains. To learn more, see the SON documentation.
- To gather extensive knowledge about Peerplays.
- To learn how to operate a Peerplays node.
An explanation of the many types of Peerplays nodes.
All Peerplays nodes continually update an internal database representing "consensus state" by applying transactions that arrive in incoming blocks received from the network. Peerplays nodes communicate with each other using a decentralized Peer-to-Peer (P2P) networking protocol in order to share these blocks and transactions. Peerplays nodes are sometimes called "witness nodes" because they observe, or "witness", blocks and transactions from the network, and then validate, apply, and optionally share them with other nodes on the network. The difference between the node types lies in the number of services they are configured to offer to the network. These differences can affect how resource-intensive running the node is, and may affect which networking ports need to be exposed and what additional infrastructure (like DNS records) is needed or recommended to support the node. Some roles and configuration categories are described below:
As the name suggests, these nodes produce blocks for the network. They are run by elected "witnesses". A witness is a special account on the chain that has declared an intent to produce blocks, and has been elected via an on-chain voting process. Each witness node validates all blocks and transactions that it receives. The elected block producing nodes all take turns in bundling new transactions into blocks and broadcasting them to the network.
Block-producing witness nodes are often minimally-configured and do not offer additional services to the network (such as client-facing APIs). They often will not have DNS records nor will their locations or IP addresses be made publicly known. This helps protect the integrity of block production on the Peerplays network.
API nodes provide network services to client applications. They allow these client apps to inspect the state of the network, to broadcast new transactions, and other services. They often retain detailed account and market histories accessible through API calls, and other useful data for client apps that go beyond simple consensus state. However, they can vary in the amount of available history or extended data. In addition to participating in the Peer-to-Peer network for sharing and receiving blocks, these nodes listen on a designated port to expose the API that client applications use.
API nodes may be public-facing, or they may be deployed for personal or private business use. If public-facing, they will often be assigned a DNS record, and may additionally be configured behind a reverse proxy to enforce TLS-encrypted connections between client apps and the node. If the node is for private use, these extras may not be needed.
This is an API node that is maximally configured with a complete transaction history of all accounts.
Seed nodes accept incoming P2P connections from the network and relay blocks and transactions. They are usually the first nodes contacted by a freshly started node, and help those nodes get up to date and discover the rest of the network. They are the entry point into the network. Once a node has entered the network it will receive additional node addresses from its peers, so all nodes can connect to each other. A seed node runs the bare minimum services needed to participate in the P2P network, but it may also run additional services if so configured. Thus, a seed node may also be an API node. Seed nodes assist the network by recording and sharing blocks and by being a point of contact for other nodes on the P2P network.
Bookie Oracle System nodes - BOS nodes are required to operate the Bookie Oracle System to ensure the accuracy and decentralization of the data fed into the BookiePro application. The BOS node must be run on a separate server to the Witness node.
Sidechain Operator Nodes - SONs facilitate the transfer of off-chain assets (like Bitcoin, Hive, or Ethereum tokens) between the Peerplays chain and the asset's native chain. These nodes often run the Peerplays node software and node software of other chains.
The software used to run Witness, API (full), Seed, and SON nodes is named witness_node. All these node types are run with the same software. What makes these nodes different is how that software is configured and how it's used.
SONs will also require the use of software supplied by other chains, like Bitcoin Core for example.
BOS nodes use a collection of software known as the Bookie Oracle Suite.
SONs most likely will be running other nodes (like a Bitcoin node) which may require opening ports to operate on the sidechain. It is because of this that SON nodes should not be run in parallel (i.e. the same server) with Witness nodes.
Node server hardware requirements.
Details about Witness Nodes.
Details about SON Nodes.
Node Type | Description | Open Ports |
Block Producer | Elected by the community to produce blocks of validated transactions. | None |
API (Full) | Provides an API gateway for apps to interact with the Peerplays chain. Full nodes offer the whole transaction history for all accounts. | RPC |
Seed | Opening a P2P port allows new nodes to more readily perform the initial download of the Peerplays chain. | P2P |
BOS | BOS nodes are whitelisted by Witnesses to feed data to the BookiePro app. | SSL |
SON | Elected by the community to facilitate asset transfers between the Peerplays chain and sidechains. | Likely (see note) |
A guide for node operators
This document explains installing the CLI Wallet, setting it up during its first use, and the basics of running the wallet.
This reference doc contains the following commands:
suggest_brain_key
get_private_key_from_password
import_key
upgrade_account
create_vesting_balance
get_private_key
dump_private_keys
get_account
This reference doc contains the following commands:
create_witness
update_witness
get_witness
vote_for_witness
This reference doc contains the following commands in section 1:
create_son
update_son
update_son_vesting_balance
get_son
vote_for_son
update_son_votes
list_sons
list_active_sons
request_son_maintenance
cancel_request_son_maintenance
get_son_wallets
get_active_son_wallet
get_son_wallet_by_time_point
This reference doc contains the following commands in section 2:
add_sidechain_address
delete_sidechain_address
get_sidechain_address_by_account_and_sidechain
get_sidechain_addresses_by_account
get_sidechain_addresses_by_sidechain
get_sidechain_addresses_count
In Graphene-based blockchains like Peerplays, hierarchical permissions (fund transfers, sending memos, etc.) are separated into different roles, with a public-private key-pair for each role. There are three types of roles: OWNER, ACTIVE, and MEMO. These keys are generated using a brain key.
The password auto-generated by the wallet is a brain key, and the brain key is used to generate the various key-pairs in the Wallet Import Format (WIF).
WIF = SHA256( username + role + brain key)
example:
OWNER WIF = SHA256(peerplays user name + owner + brain key)
The above brain key belongs to the user t3st123
Note that the above password is not the private or public key, so we need to create the key-pairs to be used in the cli_wallet using the method get_private_key_from_password.
To generate the OWNER keypair, use the following command
Result:
unlocked >>> get_private_key_from_password t3st123 owner RAjoOuuSX9N2semIlQOM52iHCQMUrDZPqnpPUDZNpMu2HSYj1gQi
[
  "PPY5xmkfRJhsG54kxNpoBtWqnEpScGBxczooapTbCpmetFAmzUvJ1",
  "5KPHKeuqRyNfuc32LGDzc6tqcCPzyLgfguQzN4Xkrys3VfMxtjB"
]
ACTIVE & MEMO keys also can be obtained in the same way.
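The formula above can be sketched with standard tools. This shows only the raw SHA256 step; the real wallet additionally applies a version byte, checksum, and Base58 encoding to produce the final WIF string, so the output below is not a usable key.

```shell
# Raw seed hash behind the WIF derivation (illustrative only):
# SHA256( username + role + brain key )
printf '%s%s%s' 't3st123' 'owner' 'RAjoOuuSX9N2semIlQOM52iHCQMUrDZPqnpPUDZNpMu2HSYj1gQi' \
  | sha256sum | cut -d ' ' -f 1
```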
Hardware requirements for installing and operating one of the various Peerplays nodes.
Depending on the configuration, network, and other installed components on your server, the hardware requirements will vary. Here are the requirements of the most common configurations of Peerplays nodes.
The following table lists what should be considered the minimum requirements for running a Peerplays node on Mainnet:
A Witness node on Mainnet requires a baseline of: 4 CPU Cores, 16GB RAM, and 100GB storage. If you don't intend on running a Witness node, but only a SON or Seed node, the baseline drops a little to: 2 CPU cores, 16GB RAM, and 100GB storage.
As of June 2021, the Peerplays chain requires about 25GB of storage space (on Mainnet). The 100GB storage requirement is set high to account for increased chain usage over time.
The following table lists what should be considered the minimum requirements for running a Witness node on Testnet:
For SONs: See section 3 for details on sidechain node requirements.
A Witness node on Testnet requires a baseline of: 4 CPU cores, 8GB RAM, and 50GB storage. If you don't intend on running a Witness node, but only a SON or Seed node, the baseline drops a little to: 2 CPU cores, 8GB RAM, and 50GB storage.
As of June 2021, the Peerplays chain requires about 20GB of storage space (on Testnet). The 50GB storage requirement is set high to account for increased chain usage over time, though not as much as Mainnet.
SONs often run nodes for other chains to enable the sidechain functionality. These other nodes, like Bitcoin or Ethereum nodes, will require their own storage on top of what is required for Peerplays. It is recommended to research the requirements of any other nodes you may need to run to operate a SON.
The following table lists what should be considered the minimum requirements for running a Bitcoin SON on Mainnet (note the marked increase in storage requirements for self-hosting a Bitcoin node):
A SON on Mainnet requires a baseline of: 2 CPU Cores, 16GB RAM, and 100GB storage (as per section 1, above). On top of this baseline, if you self-host a Bitcoin node with reduced storage, an additional 50GB storage is required. If you self-host a Bitcoin node with full storage, an additional 700GB storage is required.
The following table lists what should be considered the minimum requirements for running a Bitcoin SON on Testnet (note the marked increase in storage requirements for self-hosting a Bitcoin node):
A SON on Testnet requires a baseline of: 2 CPU cores, 8GB RAM, and 50GB storage (as per section 2, above). On top of this baseline, if you self-host a Bitcoin node with reduced storage, an additional 50GB storage is required. If you self-host a Bitcoin node with full storage, an additional 700GB storage is required.
In addition to the above, if you plan to operate a self-hosted Bitcoin node (for SONs), you should look into getting an unmetered connection, a connection with high upload limits, or a connection you regularly monitor to ensure it doesn’t exceed its upload limits. It’s common for full Bitcoin nodes on high-speed connections to use 200 gigabytes upload or more per month. Download usage is around 20 gigabytes per month, plus around an additional 350 gigabytes the first time you start your node.
When installing nodes (Peerplays or otherwise) you may find it handy to provision a server with higher resources during the installation. Once your nodes are installed and synced with their networks you can then power the server down and provision it with lower resources to operate with. This is possible with cloud providers like Amazon AWS or Google Cloud. This can help speed up the installation process but cost less to run overall.
These requirements are as of the time of writing, so consider deploying a server with specs slightly higher than the ones listed above in order to "future proof" your server in case the minimum requirements grow in the future.
Witness: An independent server operator which validates network transactions.
Witness Node: Nodes with a closed RPC port. They don't allow external connections. Instead these nodes focus on processing transactions into blocks.
API Node: Nodes with an open RPC port. They provide a gateway to blockchain functions by exposing the API.
Full Node: An API node which provides complete transaction histories of all accounts accessible through API calls.
Seed Node: Nodes that provide the ability for other nodes to download historical data.
SON: Sidechain Operator Node - An independent server operator which facilitates the transfer of off-chain assets (like Bitcoin or Ethereum tokens) between the Peerplays chain and the asset's native chain.
Bitcoin node types: Just like Peerplays nodes, Bitcoin nodes can provide different levels of service:
Self-Hosted Bitcoin nodes are running on your own server and will therefore have a bigger impact on hardware requirements.
Reduced storage means the node doesn't save the entire Bitcoin chain.
Full storage means the node stores the whole Bitcoin chain (almost 400GB and growing daily).
External Bitcoin nodes are running on someone else's server. You may be able to connect to public or private Bitcoin nodes to run your SON.
Mainnet: The live Peerplays environment, named Alice, is the publicly running blockchain on which all transactions take place.
Testnet: One of any development environments for the Peerplays blockchain. The official public testnet is operated by the Peerplays witnesses. More testnets exist for development purposes, like Gladiator for the testing of SONs.
CLI commands that all node operators use.
Suggests a safe brain key to use for creating your account keys. A brain key is a long passphrase that provides enough entropy to generate cryptographic keys. This function will suggest a suitably random string that should be easy to write down (and, with effort, memorize).
The GUI Wallet generates a brain key for your password when creating a new account. But in the case of the GUI Wallet, rather than a long passphrase (i.e. set of words), it generates a string of 52 random letters (a-z & A-Z) and numbers (0-9).
For example, the suggest_brain_key method could give you:
"EDIFICE PALLID ANOESIA STRIDE PARREL SPORTY AXIFORM INOPINE SWOONED TONETIC CORKER OATEN PUSHER MIN CERN TACT"
And the GUI Wallet could produce the password of:
EyqFQDRpydZJDgTV8EJIcpmPLhfmdq6Yjbo45pNsBe7wSJSpvq0v
Although they look different, both are brain keys and will work for generating public and private keys.
Parameters
Example Call
Return Format
Example Successful Return
Note that the returned "pub_key" value will be prefixed with:
"PPY" in mainnet (Alice), as per the example above
"TEST" in the public testnet (Beatrice)
Returns the public-private key-pair for the owner, active, or memo role for a given account and its password.
Parameters
Example Call
Return Format
Example Successful Return
Imports the private key for an existing account for use in the CLI Wallet. The private key must match either an owner key or an active key for the named account.
Parameters
Example Call
Return Format
Example Successful Return
Upgrades an account to prime status. This makes the account holder a 'lifetime member'. This is necessary for the account to become a Witness or SON.
Parameters
Note that this operation currently costs 5 PPY. That fee may change in the future.
Example Call
Return Format
Example Successful Return
Creates a vesting deposit owned by the given account. This is used to supply vested assets to operate certain nodes, such as a SON node. In the case of SONs, 100 PPY (at the time of writing) must be set aside in two separate vesting deposits (50 PPY each) to dedicate to the operation of the SON node transactions.
Parameters
Example Call
Return Format
Get the WIF private key corresponding to a public key. The private key must already be in the wallet.
Parameters
Example Call
Return Format
Example Successful Return
Displays all private keys owned by the wallet. The keys are printed in WIF format. You can import these keys into another wallet using import_key().
Parameters
Example Call
Return Format
Example Successful Return
Returns information about the given account.
Parameters
Example Call
Return Format
Example Successful Return
For all nodes: The memory requirements shown in the table above are adequate to operate the node. Building and installing the node from source code (as with the manual install) will require more memory, and you may run into errors during the build and install process if the system memory is too low. See the manual installation guide for more details. Installations using Docker or GitLab artifacts don't have this limitation because they use pre-built binaries.
For SONs: See section 3 for details on sidechain node requirements.
See the glossary about Full Nodes, SON Nodes, and Bitcoin node types.
For all nodes: The memory requirements shown in the table above are adequate to operate the node. Building and installing the node from source code (as with the manual install) will require more memory, and you may run into errors during the build and install process if the system memory is too low. See the manual installation guide for more details. Installations using Docker or GitLab artifacts don't have this limitation because they use pre-built binaries.
See the glossary about Full Nodes, SON Nodes, and Bitcoin node types.
As an example, you could shoot the moon and start up a server with 8 CPU cores and 64GB memory to fly through the build and install process, then stop the server and pick an instance with a more reasonable 4 CPU cores and 16GB memory to run the node. In fact, if you have a process to manage such installs or updates, you can use this method of changing resources for all your server maintenance without service outages.
Bitcoin Node Type | CPU | Memory | Storage | Bandwidth | OS |
Self-Hosted, Reduced Storage | 2 Cores | 16GB | 150GB SSD | 1 Gbps | Ubuntu 18.04 |
Self-Hosted, Full Storage | 2 Cores | 16GB | 800GB SSD | 1 Gbps | Ubuntu 18.04 |
External Bitcoin node | 2 Cores | 16GB | 100GB SSD | 1 Gbps | Ubuntu 18.04 |
Bitcoin Node Type | CPU | Memory | Storage | Bandwidth | OS |
Self-Hosted, Reduced Storage | 2 Cores | 8GB | 100GB SSD | 1 Gbps | Ubuntu 18.04 |
Self-Hosted, Full Storage | 2 Cores | 8GB | 750GB SSD | 1 Gbps | Ubuntu 18.04 |
External Bitcoin node | 2 Cores | 8GB | 50GB SSD | 1 Gbps | Ubuntu 18.04 |
name | data type | description | details |
ℹ This command has no parameters! | n/a | n/a | n/a |
name | data type | description | details |
account | string | The account name we're creating keys for. | no quotes required. |
role | string | The role we're creating keys for. One of: owner, active, memo. | no quotes required. |
password | string | A brain key. It might be the password provided by the GUI wallet, or a brain key obtained from the suggest_brain_key command. | quotes required if there are spaces! |
name | data type | description | details |
account_name_or_id | string | The name or id of the account that owns the key. | no quotes required. |
wif_key | string | The private key. | no quotes required. |
name | data type | description | details |
name | string | The name or id of the account to upgrade. | no quotes required. |
broadcast | bool | true to broadcast the transaction on the network. | n/a |
name | data type | description | details |
owner_account | string | The name or id of the vesting balance owner and creator. | no quotes required. |
amount | string | The amount to vest. | In nominal units. For example, enter 0.5 for half of a PPY. |
asset_symbol | string | The symbol of the asset to vest. | no quotes required. |
vesting_type | vesting_balance_type | One of | no quotes required. |
broadcast | bool | true to broadcast the transaction on the network. | n/a |
name | data type | description | details |
pubkey | public_key_type | The public key you wish to get the private key for. | no quotes required. |
name | data type | description | details |
ℹ This command has no parameters! | n/a | n/a | n/a |
name | data type | description | details |
account_name_or_id | string | The name or id of the account to provide information about. | No quotes required. |
Node Type | CPU | Memory | Storage | Bandwidth | OS |
Witness | 4 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 18.04 |
API (Full) | 4 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 18.04 |
BOS | 4 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 18.04 |
Seed | 2 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 18.04 |
SON | 2 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 18.04 |
Node Type | CPU | Memory | Storage | Bandwidth | OS |
Witness | 4 Cores | 8GB | 50GB SSD | 1Gbps | Ubuntu 18.04 |
API (Full) | 4 Cores | 8GB | 50GB SSD | 1Gbps | Ubuntu 18.04 |
BOS | 4 Cores | 8GB | 50GB SSD | 1Gbps | Ubuntu 18.04 |
Seed | 2 Cores | 8GB | 50GB SSD | 1Gbps | Ubuntu 18.04 |
SON | 2 Cores | 8GB | 50GB SSD | 1Gbps | Ubuntu 18.04 |
There are three types of keys on chain, and they can be generated using the CLI wallet. The types of keys are:
Active
Owner
Memo
Follow the steps below to generate keys in the wallet.
In a new command line window, we can access the cli_wallet program after all the blocks have been downloaded from the chain. Note that "your-password-here" is a password that you're creating for the cli_wallet and doesn't necessarily have to be the password you used while creating Peerplays account.
The CLI wallet will show unlocked >>> when successfully unlocked.
A list of CLI wallet commands is available here: https://devs.peerplays.tech/api-reference/wallet-api/wallet-calls
The below command will return an array with your owner key in the form of ["PPYxxx", "xxxx"] / ["TESTxxx", "xxx"].
Note that the "created-username" and "created-password" used here are the username and password from the Peerplays-DEX account created.
The second value in the returned array is the private key of your owner key. Now we'll import that into the cli_wallet.
The below command will return an array with your active key in the form of ["PPYxxx", "xxxx"] / ["TESTxxx", "xxx"].
Note that the "created-username" and "created-password" used here are the username and password from the Peerplays-DEX account created.
The second value in the returned array is the private key of your active key. Now we'll import that into the cli_wallet.
The keys that begin with "PPY"/"TEST" are the public keys.
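Putting the steps above together, a session might look like the following sketch. The username, password, and key values are placeholders rather than real credentials, and the keys returned will differ for every account.

```
>>> unlock your-password-here
unlocked >>> get_private_key_from_password created-username active created-password
["PPY6PlaceholderPublicKey...", "5JPlaceholderPrivateKey..."]
unlocked >>> import_key created-username 5JPlaceholderPrivateKey...
true
```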
The CLI commands include voting for various sidechain operators.
This document explains SON voting for multiple sidechain listeners like Bitcoin, Ethereum, and Hive, and also provides the command that allows the user to change the votes for several SON accounts in a single call.
The latest update to the wallet commands allows the user to choose a sidechain (Bitcoin, Hive, or Ethereum) when voting. It allows you to vote for any active SON on this list. Each account's vote is weighted according to the number of PPY owned by that account at the time the votes are tallied. An account can publish a list of all SONs they approve, and the CLI allows you to add or remove SONs from this list.
The CLI command is given below:
Parameters
Syntax
Example Call
The following examples showcase voting and revoking scenarios for various sidechains:
The account account01 votes for the account sonaccount01 to become an active SON on the Bitcoin sidechain.
The account account01 revokes its vote for sonaccount01 on the Bitcoin sidechain.
The account sonaccount01 votes for itself to become an active SON on the Ethereum sidechain.
The account sonaccount01 revokes its vote for itself on the Ethereum sidechain.
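Assuming vote_for_son takes the voting account, the SON account, the sidechain, an approve flag, and a broadcast flag (run help vote_for_son in your wallet to confirm the exact signature), the four scenarios above might look like:

```
unlocked >>> vote_for_son account01 sonaccount01 bitcoin true true
unlocked >>> vote_for_son account01 sonaccount01 bitcoin false true
unlocked >>> vote_for_son sonaccount01 sonaccount01 ethereum true true
unlocked >>> vote_for_son sonaccount01 sonaccount01 ethereum false true
```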
The signed transaction changing your vote for the given SON.
This command allows you to add or remove one or more SONs from this list in one call. An account can publish a list of all SONs they approve. Each account's vote is weighted according to the number of core asset shares owned by that account at the time the votes are tallied. When you are changing your votes on several SONs, this may be easier than multiple vote_for_son and set_desired_witness_and_committee_member_count calls.
The CLI command is given below:
Syntax
Example call
The account account05 votes for the accounts sonaccount01, sonaccount02, sonaccount03, sonaccount04, and sonaccount05, and revokes its votes for the accounts sonaccount06 and sonaccount07. The total number of votes for active SONs is 5 for the Hive network.
The account eth-6-son votes for the accounts eth-2-son, eth-3-son, eth-5-son, and eth-6-son to become active SONs on Ethereum, and no account is revoked from the sidechain. The total number of votes for active SONs is 5 for the Ethereum network.
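Assuming update_son_votes takes the voting account, a list of SONs to approve, a list of SONs to reject, the sidechain, the desired number of active SONs, and a broadcast flag (run help update_son_votes to confirm), the Hive scenario above might look like:

```
unlocked >>> update_son_votes account05 ["sonaccount01", "sonaccount02", "sonaccount03", "sonaccount04", "sonaccount05"] ["sonaccount06", "sonaccount07"] hive 5 true
```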
The signed transaction changing your votes for the given SONs.
Note that you cannot vote against a SON; you can either vote for the SON or abstain from voting for it.
CLI commands that SON operators use.
Creates a SON object owned by the given account. An account can have at most one SON object.
Parameters
Example Call
Return Format
Example Successful Return
Update a SON object owned by the given account.
Parameters
Example Call
Return Format
Example Successful Return
Updates the vesting balances associated with a given SON.
Parameters
Example Call
Return Format
Example Successful Return
Returns information about the given SON.
Parameters
Example Call
Return Format
Example Successful Return
Vote for a given SON. An account can publish a list of all SONs they approve of. This command allows you to add or remove SONs from this list. Each account's vote is weighted according to the number of PPY owned by that account at the time the votes are tallied. Note that you can't vote against a SON, you can only vote for the SON or not vote for the SON.
Parameters
Example Call
Return Format
Change all your votes for SONs in one transaction. This can add and remove votes, and set the number of SONs you think should be active too.
You cannot vote against a SON, you can only vote for the SON or not vote for the SON.
Parameters
Example Call
Return Format
Example Successful Return
Lists all registered SONs, active or not.
Parameters
Example Call
List all active SONs.
Parameters
Example Call
Modify the status of the SON owned by the given account to maintenance.
Parameters
Example Call
Modify the status of the SON owned by the given account back to active.
Parameters
Example Call
This will display the wallet information for all registered SONs. You can specify the maximum number of wallets to return.
Parameters
Example Call
This will display the wallet information for the current active SON.
Parameters
Example Call
This will display the wallet information for the SON that was active at a specific date and time.
Parameters
Example Call
This command allows a user to register two Bitcoin addresses: one used to create their deposit address, and one that will be used for their withdraw address. Collectively these are a "sidechain address".
Parameters
Example Call
add_sidechain_address mypeerplays-account bitcoin 02cf1b2c34eed7537a63eb5e86c914b6c5f641d87f07798dd777773d96c4df82e9 022bccebf0f97231c1a499ed2145f744444b2df51d24e8ba71016ebd186bec2ab9 1KTE52KRoYf8G3SPJtKUufU9tRansAGyud true
This will delete a sidechain address that was previously registered with the add_sidechain_address command. Only one sidechain address can exist per user and sidechain. (A sidechain address in the case of Bitcoin consists of both a deposit and a withdraw address.)
Parameters
Example Call
This returns a registered sidechain address for a given account and sidechain.
Parameters
Example Call
This returns all the registered sidechain addresses for a given account.
Parameters
Example Call
This returns all the registered sidechain addresses for a given sidechain.
Parameters
Example Call
This returns the number of registered sidechain addresses.
Parameters
Example Call
Configure your server to start your node on system boot-up
There are a few methods that can be used to start up a node on system boot-up. Automating the node to start when the server starts will help minimize downtime, allow the program to run in the background, and aid in making updates to the node software.
In this tutorial it's assumed that your node was installed at /usr/local/bin
. Please ensure the directories you use match your install. For example, programs in /usr/local/bin
can be run without specifying the directory. But for the script to run programs located in other directories you'll need to specify the location explicitly, like /home/ubuntu/src/peerplays/programs/witness_node/witness_node
.
For nodes installed with Docker, you'll simply need the location of the Docker shell script file (/home/ubuntu/peerplays-docker/run.sh
).
Making a shell script with logging is a good place to start. You'll be able to use this script to start up the node.
First make a log file to store the outputs of the witness_node
program.
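For example, a dedicated log directory and file could be created like this (the paths are assumptions; use whatever location suits your setup):

```shell
# Create a directory and an empty log file for witness_node output.
# The paths below are examples only.
mkdir -p "$HOME/peerplays-logs"
touch "$HOME/peerplays-logs/witness.log"
```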
Find a good place to store the script file. For this tutorial, let's give it its own directory. Then create a file named start.sh
.
Use the text editor of your choice (nano comes with Ubuntu) to create the start.sh
file as follows (please select the method which you used to install the node):
Depending on where the programs were installed, you might have to specify the file location explicitly. For example:
In the case of Docker, we don't have to output the logs to another file because we're already maintaining the logs. You can view them with:
Save and exit the file. Now you'll set the file permissions.
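As a sketch, the whole sequence could look like the following. The script directory, log path, and witness_node location are assumptions for illustration; match them to your own install:

```shell
# Create a directory for the script and write a start.sh that
# appends witness_node output to a log file.
mkdir -p "$HOME/peerplays-scripts"
cat > "$HOME/peerplays-scripts/start.sh" <<'EOF'
#!/bin/bash
# Run the node; append stdout and stderr to the log file.
/usr/local/bin/witness_node >> /home/ubuntu/peerplays-logs/witness.log 2>&1
EOF

# Make the script executable so the service or cron job can run it.
chmod +x "$HOME/peerplays-scripts/start.sh"
```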
You'll only need to use one method to ensure your node starts at system boot. This tutorial will cover two options you can use:
Using a system service with Systemd
Using a cron job with crontab
Setting up a service using systemd on Ubuntu is the preferred method of auto-starting your node. It allows for greater visibility of the status of the service. We'll make a service file that uses the shell script.
Now that you have the shell file good to go you'll create a service file. Navigate to /etc/systemd/system
and create a file named peerplays.service
as below.
Inside the peerplays.service
file you'll enter:
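As a sketch, a minimal unit file could look like the following. The Description, User, and ExecStart path are assumptions; point ExecStart at your own start.sh:

```ini
[Unit]
Description=Peerplays witness node
After=network.target

[Service]
Type=simple
User=ubuntu
ExecStart=/bin/bash /home/ubuntu/peerplays-scripts/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Restart=on-failure asks systemd to relaunch the node if it exits unexpectedly, which also helps minimize downtime.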
Save the file and quit.
Make sure you don't get any errors.
If your node is running, stop it with ctrl + c
, then start it back up with the service.
Lastly, check the log file to ensure the node is running properly.
Success!
You're all done if you've chosen to auto-start your node with systemd. No need for cron!
Cron jobs are simple to set up. If all you need is to ensure that your node starts when your system boots, a cron job is good enough.
If this is the first time you've used crontab on your machine, you'll be prompted to pick a text editor.
Crontab will open a file with some comments which explain how to configure a cron job. All you'll need to do is to specify the following at the end of the file:
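For example, a single @reboot entry pointing at the start script (the path is an assumption; use the location of your own start.sh):

```
@reboot /home/ubuntu/peerplays-scripts/start.sh
```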
Save and quit the file. Now your script will execute whenever your system boots.
In some cases, the crond service needs to be enabled on boot for the configuration to function.
To check if the crond service is enabled, use: sudo systemctl status cron.service
To enable this service, use: sudo systemctl enable cron.service
Success!
You're all done if you've chosen to auto-start your node with cron. No need for systemd!
Node: The general term for the software that an independent server operator runs to perform some service for the network to which it belongs. In the case of Peerplays, that means validating network transactions, facilitating sidechain asset transfers, providing a gateway to on-chain data, or supplying / validating external data for dapps.
System service (Systemd): On Linux-based systems (Peerplays nodes require Ubuntu), systemd is a system and service manager. In essence, it's an init system used to bootstrap user space and manage user processes. Systemd is the name of the program.
Cron job (crontab): A time-based job scheduler in Unix-like operating systems. Users who set up and maintain software environments use cron to schedule jobs (commands or shell scripts) to run periodically at fixed times, dates, or intervals. Crontab (cron table) is the file that cron uses to schedule tasks.
There will be many occasions when a node has to be updated with new or modified features. These software updates can be categorized as "soft forks" or "hard forks".
A soft fork is a software update that is compatible with earlier versions, in other words it’s backward compatible. A soft fork contains no new operation or updates to existing operations on the blockchain.
Generally a soft fork provides an update to an existing feature that isn't relevant to core blockchain operations.
A “hard fork” is a software update that isn’t backwards compatible, so any blocks coming after the activation of the software update will have to follow the new rules in order to be considered valid.
A hard fork is required whenever there is a need to introduce new operations or update existing operations on the blockchain. Each hard fork has a set date/time by which nodes must be updated, and all Witnesses are expected to finish the update before that deadline.
For example, if an update was released that required a hard fork and the hard fork date/time was set to be Jan 01, 2020 12:00EST, then every witness node must be updated before that date/time.
To update a Witness node use the following steps:
The current blockchain witness_node
binary (.exe) should be backed up in such a way that if there is a serious problem with the update, the node can be rolled back easily.
Each new release will be published, and tagged, in the Peerplays public repository ready for download.
For example, the following is the code for test release 1.4.4:
Note: The above release is just an example; each release will have its own tag / link.
Download the code from the provided tag / link.
After the Witness node has been built / compiled it needs to be started and the data replayed.
Run the following command:
If there are any issues during this step then a data resync should be run instead to download blocks from the seed nodes.
Finally the active node needs to be swapped for the newly built node.
The first Peerplays account can be created in a flash using the Peerplays DEX by following the steps in the document below.
Click the URL below to create an account and then log in to the Peerplays DEX.
Mainnet Peerplays DEX access
Click the below link to use the Main-net DEX:
Testnet Peerplays DEX access
Click the below link to use the Test-net DEX:
Choose the mainnet or testnet login link based on your requirement.
Choose an account name to register. If the name already exists, you will be notified and must choose a different name.
There is an elevated fee for “premium” account names that Peerplays DEX won’t cover, so be sure to pick a name with at least one numeral, hyphen, or dot.
Peerplays DEX will generate a highly random password for your account; there is no option to choose your own password.
Please save the auto-generated password in a safe location, as the account cannot be recovered without it.
In the backend, the private key will be generated using the account name and account password along with a few other details. This private key plays a vital role in controlling the account.
The account can be imported into the cli_wallet using the private key. There are two ways to obtain the private key:
Log in to NEX with your account name and password. Click the Settings option in the right pane and select the Key Management tab.
Now, enter the password and select the key(s) to be generated, i.e., Owner, Active, and/or Memo.
Next, click the Let's go button to generate the keys. Click the reveal icon to view the keys; there is an option to copy each key.
To download the keys, click the Download Generated Keys option and save the keys for future reference.
The key can also be generated in the cli_wallet using the account name and password.
Syntax : get_private_key_from_password account_name role password
account_name - Name of the account
role - Active/Owner/Memo
password - account's password
Example:
After getting the Active key that controls the account, it can be imported into cli_wallet using the command below:
Syntax: import_key account_name_or_id wif_key
account_name_or_id - name of the account/id
wif_key - the active, owner, or memo private key (WIF format)
Example:
Use the gethelp set_password and gethelp unlock commands to learn more about setting the wallet password and unlocking the wallet.
cli_wallet “wallet password” is NOT the same thing as an account password. The cli_wallet command maintains a wallet file that stores as many private keys for as many accounts as you wish to add to it. The wallet password is an encryption password used to protect this file. On the other hand, an account password is a password used in the derivation of the various private keys that control a particular account.
Follow this guide to build the release from .
We'll need an account as the basis of creating a new Witness. If you don't have an existing account, the easiest way to create one is to use the .
Peerplays provides two networks on which the user may create an account: mainnet and testnet. Use a testnet account to learn and become familiar with the operations of a node. Once you have learned the process and are ready to work on the real Peerplays network, use a mainnet account. The User Guide will help you navigate Peerplays DEX and learn about its features and options in detail.
| name | data type | description | details |
|---|---|---|---|
| voting_account | string | the name or id of the account who is voting with their shares | No quotes required |
| son | string | the name or id of the SON's owner account | No quotes required |
| sidechain | sidechain_type | the name of the sidechain - ethereum, hive or bitcoin | No quotes required |
| approve | bool | `true` if you wish to vote in favor of that SON, `false` to remove your vote in favor of that SON | n/a |
| broadcast = false | bool | `true` if you wish to broadcast the transaction | n/a |
| name | data type | description | details |
|---|---|---|---|
| voting_account | string | the name or id of the account who is voting with their shares | No quotes required |
| sons_to_approve | `std::vector<std::string>` | the names or ids of the SON owner accounts you wish to approve (these will be added to the list of SONs you currently approve). This list can be empty. | Quotes required |
| sons_to_reject | `std::vector<std::string>` | the names or ids of the SON owner accounts you wish to reject (these will be removed from the list of SONs you have approved). This list can be empty. | Quotes required |
| sidechain | sidechain_type | the name of the sidechain - ethereum, hive or bitcoin | No quotes required |
| desired_number_of_sons | uint16_t | the number of active SONs the user wishes to vote for / suggest for the sidechain | Mention the value in numbers |
| broadcast = false | bool | `true` if you wish to broadcast the transaction | n/a |
| name | data type | description | details |
|---|---|---|---|
| owner_account | string | The name or id of the account which is creating the SON. | No quotes required. |
| url | string | A URL to include in the SON record in the blockchain. Clients may display this when showing a list of SONs. | May be blank. |
| deposit_id | vesting_balance_id_type | Vesting balance id for the SON deposit. | This is the `son` vesting balance. |
| pay_vb_id | vesting_balance_id_type | Vesting balance id for the SON pay_vb. | This is the `normal` vesting balance. |
| sidechain_public_keys | flat_map | The new set of sidechain public keys. | n/a |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| name | data type | description | details |
|---|---|---|---|
| owner_account | string | The name of the SON's owner account. Also accepts the ID of the owner account or the ID of the SON. | No quotes required. |
| url | string | Same as for create_son. The empty string makes it remain the same. | n/a |
| block_signing_key | string | A new signing key to replace the currently set signing key. | n/a |
| sidechain_public_keys | flat_map | The new set of sidechain public keys. An empty string makes it remain the same. | n/a |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| name | data type | description | details |
|---|---|---|---|
| owner_account | string | The name or id of the SON account owner, or the id of the SON. | No quotes required. |
| new_deposit | vesting_balance_id_type | A vesting balance id that will replace the currently set `son` vesting balance. | Optional |
| new_pay_vb | vesting_balance_id_type | A vesting balance id that will replace the currently set `normal` vesting balance. | Optional |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| name | data type | description | details |
|---|---|---|---|
| owner_account | string | The name or id of the SON account owner, or the id of the SON. | No quotes required. |
| name | data type | description | details |
|---|---|---|---|
| voting_account | string | The name or id of the account who is voting with their PPY. | No quotes required. |
| son | string | The name or id of the SON's owner account. | No quotes required. |
| approve | bool | `true` if you wish to vote in favor of that SON, `false` to remove your vote in favor of that SON. | n/a |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| name | data type | description | details |
|---|---|---|---|
| voting_account | string | The name or id of the account who is voting with their PPY. | No quotes required. |
| sons_to_approve | `std::vector<std::string>` | An array of SON names or ids that you had not previously voted for which you wish to add your vote. | This can be empty. |
| sons_to_reject | `std::vector<std::string>` | An array of SON names or ids that you had previously voted for which you wish to remove your vote. | This can be empty. |
| desired_number_of_sons | uint16_t | The number of SONs that you think the network should have. You must vote for at least this many SONs. You can set this to 0 (zero) to abstain from voting on the number of SONs. | n/a |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| name | data type | description | details |
|---|---|---|---|
| lowerbound | string& | The name of the first SON to return. If the named SON does not exist, the list will start at the SON that comes next. | Use `""` to start at the beginning. |
| limit | uint32_t | The maximum number of SONs to return. | Max of 1000. |
| name | data type | description | details |
|---|---|---|---|
| No parameters! | n/a | n/a | n/a |
| name | data type | description | details |
|---|---|---|---|
| owner_account | string | The name or id of the account who owns the SON. | No quotes required. |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| name | data type | description | details |
|---|---|---|---|
| owning_account | string | The name or id of the account who owns the SON. | No quotes required. |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| name | data type | description | details |
|---|---|---|---|
| limit | uint32_t | The maximum number of results to return. | n/a |
| name | data type | description | details |
|---|---|---|---|
| No parameters! | n/a | n/a | n/a |
| name | data type | description | details |
|---|---|---|---|
| time_point | time_point_sec | The date and time. Formatted like this: `"2020-10-31T13:43:39"` | Quotes are required! |
| name | data type | description | details |
|---|---|---|---|
| account | string | The name or id of the account who owns the address. | No quotes required. |
| sidechain | sidechain_type | One of: `bitcoin` (more will be added later). | n/a |
| deposit_public_key | string | The public key of a Bitcoin address. This will be used to generate the deposit address in the return of this function. | n/a |
| withdraw_public_key | string | The public key of a different Bitcoin address. This will be used for the withdraw address. | n/a |
| withdraw_address | string | The Bitcoin address that is connected to the withdraw_public_key. | n/a |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| name | data type | description | details |
|---|---|---|---|
| voting_account | string | The name or id of the account who is voting with their PPY. | No quotes required. |
| witness | string | The name or id of the SON's owner account. | No quotes required. |
| approve | bool | `true` if you wish to vote in favor of that SON, `false` to remove your vote in favor of that SON. | n/a |
| broadcast | bool | `true` to broadcast the transaction on the network. | n/a |
| Command | Description | Example |
|---|---|---|
| `systemctl start <SERVICE>` | Start a SERVICE (not reboot persistent) | `systemctl start peerplays.service` |
| `systemctl stop <SERVICE>` | Stop a SERVICE (not reboot persistent) | `systemctl stop peerplays.service` |
| `systemctl restart <SERVICE>` | Restart a SERVICE | `systemctl restart peerplays.service` |
| `systemctl reload <SERVICE>` | Reload the configuration files without interrupting pending operations | `systemctl reload peerplays.service` |
| `systemctl status <SERVICE>` | Show the status of a SERVICE | `systemctl status peerplays.service` |
| `systemctl list-units --type=service` | Display the status of all services | n/a |
| `systemctl list-unit-files --type=service` | List the services that can be started or stopped | n/a |
| `ls /etc/systemd/system/*.wants/` | Print a list of services (alternate) | n/a |
| `systemctl enable <SERVICE>` | Start SERVICE at next boot | `systemctl enable peerplays.service` |
| `systemctl disable <SERVICE>` | SERVICE won't be started at next boot | `systemctl disable peerplays.service` |
| `systemctl is-enabled <SERVICE>` | Check if a SERVICE is configured to start in the current environment | `systemctl is-enabled peerplays.service` |
| `systemctl daemon-reload` | Run this command after a change in any configuration file (old or new) | n/a |
| Command | Description |
|---|---|
| `journalctl -b` | Show all messages from last boot |
| `journalctl -b -p err` | Show all messages of priority level ERROR and above from last boot |
| `journalctl -f` | Follow messages as they appear |
| `journalctl -u <SERVICE>` | Show logs for SERVICE |
| `journalctl --full` | Display all messages without truncating any |
| `systemctl --state=failed` | Display the services that failed to start |
| `systemctl kill <SERVICE>` | Gently kill the SERVICE |
| `systemctl list-jobs` | Show jobs |
| Command | Description |
|---|---|
| `systemctl halt` | Halts the system |
| `systemctl poweroff` | Powers off the system |
| `systemctl reboot` | Restarts the system |
| `systemctl suspend` | Suspends the system |
| `systemctl hibernate` | Hibernates the system |
| `systemctl hybrid-sleep` | Hibernates and suspends the system |
A faucet is an API service and associated blockchain account, controlled by an automated script, that listens for new account creation requests. As a response, the faucet will register a new account on the chain for the recipient. In some cases, the faucet might also send a small amount of token to the new account to cover the transaction fees or other needs.
A Faucet acts as a platform that allows any user to create an account for themselves even before they have funds to pay the account creation fees.
A Faucet is significant in the process of onboarding new Users to the Peerplays blockchain and other related chains (Example: Testnet Chains).
Any wallet or blockchain-connected application or service that offers its users free account registration will need to rely on a faucet. To request new account creation, the wallet or other blockchain-enabled application communicates with the Faucet API, which runs as an API endpoint. Because the user's client software handles this communication, the user never needs to connect to the Faucet directly.
Anyone can run a Faucet:
PBSA runs a Faucet as a public service to facilitate onboarding requests.
Private parties executing private business logic on-chain might want to run their own Faucet exclusive to their users.
PBSA maintains the Faucet installation guide. Click the link below to learn the steps in detail: https://gitlab.com/PBSA/tools-libs/faucet
Direct account creation
Fund transfer
Send PPY/token for account creation
The witness node can be configured into different types of node based on the usage and requirements.
An API node provides network services to client applications. API nodes usually make account transaction histories accessible through API calls, but can vary in the amount of available history. These nodes have an open RPC port to expose the API. Click the link to learn how to configure an API node.
Sidechain Operator Nodes (SONs) facilitate the transfer of off-chain assets (like Bitcoin, Hive, or Ethereum tokens) between the Peerplays chain and the asset's native chain. These nodes often run the Peerplays node software alongside the node software of other chains. Click the link to learn how to configure a SON node.
A delayed node allows the user to introduce a delay in the flow of messages between nodes. It is used in scenarios where there is a need to control the timing of message processing. Click the link to learn how to configure a delayed node.
A reverse proxy is a server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. It provides an additional level of abstraction and ensures a smooth flow of traffic between client and server. A reverse proxy is effective in protecting the system from web vulnerabilities.
SSL stands for Secure Sockets Layer, the standard technology used to secure data transferred between two systems by means of encryption. The encryption scrambles data in transit and prevents attackers from reading or modifying it. The information could be anything sensitive or personal, such as credit card details, financial information, or personal details.
Nginx is used as a reverse proxy to route all requests arriving at your DNS name or IP on ports 80 and 443 to your application.
To install Nginx:
Check that the service is running:
To start Nginx when the server boots, enable it:
Add the following rules to the iptables of your servers:
SSL relies on a combination of a public certificate and a private key. The SSL key is kept secret on the server, while the SSL certificate is publicly shared with anyone requesting the content.
The private key is used by the server to decrypt and sign content, while the public certificate lets clients encrypt content for the server and verify content signed with the associated SSL key.
The directory to hold the public certificate should already exist on the server; its path is shown below.
Below are the steps to create the path that holds the private key files, secured to prevent any unauthorized access:
Use the following command to create a self-signed key and certificate pair with OpenSSL:
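A typical invocation might look like this. The subject name and output paths are placeholders; production setups usually write the key and certificate under /etc/ssl with sudo:

```shell
# Generate a 2048-bit private key and a self-signed certificate
# valid for one year. -nodes skips passphrase protection and
# -subj avoids interactive prompts.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=example.com" \
  -keyout nginx-selfsigned.key \
  -out nginx-selfsigned.crt
```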
To use OpenSSL securely, a strong Diffie-Hellman group should be created, as it will be used when negotiating Perfect Forward Secrecy with clients. The command below creates the DH group:
Create a new file in the directory below to configure a server block that serves content using the certificate files we generated. We can then optionally configure the default server block to redirect HTTP requests to HTTPS.
Create and open a file called ssl.conf in the /etc/nginx/conf.d directory:
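As a sketch, a server block in ssl.conf could look like the following. The domain name, certificate paths, and backend port (8090) are assumptions; substitute your own values:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
    ssl_dhparam /etc/nginx/dhparam.pem;

    location / {
        # TLS terminates here; proxy to the application over plain HTTP.
        proxy_pass http://localhost:8090;
    }
}
```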
With this configuration, Nginx responds on port 443 with encrypted content and on port 80 with unencrypted content. To protect confidential data moving between the client and server, HTTP requests should be redirected to HTTPS.
Thankfully, the default Nginx configuration allows directives to be added to the default port 80 server by placing files in the directory below.
Create a new file called ssl-redirect.conf and open it using the below command
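One common form is a catch-all block that redirects every HTTP request to HTTPS (a sketch; adjust it if a default port 80 server block already exists):

```nginx
server {
    listen 80;
    server_name _;
    # Send all plain-HTTP requests to HTTPS.
    return 301 https://$host$request_uri;
}
```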
In the /etc/nginx/conf.d/ssl.conf file, add the configuration to reverse proxy to your application. Remember that the proxy can pass traffic over plain HTTP rather than HTTPS, as TLS is handled by Nginx.
When TCP/IP packets cross the public internet, they are encrypted along the way, which is considered the "dangerous path"; when they reach Nginx, the server uses its private key to decrypt them, and everything stays secured.
Though there is no perfect security architecture, security can be enhanced further by configuring Nginx to reverse proxy to the application over HTTPS.
For example, if you have a Node Express server running, you would need an HTTPS configuration with proper SSL certificates set up on it. This adds another level of security and helps guard against man-in-the-middle attacks.
With the HTTP-only approach, add the following to the file location below, remembering to change the port:
WebSocket support requires a small addition to the location section of the /etc/nginx/conf.d/ssl.conf file. Add the following:
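A sketch of the location block with the WebSocket upgrade headers added (the backend port is an assumption):

```nginx
location / {
    proxy_pass http://localhost:8090;
    # Headers required for the WebSocket protocol upgrade.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```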
When multiple people use the system, this HTTPS configuration works for development purposes, but WSS might not work in a test or production environment.
When a self-signed certificate is used with WSS, the error below might occur:
To correct this error, a trusted certificate is required, and the Nginx configuration must reference trusted .pem certificate files only.
Use certbot with a reachable domain to generate the trusted certificate. The command below generates the required certificates. Add them to the Nginx configuration, replacing the self-signed certificates.
Restart Nginx to apply the changes:
The following document provides an overview of how to become a witness node, the perks of becoming a witness, duties of a witness, and a brief description about the node types.
An existing account
A machine running a witness_node that can be configured to produce blocks
The first Peerplays account can be created in a flash using the Peerplays DEX by following the steps in the document below.
Peerplays provides two networks on which the user may create an account: mainnet and testnet. Use a testnet account to learn and become familiar with the operations of a node. Once you have learned the process and are ready to work on the real Peerplays network, use a mainnet account. The User Guide will help you navigate Peerplays DEX and learn about its features and options in detail.
Click the URL below to create an account and then log in to the Peerplays DEX.
Mainnet Peerplays DEX access
Click the below link to use the Main-net DEX:
Testnet Peerplays DEX access
Click the below link to use the Test-net DEX:
Account creation with Peerplays DEX
Click the link below to learn about Peerplays account creation in detail.
The account should have some PPY balance to become a lifetime member. To create a witness, the account must have a lifetime membership status.
Syntax:
Example:
Output:
Your account can be configured as an eligible "block producer" using the cli_wallet. The commands to complete the configuration are explained at the link below.
The next step is to create a witness
The URL in this command is your own URL which should point to a page which describes who you are and why you want to become a Peerplays witness. Note your block signing key after you enter this command.
To execute the command, some PPY is required.
Syntax:
Example:
Expected Output:
The block signing key and its corresponding private key are required to get the witness ID. The block signing key is obtained from the output above, and the private key can be generated from it.
Syntax:
Then dump your keys to check and compare. One of the returned values from the following command should match your block_signing_key.
Lastly, we'll get your witness ID.
Example:
Expected Output:
Exit the cli_wallet with the quit
command. Back in the first command line window, we'll stop the node (Ctrl + c
) and edit the config.ini
file once again.
Once again, we need to wait for the node to sync the blocks to use the cli_wallet. After the sync, you can vote for yourself. Back in the second command line window:
Syntax:
Example Output:
Now you can check your votes to verify it worked.
Syntax:
Example output:
In Peerplays, the word "witness" refers to a lot of terms and definitions. A "witness node" is typically a machine on the network that runs the Peerplays core software, and "witnesses" (observes) blocks, validates that they are correct, and relays them to the rest of the network. A witness node may also be responsible for producing blocks, by assembling transactions into a block structure and signing the block with an approved signing key. These special witness nodes are called "block-producing witness nodes" or "block producers".
To determine which witness nodes are allowed to produce blocks, holders of PPY can vote for "witnesses," which are accounts on the chain that have applied for a block-producing witness role. The top-voted witness accounts get to designate the signing key that will allow their witness nodes (the machines running Peerplays core software) to fulfill the block-producer role.
The Peerplays block-producing witnesses will bundle transactions into blocks and sign them with their signing keys. Witnesses keep the blockchain alive by producing one block every three seconds.
For example, if there are 20 Witnesses, each would produce one block every minute on average, in a random rotation.
Witness nodes that are not block producers can serve a variety of other roles, including being an API node, a seed node, a SON node, or other purposes.
There are various types of procedures like Manual, Docker, and GitLab Artifact installation for the Witness node. Click the below link to learn about the installation steps involved in detail.
The below page helps to learn about the Peerplays witness.
Based on your requirements, the witness node can be installed using any of the procedures below.
The list of Peerplays mainnet API nodes is given below.
A list of public full API nodes for the Peerplays blockchain is maintained as a Github gist:
The API node is a computer or virtual machine participating in the Peerplays Peer-to-Peer (P2P) network and maintaining an open port for queries and other client interactions. It provides a gateway to the blockchain by exposing a client API (Application Programming Interface) for inspecting or interacting with the blockchain. API nodes usually function as a back-end service to front-end software that provides a user interface (such as a graphical “wallet” interface), but they can also be queried directly with command-line tools for blockchain introspection.
Node types:
The instructions below detail the installation and setup of a private API node that connects to the Peerplays public Test Net. The install base is an Ubuntu 20.04 Virtual Machine with root access via SSH, such as one might acquire from a cloud compute provider like DigitalOcean or Linode. The requirements for a test net API node are lightweight — 1 GB RAM and 25 GB disk should be sufficient.
Note: The steps here are for the public test net. For a main net deployment, some steps will differ and the CPU, RAM, and disk requirements may be steeper.
After instantiating a VM or cloud compute machine based on Ubuntu 20.04 or similar, the first step is to connect to it in a terminal window using “ssh” and your node name. The below command and output is an example:
The below commands will be executed in the newly deployed node terminal, which will act as a private API node.
The number of software packages to be updated can be determined by using the command “apt update”. The output for the command is provided below:
Example output:
Similarly, to see the list of packages run the “apt list --upgradable” command.
Example output:
After collecting the list of software packages, run the command “apt upgrade” to begin the upgrade. The command will build, extract, unpack, and install the package.
Example output:
The two executables needed are witness_node and cli_wallet. These can be compiled, or built, via the instructions at https://gitlab.com/PBSA/peerplays. Often, however, pre-built binaries for your system (e.g. Ubuntu 20.04 on AMD64 hardware) may be available in the “Releases” tab in the GitLab repo. The instructions that follow involve retrieving pre-built binaries that were compiled for “TESTNET”.
The files related to witness_node and cli_wallet should be stored in an accessible directory and it is handy to keep subdirectories based on software versions, to make it easy to select different builds in the future. So, create a directory as follows:
In the directory created above, run the following commands to fetch the witness node and cli wallet related files:
Example Output:
To configure the witness node use the below command
To configure the cli wallet use the below command:
Now the witness node and cli wallet files will be present in the desired folder.
To make the witness node and cli wallet files executable, the file permission must be given accordingly. Use the below command to provide necessary permission. The “a+x” makes the file executable for all users.
Example Output:
The necessary file permissions are given to the files.
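A runnable sketch of the permission step follows. It uses stand-in files in a scratch directory, since the real paths depend on where you placed the downloaded binaries:

```shell
cd "$(mktemp -d)"                  # scratch directory for the demo
touch witness_node cli_wallet      # stand-ins for the downloaded binaries
chmod a+x witness_node cli_wallet  # a+x: executable for all users
ls -l witness_node cli_wallet      # the x bits now appear, e.g. -rwxr-xr-x
```

In practice, run the `chmod` line against the real `witness_node` and `cli_wallet` files in your version directory.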
Boost is an open-source collection of software development libraries that provides various tools and utilities for C++ programmers. The Peerplays C++ codebase relies on the Boost libraries, which are widely used to create high-quality, efficient, and portable C++ code.
Output:
libBitcoin
A runtime library is a collection of executable software programs used at program run time to provide one or more native program functions or services. Peerplays witness_node software depends on some libraries that are not widely available. We have built these libraries and hosted them for convenient download.
Output:
2.2 Installation of the runtime libraries
The following command will install the runtime libraries. If you observe a “permission denied” message in the output, it can be ignored, as the installation will in some cases be done directly as root.
Output:
2.3 Checking the library file installation
To make sure the library files are installed correctly, run the following command in the terminal,
Output:
Run the below command to check the witness node version and other build versions.
Output:
Run the below command,
Next we will create the folders in which the witness node will store block data and configuration files.
Run the below command to create the required directory and move to the location: /root/Node/peerplays-testnet
The user should be in the location /root/Node/peerplays-testnet; then create the bin folder. Below is the output:
The ln command creates links, or aliases, to other files on the system. The ln -s command creates soft (symbolic) links. We will use this to link to the current version of the software. (This will make it easier to upgrade later.)
Execute the following commands in terminal:
The user should be in the directory “ /root/Node/peerplays-testnet/bin ”
Execute the following commands in the terminal,
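A runnable sketch of the versioned-directory-plus-symlink layout follows. The paths here are illustrative (a scratch directory stands in for /root/Node/peerplays-testnet/bin), and the version numbers are examples:

```shell
# Keep per-version directories and point a "current" symlink at the one
# in use; upgrading later is just repointing the link.
BASE="$(mktemp -d)"                    # stands in for the real bin directory
mkdir -p "$BASE/1.6.1"                 # one directory per software version
ln -sfn "$BASE/1.6.1" "$BASE/current"  # -s: symbolic, -f: replace, -n: don't follow
readlink "$BASE/current"               # shows the 1.6.1 directory

# A later upgrade would simply be:
#   ln -sfn "$BASE/1.7.0" "$BASE/current"
```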
The “genesis” file is a JSON description of the initial starting state of the Peerplays Public Test Net. It is needed to allow the software to connect to and interoperate with the public test net.
Use the wget command to download the required file. Execute the following:
After downloading the JSON file, verify that its SHA-256 checksum matches the given value: “195d4e865e3a27d2b204de759341e4738f778dd5c4e21860c7e8bf1bd9c79203”. Make sure both values are the same.
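A runnable sketch of the verification pattern follows, using a stand-in file with a known checksum; for the real genesis file, substitute its filename and the published value given above:

```shell
cd "$(mktemp -d)"                 # scratch directory for the demo
printf 'hello\n' > genesis.json   # stand-in for the real downloaded file
# For the demo, EXPECTED is the known SHA-256 of "hello\n"; for the real
# genesis file, use the published checksum from this guide instead.
EXPECTED="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"
ACTUAL="$(sha256sum genesis.json | awk '{print $1}')"
[ "$ACTUAL" = "$EXPECTED" ] && echo "checksum OK" || echo "checksum MISMATCH"
# → checksum OK
```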
Running witness_node for the first time will create the data directory structure for blocks and configuration. Execute the below command to create the witness node directory. The software will hang after a while because it does not yet know how to connect to the correct network. When console output stops, stop the witness_node with Ctrl-C.
Note that a new folder “witness_node_data_dir” now exists.
Once the editor is installed, some values in the config file witness_node_data_dir/config.ini need to be updated. But, before updating the file, you may wish to make a copy of the original config file with another name.
Execute the below commands in the terminal:
Now, the original config file is copied and saved as “config.ini-original”. The user can update the values in the config.ini file.
The user must be in the directory /root/Node/peerplays-testnet/witness_node_data_dir and open the config.ini file using the editor installed.
The config.ini file is large and only the necessary values are mentioned here.
Only the highlighted values in the above config.ini file must be updated based on the values of your node and save the file.
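As a minimal sketch, the settings typically updated for a private test net API node might look like the following. The seed-node address here is a placeholder, not a real endpoint, and the ports follow the listening-port notes in this guide:

```ini
# Endpoint for P2P node to listen on (0.0.0.0 also accepts incoming peers)
p2p-endpoint = 0.0.0.0:19777

# Endpoint for websocket RPC to listen on (127.0.0.1 = localhost only)
rpc-endpoint = 127.0.0.1:18090

# P2P node to connect to on startup (placeholder — use a real
# Peerplays test net seed node)
seed-node = seed.example.org:19777
```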
Notes on listening ports:
After the config file is updated, the node must be resynced with the blockchain to reflect the changes. The user must be in the location “/root/Node/peerplays-testnet”. This command should be executed only once to sync the chain. (If started with --resync-blockchain again in the future, the blocks database will be discarded and the entire blockchain will be downloaded again.) Execute the below command to perform the resync,
After the sync is complete, the terminal will begin to show blocks received in real time. You can stop the node at this point (with Ctrl-C), and in the future start it without the resync flag.
To start the node normally, run the below command:
The purpose of the “screen” command is to create a pseudo virtual terminal. It is mainly used to keep the witness node running even after the terminal dies. The terminal must be screened before starting the execution, to keep the witness node alive at all times. The node can also be reattached from another terminal at any time.
Execute the below command to screen the terminal before starting the node:
To detach the screen session
Press Ctrl+A and then D to detach from the screen session. It can later be reattached with the same “screen -DRRS PeerplaysTestNet” command.
To check the SCREEN session status
When logging into the node, one can check for running screen sessions with:
The screen command used here is one of the options for persisting the terminal. Users can pick their own option to perform this operation. Other options include tmux, pm2, Docker, and others.
Setup a Witness Node using a pre-compiled GitLab artifact
This document assumes that you are running Ubuntu 20.04.
The Gitlab artifacts were built targeting Ubuntu 20.04 and will not work on Ubuntu 18.04. While Peerplays does support Ubuntu 18.04, you'll need to follow the Manual Install guide for Ubuntu or use Docker to use it on this version.
The following steps outline the artifact installation of a Witness Node:
Prepare the environment
Download and extract the artifacts
Copy the artifacts to the proper locations
Update the config.ini file
Start the node
Please see the general Witness hardware requirements.
For the GitLab artifact install, the requirements that we'll need for this guide would be as follows:
The artifacts from GitLab are already built for x86_64 architecture. These will not work with ARM based architecture.
The following dependencies are necessary for a clean install of Ubuntu 20.04:
CMake is an open-source, cross-platform tool that uses independent configuration files to generate native build files specific to the compiler and platform. It ships as precompiled binaries, and the CMake tools make configuration, building, and debugging much easier.
Install the cmake using the below commands:
Boost libraries provide free peer-reviewed portable C++ source libraries and it can be used across broad spectrum of application.
Install Boost libraries using the below commands:
The components libzmq and cppzmq are used for relaying messages between nodes.
First, install libzmq using the below commands:
Next, install cppzmq using the below commands:
GSL is the GNU Scientific Library for numerical computing. It is a collection of routines for areas such as linear algebra, probability, random number generation, statistics, and differentiation.
Install the gsl using the below commands:
The libbitcoin toolkit is a set of cross platform C++ libraries for building bitcoin applications. The toolkit consists of several libraries, most of which depend on the base libbitcoin-system library.
Install the libbitcoin using the below commands:
Doxygen is a software utility that recognizes comments within C++ code that have a certain form, and uses them to produce a collection of HTML files containing the information in those comments.
Install the Doxygen using the below commands:
Perl is a high-level, general-purpose, interpreted, dynamic programming language originally developed for text manipulation.
Install the Perl using the below commands:
Artifacts are pre-built binaries that are available to download from GitLab. You can see the available pipelines, sorted by release tags, on the GitLab Peerplays project page. The link in the code below refers to release version 1.6.1, which is the latest production release as of the writing of this document. Please make sure to replace the tag with the one you need.
Double check the tag in the download link!
Putting the witness_node and cli_wallet programs in the /usr/local/bin directory will allow us to call them from any directory.
Now we can start the node with:
Launching the witness creates the required directories, which contain the config.ini file we'll need to edit. We'll stop the witness now with Ctrl + C so we can edit the config file.
We need to set the endpoint and seed-node addresses so we can access the cli_wallet and download all the initial blocks from the chain. Within the config.ini file, locate the p2p-endpoint, rpc-endpoint, and seed-node settings and enter the following addresses.
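As an illustrative sketch only — the actual seed-node address must come from the Peerplays network documentation, and 9777/8090 are assumed default ports — the settings might look like:

```ini
p2p-endpoint = 0.0.0.0:9777        # accept incoming P2P connections
rpc-endpoint = 127.0.0.1:8090      # cli_wallet connects here from localhost
seed-node = seed.example.org:9777  # placeholder; use a real seed node
```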
Save the changes and start the Witness back up.
We have successfully started the witness node and it is now ready for configuration.
Next step is to configure the witness node based on the requirement. There are different ways in which the nodes can be configured such as block producer, SON node, API node, and delayed node.
Becoming a block producer is one of the important steps as it is mandatory to use the node for transactions across the wallet. Follow the steps from the below document to become a block producer,
There are other ways in which the node can be configured. The below document showcases the other options available for node configuration.
Witness: An independent server operator which validates network transactions.
Witness Node: Nodes with a closed RPC port. They don't allow external connections. Instead these nodes focus on processing transactions into blocks.
Delayed Node: A node that tracks a trusted witness node and serves blockchain data only after a delay, once blocks are no longer at risk of being reverted. This shields downstream services from chain reorganizations.
The witness_node program can now be used to serve the features that were served by delayed_node.
To use witness_node as a delayed node, some options are required to be configured appropriately.
Start witness_node with the command line option --seed-nodes="[]" and/or --p2p-endpoint=127.0.0.1:0, or configure the options in config.ini:
Start witness_node with --plugins="delayed_node [and other required plugins]", or configure it in config.ini:
Start witness_node with --trusted-node="ip.address.of.the.witness.node:rpc-port", or configure it in config.ini:
Assuming the RPC endpoint of the trusted node is 127.0.0.1:8090, we can have the following options in the config.ini of a delayed node.
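Putting the options above together, a delayed node's config.ini might contain entries like these (a sketch; keep whatever other plugins you need in the plugins list):

```ini
# Do not connect to any P2P peers; the delayed node gets its data
# from the trusted node instead
seed-nodes = []
p2p-endpoint = 127.0.0.1:0

# Enable the delayed_node plugin (plus any other required plugins)
plugins = delayed_node

# RPC endpoint of the trusted witness node to follow
trusted-node = 127.0.0.1:8090
```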
The plugin code is available in the below location,
Up until this point we have been running the node in the foreground which is fragile and inconvenient. So let's start the node as a service when the system boots up instead.
After that, it would be smart to create a backup server to enable you to make software updates, troubleshoot issues with the node, and otherwise take your node offline without causing service outages.
You've got a Witness node. Now you'll need a BOS node. And since you're in the node making mood, how about a SON too?
If you have a node that is accessible from the internet (for example, an API or Seed node) it would be wise to enable SSL connections to your node.
Congrats! You've successfully installed your Witness node using GitLab artifacts!
After configuring the node with desired configuration, click below to learn the NEXT steps
P2P Node:
Relays blocks and transactions
Can be a “seed node” (helps other nodes sync blocks)
API Node:
A P2P node, but also:
Exposes API for inspecting blockchain state and/or broadcasting transactions
BP Node:
A P2P Node, but also:
Produces blocks for the network (BP = Block Producing)
Must be elected by stakeholders to produce blocks; is rewarded for successful block production
p2p-endpoint:
This is the port the node uses for P2P connections (for relaying blocks and transactions across the network). 127.0.0.1:19777 — Node will make out-going P2P connections to peers, but will not accept incoming connections from peers.
0.0.0.0:19777 — Node will accept incoming P2P connections in addition to making outgoing connections.
rpc-endpoint:
This is the “client port” that client software (GUI interfaces, CLI wallets, etc.) connect to to interact with the network. 127.0.0.1:18090 — Connections allowed from “localhost” only — such as cli_wallet or a GUI wallet running on the same machine as the node. 0.0.0.0:18090 — Connections allowed from any machine. Useful when the client software and the node are on separate machines, or when offering the API connection as a public service.
| Node Type | CPU | Memory | Storage | Bandwidth | OS |
| --- | --- | --- | --- | --- | --- |
| Witness | 4 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 20.04 |
The minimum system requirements of the server which will host your SON Node.
Example config.ini settings and their explanations for SON Node operation.
Compile and run the source code without a Docker container.
Use a pre-configured Docker container to run a SON Node.
CLI checks to ensure the successful installation of Bitcoin-SON node.
The following SON config file is used as a template for the SON test network.
Each node will require fine-tuning of this config file to set the parameters specific to that node.
On config file recreation, default values will be assigned to some of the properties, but some of them need to be changed or added manually for more complex testing environments.
To find the values you need to put into the config file for a particular SON, you need to get the full SON account info and its private key using cli_wallet. Execute these commands with an initialized wallet.
Our config params should look like this now
Now we need to create a new Bitcoin address, get its public and private keys, and add them to the config file.
Our config param should look like this now
For a more realistic test scenario, witnesses also need to be configured with their own IDs and public and private keys.
Set up a Sidechain Operator Node (SON) using a pre-configured Docker container
This document assumes that you are running Ubuntu 18.04. Other Debian based releases may also work with the provided script.
This tutorial will take you through the steps required to have an operating SON. Since SONs serve the purpose of facilitating transfers of assets between the Peerplays blockchain and other blockchains, we'll need to connect to another chain to be of any use...
The following steps outline the Docker installation of a (Bitcoin enabled) SON:
Preparing the Environment
Installing Docker
The Bitcoin node
Installing the peerplays:son image
Starting the environment
Using the CLI wallet
Update config.ini
with SON Account Info
Before we begin, to set up a SON node requires about 110 PPY. This is to pay for an upgraded account (5 PPY) and to fund two vesting balances (50 PPY each). The remaining funds are to pay for various transaction fees while setting up the node. Please see Obtaining Your First Tokens for more info.
Note that these fees will likely change over time as recommended by the Committee of Advisors.
Please see the SON hardware requirements.
For the docker install, we'll be using a self-hosted Bitcoin node. The requirements that we'll need for this guide are as follows (as per the hardware requirements docs):
Then we'll clone the Peerplays Docker repository.
It is required to have Docker installed on the system that will be performing the steps in this document.
Docker can be installed using the run.sh script inside the Peerplays Docker repository:
Since the script has added the currently logged in user to the Docker group, you'll need to re-login (or close and reconnect SSH) for Docker to function correctly. You can check whether the current user belongs to the Docker group with the groups command. If the Docker group is still not listed after a re-login, you'll have to reboot the machine with sudo reboot (this will be the case if you're using Ubuntu 20.04).
You can look at https://docs.docker.com/engine/install/ to learn more about how to install Docker. Or, if you are having permission issues trying to run Docker, use sudo or look at https://docs.docker.com/engine/install/linux-postinstall/.
Copy the example.env to .env located in the root of the repository:
We're going to have to make some changes to the .env file, so we'll open that now using the Vim editor. Here are the important parts of the .env file. These will be the parts that need to be edited or optionally edited. The rest of the file should be unchanged.
There are two options available to connect to the Bitcoin network.
Run a Bitcoin node yourself
Find an open Bitcoin node to connect to
For the purposes of this guide, I'll discuss how to run a node yourself as that will be a more reliable connection for now. Either way you go, you'll need to collect the following information to use in the config.ini file:
The IP address of a Bitcoin node you can connect to (127.0.0.1 if self-hosting)
ZMQ port of the Bitcoin node (default is 1111)
RPC port of the Bitcoin node (default is 8332)
Bitcoin RPC connection username (default is 1)
Bitcoin RPC connection password (default is 1)
Bitcoin wallet label (default is son-wallet)
Bitcoin wallet password
A new Bitcoin address
The Public key of the Bitcoin address
The Private key of the Bitcoin address
First we'll download and install one of the official Bitcoin Core binaries:
The official Bitcoin Core binaries can be found here: https://bitcoincore.org/en/download/
The latest version is 0.21.1 as of July 2021. You may want to find and download the latest version of the binaries just like you would for the 0.21.1 version above.
Then we make a config file to manage the settings of our new Bitcoin node.
in the Vim text editor we'll set the following (You can copy and paste the content of this complete config file):
Save and quit the Vim editor.
The settings in the config file above are set to reduce the requirements of the server. Block pruning and setting the node to Blocks Only save network and storage resources. For more information, see https://bitcoin.org/en/full-node#reduce-storage.
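As a sketch, a bitcoin.conf reflecting those settings might look like the following. The rpcuser/rpcpassword values of 1 are this guide's defaults, and the ZMQ port of 1111 matches the default listed above; use strong credentials in practice:

```ini
# Run as a daemon and accept RPC commands
server=1
daemon=1

# Reduce storage: block pruning (keep only ~550 MB of blocks)
prune=550

# Blocks-only mode: don't relay loose transactions (saves bandwidth)
blocksonly=1

# RPC credentials (this guide's defaults; change them!)
rpcuser=1
rpcpassword=1

# ZMQ notifications on port 1111 for the SON plugin
zmqpubrawblock=tcp://127.0.0.1:1111
zmqpubrawtx=tcp://127.0.0.1:1111
```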
Lastly we'll set a Cron job to ensure the Bitcoin node starts up every time the server starts.
At the bottom of the crontab file, add the following:
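As a sketch, assuming bitcoind was installed to /usr/local/bin, the crontab entry could be:

```
# Start the Bitcoin daemon at every boot
@reboot /usr/local/bin/bitcoind -daemon
```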
Save and quit the crontab file. Now we're ready to fire up the Bitcoin node!
If successful, you'll see Bitcoin Core starting. As an extra check to see if everything is working, try the bitcoin-cli -version or bitcoin-cli getblockchaininfo commands.
You can also use this website to check the status of your node: https://bitnodes.io/
If you use the Bitnodes website, your node will appear as "down" until it's almost done downloading and verifying the Bitcoin chain. This can take a while.
Your Bitcoin node should now be downloading the Bitcoin blockchain data from other nodes. This might take a few hours to complete even though we cut down the requirements with block pruning. It's a lot of data after all.
We'll need a wallet to store your Bitcoin address.
At this point we hit a fork in the road! You'll need to do one of the following:
Option 1: Generate a new Bitcoin address to use for your SON node. (see 3.2.a. below)
Option 2: Import an existing Bitcoin address to use for your SON node. (see 3.2.b. below)
Either way, you'll need the Bitcoin address, its public key, and its private key.
Now we will create a Bitcoin address.
Then we'll use this address to get its keys.
Now we get the private key.
You don't need to do this if you made a new address in step 3.3.a. above!
Now we will import an existing Bitcoin address. You'll need the private key of the existing address which should be obtainable from your current wallet. You may not be able to get the private key from online or cloud wallet providers (contact their support teams for assistance with this.)
Then you can get the public key with the getaddressinfo command.
That was a lot to go over. Let's collect our data. Here's an example:
Keep your tuple handy. We'll need it in the Peerplays config file.
Use run.sh to pull the SON image:
There are many example configuration files; make sure to copy the right one. In this case it is config.ini.son-exists.example.
Copy the correct example configuration:
We'll need to make an edit to the config.ini file as well. The important parts of the config.ini file (for now!) should look like the following. But don't forget to add your own Bitcoin public and private keys!
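As an illustrative sketch only — the exact option names should be taken from the copied example file, the connection values are the defaults collected earlier in this guide, and the placeholders must be replaced with your own data:

```ini
# Bitcoin sidechain connection (this guide's defaults)
bitcoin-node-ip = 127.0.0.1
bitcoin-node-zmq-port = 1111
bitcoin-node-rpc-port = 8332
bitcoin-node-rpc-user = 1
bitcoin-node-rpc-password = 1
bitcoin-wallet = son-wallet
bitcoin-wallet-password = <your-wallet-password>

# Your Bitcoin address and keys (placeholders)
bitcoin-address = <your-bitcoin-address>
bitcoin-public-key = <your-bitcoin-public-key>
bitcoin-private-key = <your-bitcoin-private-key>
```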
Save the file and quit.
Once the configuration is set up, use run.sh to start the peerplaysd and bitcoind containers:
The SON network will be created and the seed (peerplaysd) and bitcoind-node (bitcoind) containers will be launched. To check the status, inspect the logs:
If the logs are throwing errors, perform a replay.
After starting the environment, the CLI wallet for the seed (peerplaysd) will be available.
Open another terminal and use docker exec to connect to the wallet.
If an exception is thrown and contains Remote server gave us an unexpected chain_id, then copy the remote_chain_id that is provided by it. Pass the chain ID to the CLI wallet:
Set a password for the wallet and then unlock it:
The CLI wallet will show unlocked >>> when successfully unlocked.
A list of CLI wallet commands is available here: https://devs.peerplays.tech/api-reference/wallet-api/wallet-calls
Assuming we're starting without any account, it's easiest to create an account with the Peerplays GUI Wallet. The latest release is located here: https://github.com/peerplays-network/peerplays-core-gui/releases/latest. When you create an account with the GUI wallet, you should have a username and password. We'll need those for the next steps. First we'll get the private key for the new account.
The key beginning with "PPY" is the public key. The other key is the private key. We'll need to import this private key into the cli_wallet.
Next we'll upgrade the account to a lifetime membership.
At the time of writing this guide, it costs 5 PPY to perform this operation. You'll need that in your account first! To this end, check out Obtaining Your First Tokens.
Next we'll create the vesting balances.
Now we have all the info we need to create a SON account.
To get the SON ID:
We'll set the signing key using the active key from the owning account:
Now we have our SON account ID and the public and private keys for the SON account. We'll need this for the config.ini file.
Let's stop the node for now so we can finish up the config.ini.
Ensure the following config settings are in the config.ini file under the peerplays_sidechain plugin options.
Then it's just a matter of starting the node back up!
Your SON is now good to go!
Up until this point we have been running the node in the foreground which is fragile and inconvenient. So let's start the node as a service when the system boots up instead.
After that, it would be smart to create a backup server to enable you to make software updates, troubleshoot issues with the node, and otherwise take your node offline without causing service outages.
Why stop at Bitcoin?
Now you have a SON, but have you thought about becoming a Witness? It will be a piece of cake for you since you've already set up a SON.
If you have a node that is accessible from the internet (for example, an API or Seed node) it would be wise to enable SSL connections to your node.
SON: Sidechain Operator Node - An independent server operator which facilitates the transfer of off-chain assets (like Bitcoin or Ethereum tokens) between the Peerplays chain and the asset's native chain.
Witness: An independent server operator which validates network transactions.
Witness Node: Nodes with a closed RPC port. They don't allow external connections. Instead these nodes focus on processing transactions into blocks.
Vim is a text editing program available for Ubuntu 18.04. See vim.org
Peerplays Core release 1.6.0 will introduce changes to SONs operations, and SONs operators (those acting as a SON on the network) will need to upgrade to the latest software and make changes to their node configuration in the config.ini file. These changes are critical to ensure uninterrupted SONs network services.
A brief overview of changes is given here, followed by details in subsequent sections, including changes SONs operators will need to make to their nodes.
From 1.6.0 forward, following the hard-fork date, it will be possible for SONs operators to operate a subset of SONs. They will no longer be required to operate all supported SONs. I.e., if they wish to operate a Bitcoin SON but not a Hive SON, this will be possible following the hard fork.
1.6.0 Introduces experimental support for Libbitcoin.
Note: As support is experimental, it is recommended to continue using a bitcoind API access point at this time.
1.6.0 renames a configuration parameter for clarity. Existing config.ini files will need to be adjusted.
1.6.0 adds some new configuration parameters, of which a few are mandatory. Existing config.ini files will need to be adjusted.
1.6.0 adds a mandatory configuration parameter. Existing config.ini files will need to be adjusted.
1.6.0 adds support for Ethereum SONs. SONs operators MAY add an Ethereum SON to their operations.
With this new upgrade, Bitcoin SON operators have two options for interfacing with the Bitcoin network. The two Bitcoin API endpoint options are:
Bitcoind
Libbitcoin (Experimental) (Note: The current recommendation is to continue using bitcoind until support for Libbitcoin reaches maturity.)
Important: Main-net SONs operators are recommended to use the bitcoind option at this point as the libbitcoin feature is considered experimental in this release. Test-net SONs operators may try out Libbitcoin if desired.
Operators upgrading from a previous version will need to edit their existing config.ini file, as new config options are available and some are required in this release.
Parameter: use-bitcoind-client (recommended to use)
This option is used to select bitcoind as the API protocol for accessing the Bitcoin network. The user must add this option to their config and change the value from 0 to 1 to enable its usage. (Otherwise it will default to libbitcoin.)
Libbitcoin parameters The use of libbitcoin is not yet recommended for Main-net. These options MAY be added to the config file, but will be ignored so long as bitcoind is enabled. Example values are as follows:
Parameter: bitcoin-wallet-name
In 1.6.0, the bitcoin-wallet option in config.ini is renamed to bitcoin-wallet-name. The old option will not be recognized.
If the input is not updated, the SONs will not function, and the following warning message will be shown at witness_node startup:
“Haven’t set up Bitcoin sidechain parameters”
After the upgrade, the HIVE plugin has only one new parameter added.
hive-wallet-account-name
The input for this parameter is the SON multisig account name on the HIVE network, which is controlled by the SONs operators. This is the son-account on HIVE mainnet.
New in this release, a SON operator may operate an ETH SON. Operators who wish to enable an ETH-SON can follow the steps from the link below,
The config.ini file consists of all the necessary details to configure the node based on the operator's requirements. It has options to enable or disable the use of specific assets, endpoints, wallets, and more.
The set of active plugin options such as witness, debug_witness, account_history, market_history, and peerplays_sidechain are available.
Other plugins such as elastic_search, es_object, snapshot, and delayed_node are available for use based on the operator's requirements. (Disabled by default.)
The config.ini can be divided into two sections:
End point configuration
Plugin options (only active configurations are explained here)
The user must provide details such as which network interfaces and ports to listen on, and which seed nodes to use for peer discovery. Also, to be a SONs operator, the relevant plugin must be enabled.
The default configuration file looks as follows:
Only the essential details to configure a SON node are mentioned below. The operator must provide their own details for configuration. Example configuration values are as follows:
The list of active plugins available in the config file are,
witness
debug_witness
account_history
market_history
peerplays_sidechain
The witness plugin default config is provided below and there is no manual configuration required if the SON operator is not also a block-producing witness on the same machine.
The debug_witness default config is provided below. By default the keys will be added after the node creation,
The account_history default config is provided below and no manual changes are required unless your use case necessitates it.
The market_history default config is provided below and no manual changes are required unless your use case necessitates it.
This plugin consists of all the necessary details about various assets, IPs, private keys, wallet details, and API endpoints. In order to make the SON node work as required, the operator must carefully input their values and requirements in this configuration.
The default plugin configuration is mentioned below,
An example configuration to enable basic requirements for a SON node is explained below,
The config file contains all the default public/private keys for the SON accounts; no changes are required here, but unused key pairs may be removed.
The list of keypairs is ordered: the first belongs to sonaccount1 and the last to sonaccount16.
By using the cli_wallet, the SON account information and private key can be collected. Execute the below command in the wallet:
The config parameter should look like the below example,
Let's use Bitcoin!
| Bitcoin node type | CPU | Memory | Storage | Bandwidth | OS |
| --- | --- | --- | --- | --- | --- |
| Self-Hosted, Reduced Storage | 2 Cores | 16GB | 150GB SSD | 1Gbps | Ubuntu 18.04 |
MongoDB is a NoSQL database with fully flexible index support and rich query capabilities.
This document explains how to install MongoDB (as root/sudo).
First of all, import the GPG key for the MongoDB apt repository on your system using the following command. This is required to verify packages before installation.
Then add the MongoDB APT repository URL to /etc/apt/sources.list.d/mongodb.list.
Ubuntu 18.04 LTS:
After adding required APT repositories, use the following commands to install MongoDB on your systems. It will also install all dependent packages required for MongoDB.
If you want to install a specific version of MongoDB, define the version number as follows:
After installation, MongoDB will start automatically. To start or stop MongoDB use an init script. For example:
And use the following commands to stop or restart the MongoDB service.
Finally, use the below command to check the installed MongoDB version on your system.
And check the status with:
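For example (assuming the systemd unit is named mongod):

```shell
mongod --version              # print the installed MongoDB version
sudo systemctl status mongod  # check the service status
```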
Important: Some versions name the service mongod and some mongodb. If you get an error with the above command, use sudo service mongodb status instead.
Also, connect to MongoDB using the command line and execute some test commands to confirm it is working properly.
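For example, a quick connectivity check with the mongo shell (shipped with MongoDB 4.x and earlier):

```shell
# Run a simple server-side status command and print the result
mongo --eval 'db.runCommand({ connectionStatus: 1 })'
```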
In this first step, we'll install everything we'll need going forward.
Note: Dependencies must be installed as root/sudo.
Tip: virtualenv is a best practice for Python, but installation can also be done at the user or global level.
MongoDB is used for persistent storage within BOS.
For additional information on how to use MongoDB, refer to the tutorials for your distribution.
Important: Make sure that MongoDB is running reliably with automatic restart on failure.
Redis is used as an asynchronous queue for the python processes in BOS.
For additional information on how to install Redis, refer to your Linux distribution's documentation.
Important: Make sure that Redis is running reliably with automatic restart on failure, and that it runs without any disk persistence.
It is highly recommended that both daemons are started on start-up.
To start the daemons, execute:
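A sketch using systemd (unit names vary by distribution):

```shell
# Start both daemons now and enable them at boot
sudo systemctl enable --now mongod
sudo systemctl enable --now redis-server
```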
Important: Common Issues:
Exception: Can’t save in background: fork or MISCONF Redis is configured to save RDB snapshots.
This indicates that either your queue is very full and the RAM is insufficient, or that your disk is full and the snapshot can’t be persisted.
Create your own Redis configuration file (https://redis.io/topics/config) and use it to deactivate disk persistence and activate memory overcommit:
https://redis.io/topics/faq#background-saving-fails-with-a-fork-error-under-linux-even-if-i-have-a-lot-of-free-ram or https://stackoverflow.com/questions/19581059/misconf-redis-is-configured-to-save-rdb-snapshots/49839193#49839193
https://gist.github.com/kapkaev/4619127
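One possible sketch (file locations vary by distribution): since BOS uses Redis only as a queue, disable RDB snapshots in your Redis configuration and enable memory overcommit via sysctl:

```
# redis.conf -- disable RDB snapshots (no disk persistence)
save ""

# /etc/sysctl.conf -- let Redis fork for background operations
vm.overcommit_memory = 1
```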
Exception: IncidentStorageLostException: localhost:27017: [Errno 111] Connection refused or similar.
This indicates that your MongoDB is not running properly. Check your MongoDB installation.
Note: bos-auto must be installed as user.
You can either install bos-auto via PyPI / pip3 (production installation) or via git clone (debug installation).
For production use, installing bos-auto via pip3 is recommended, but the git master branch is always the latest release as well, making both installations equivalent. A separate user is recommended.
For debug use, check out the master branch from GitHub and install dependencies manually.
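The install commands were stripped from this page; they typically look like the following (the PBSA GitHub URL is an assumption; use the official repository):

```shell
# Production installation (as a non-root user):
pip3 install --user bos-auto

# Debug installation from source:
git clone https://github.com/PBSA/bos-auto
cd bos-auto
pip3 install --user -r requirements.txt
```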
bos-auto is supposed to run in the virtual environment. Either activate it beforehand, as above, or run it directly from the env/bin folder.
Important: If bos-auto is installed as root and not as user, then you'll likely get errors similar to the following:
For the production installation, upgrade to the latest version, including all dependencies, using:
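A sketch of the upgrade command (the flags assume a per-user installation):

```shell
pip3 install --user --upgrade bos-auto
```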
For the debug installation, pull the latest master branch and upgrade dependencies manually:
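A sketch, assuming the source checkout from the debug installation:

```shell
cd bos-auto
git pull
pip3 install --user --upgrade -r requirements.txt
```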
Next, we need to go through the steps required to set up bos-auto properly.
After the bos-auto configuration, we need to spin up bos-auto to see if it works properly.
bos-mint is a web-based manual intervention module that allows you to perform all sorts of manual interactions with the blockchain.
For more information see:
The isalive call should be used for monitoring. The scheduler must be running, and the default queue should have a low count (< 10).
Here is an example of a positive isalive check:
The default configuration looks like the following and is (by default) stored in config.yaml:
Both the API and the worker make use of the same configuration file.
We need to provide the wallet pass phrase in order for the worker to be able to propose changes to the blockchain objects according to the messages received from the data feed.
The messages sent to the API need to follow a particular message schema, which is defined in endpointschema.py.
The SON manual installation process is similar to the witness node installation. However, since SONs transfer assets between the Peerplays blockchain and other blockchains, the node must connect to another chain to be of any use. This document covers the steps to configure and install an Ethereum SON.
The following steps outline the manual installation of an ETH-SON.
Build Peerplays node
Primary Requirements
Installation steps
Peerplays ETH-SON configuration
Start the SON
Before we begin, note that setting up a SON node requires about 110 PPY. This pays for an upgraded account (5 PPY) and funds two vesting balances (50 PPY each). The remaining funds cover various transaction fees while setting up the node. Please see Obtaining Your First Tokens for more info.
Note that these fees will likely change over time as recommended by the Committee of Advisors.
The detailed steps to build the Peerplays node are explained in the readme file below. Click the link to follow the steps.
It covers the initial steps in bringing up the node, including the latest Ubuntu installation and its software dependencies, building Peerplays, building Docker images, starting and upgrading a Peerplays node, wallet setup, and finally witness node creation.
A Peerplays account to act as the SON operator.
An Ethereum account to join the ETH SON multisig on the ETH chain. The account can be created using MetaMask or a similar wallet.
An accessible Ethereum API node to communicate with the Ethereum network.
The SON node installation steps are explained in the section below. Click the link based on your preference.
The generated config.ini file will be located at /home/ubuntu/witness_node_data_dir/config.ini. We'll begin by editing this config file.
The config file is large, so only the relevant section is covered here. This section contains all the SON-related configuration. Ensure the following config settings are in the config.ini file under the peerplays_sidechain plugin options.
Make sure to add the peerplays_sidechain plugin alongside the existing plugins in the config.ini file. Find the plugins option in the initial section of the config.ini file and add the peerplays_sidechain plugin to the list as shown below.
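For example, if the generated file enables only the witness plugin, the line becomes (the other plugin names in your file may differ):

```ini
# Space-separated list of plugins to enable
plugins = witness peerplays_sidechain
```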
To enable the Ethereum sidechain, replace 0 with 1 in the option below. By default, the value is "0" in the config.ini file.
After setting up the config.ini file for SON operation, we'll start the node back up.
The SON node is up, and it's time to play around!
Up until this point we have been running the node in the foreground, which is fragile and inconvenient. So let's instead start the node as a service when the system boots up.
After that, it would be smart to create a backup server to enable you to make software updates, troubleshoot issues with the node, and otherwise take your node offline without causing service outages.
Now you have a SON, but have you thought about becoming a Witness? It will be a piece of cake for you since you've already set up a SON.
If you have a node that is accessible from the internet (for example, an API or Seed node) it would be wise to enable SSL connections to your node.
SON: Sidechain Operator Node - An independent server operator which facilitates the transfer of off-chain assets (like Bitcoin or Ethereum tokens) between the Peerplays chain and the asset's native chain.
Witness: An independent server operator which validates network transactions.
Witness Node: Nodes with a closed RPC port. They don't allow external connections. Instead these nodes focus on processing transactions into blocks.
Ethereum: Ethereum is a decentralized blockchain with smart contract functionality. Ether is the native cryptocurrency of the platform.
The Bookie Oracle System, or BOS, is a unique decentralized sports feed oracle system originally designed for the BookiePro dApp.
Unlike traditional centralized sports betting applications that rely on perhaps one or two data feeds, Bookie, through BOS, has the potential for almost unlimited data feeds through the use of data proxies.
At its simplest, a data proxy is just middleware that consumes its own data feed (usually from a commercial supplier), then parses and normalizes that data before sending it to BOS.
Since BOS supports multiple data proxies, it therefore supports multiple data feed providers.
This is where BOS becomes truly decentralized. There is no single source of truth as far as the sports data is concerned. BOS, either automatically or through manual intervention by Witnesses, requires a consensus of at least two approvals in most cases before processing incident/event data.
For example:
A soccer game takes place between two teams and there are three data proxies interfaced to BOS.
Each data proxy sends the result of the game to BOS, two send the result as 2 - 1, but one sends the result as 2 - 2. In this instance BOS will process the result as 2-1 because at least two data proxies agree that this is the correct score.
However, if only two data proxies were reporting to BOS, and they gave different scores, that game would not get automatic approval and instead would require a manual proposal by the Witnesses using the manual intervention tool (MINT).
An overview of some of the elements in this diagram:
bos-auto This service provides the endpoints that receive incidents from Data Proxies, the triggers that distinguish incidents according to their information, and a worker that processes the triggers and incidents and synchronizes them to the Peerplays blockchain by means of bos-sync and BookieSports.
bos-mint The Manual Intervention Module (MINT) provides a web interface for Witnesses to manually intervene in the otherwise fully-automated process of bringing Bookie Events, BMGs, and Betting Markets to the Peerplays blockchain (through bos-auto). This allows Witnesses to handle any edge cases that may arise and cannot be dealt with by bos-auto.
python-peerplays This is a communications library which allows interface with the Peerplays blockchain directly and without the need for a cli_wallet. It provides a wallet interface and can construct any kind of transactions and properly sign them for broadcast.
bookiesports bookiesports is essentially a set of rules and recommendations about the sports, leagues, competitions, and betting markets that should be offered on Bookie. bookiesports also provides configuration information regarding betting market formats, along with rules and grading algorithms used to settle markets on Bookie. Use of bookiesports allows Bookie to provide a coherent product offering that meets the expectations of the sports betting consumer. bookiesports is also used by Data Proxies for standardization of Sport, Event Group, and team/competitor names.
bos-sync The bos-sync module writes to the blockchain either by creating a proposal or by approving an existing proposal. This module is the heart of BOS and provides a library with an easy-to-use programming interface that hooks straight into bookiesports.
bos-incidents This module stores incoming incidents from Data Proxies and is also integrated with bos-mint (MINT). This module integrates a database that persistently stores incidents and allows the tracking of status changes.
For more information on MINT see:
Redis is an open source, in-memory data structure store, used as a database, cache and message broker.
This document explains how to install Redis (as root/sudo).
To install Redis run the following commands:
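On Ubuntu, Redis is typically installed from the distribution packages (a sketch):

```shell
sudo apt update
sudo apt install -y redis-server
```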
Warning: At this point it's crucial to set the default witness node to your own server (ideally running in localhost; see the config.yaml below) using peerplays set node ws://ip:port. If this step is skipped, the setup will not work or, at best, will work with very high latency.
Since your Witness account is going to create and approve proposals automatically, you need to ensure that the Witness account is funded with PPY.
We now need to configure bos-auto:
The variables are described below:
The following options need to be set:
node: ws://localhost:8090. If not running a local installation, change this to any Testnet (Beatrice) API node.
network: beatrice. Only change this if you're not using this Testnet.
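Putting those two options together, the relevant config.yaml fragment would look like:

```yaml
node: ws://localhost:8090
network: beatrice
```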
Important: Make sure you set a Redis password during the Redis installation.
Now that bos-auto has been configured we want to make sure it works correctly. To do this, we need to start two processes:
An endpoint that takes incident reports from the data proxy and stores them in MongoDB as well as issues work for the worker via Redis.
The worker then takes those incidents and processes them.
Note: It is recommended to run both via system services.
The commands shown are for the production installation; for the debug installation, replace “bos-auto” with “python3 cli.py”.
Note: Former installations also required running the scheduler as a separate process. This is no longer necessary; it is now spawned as a subprocess.
This is a basic setup and uses the flask built-in development server, see Production Deployment below.
Important: Before executing the next command, make sure that your node is set to the correct environment. For example, if the installation is for Testnet (Beatrice), run peerplays set node <Beatrice Node>, where <Beatrice Node> is any Beatrice API node.
After this, if it's set up correctly you'll see the following messages:
INFO | Opening Redis connection (redis://localhost/6379) * Running on http://0.0.0.0:8010/ (Press CTRL+C to quit)
This means that you can now send incidents to http://0.0.0.0:8010/.
You can test that the endpoint is properly running with the following command:
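The original command was stripped from this page; a simple check with curl (host and port taken from the log line above) is:

```shell
curl http://localhost:8010/isalive
```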
If the endpoint is running, the API daemon will print the following line:
At this point, we are done with setting up the endpoint and can go on to setting up the actual worker.
Data proxies are interested in this particular endpoint as they will push incidents to it. This means that you need to provide them with your IP address as well as the port that you opened above.
For more information on Data Proxies see:
The endpoint has an isalive call that should be used for monitoring:
which produces an output like:
Of interest here are the listed versions and queue.status.default.count.
The count should be zero most of the time; it reflects how many unhandled incidents are currently in the cache.
Going into production mode, a Witness may want to deploy the endpoint via uWSGI, create a local socket, and hide it behind an SSL-enabled nginx that serves a simple domain instead of an ip:port pair, like https://dataproxy.mywitness.com/trigger.
Important: At this point it's crucial to set the default Witness node to your own server (ideally running in localhost) using peerplays set node ws://ip:port. If this step is missed, the setup will not work or, at best, will work with very high latency.
Start the worker with the following commands:
It will try to use the provided password to unlock the wallet and, if successful, return the following text:
Nothing else needs to be done at this point.
Important: For testing, we highly recommend that you set the nobroadcast flag in config.yaml to True.
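For example, in config.yaml:

```yaml
# Do not broadcast transactions to the blockchain while testing
nobroadcast: True
```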
For testing, we need to send a properly formatted incident to the endpoint. The following is an example of the file format:
Note: Because incident data changes all the time and quickly goes out of date, the actual contents of this file are unlikely to work. At the time of testing, reach out to PBSA for up-to-date incident data.
Store the incidents in a file called replay.txt and run the following call:
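The exact call was stripped from this page. One way to post the stored incident is with curl (the JSON content type and port are assumptions; the bos-auto tooling may provide its own replay command):

```shell
curl -X POST -H "Content-Type: application/json" \
  -d @replay.txt http://localhost:8010/trigger
```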
Note the trigger at the end of the endpoint URL.
This will show you the incident and a load indicator at 100% once the incident has been successfully sent to the endpoint.
Your endpoint should return the following:
And your worker should return something along the lines of the following (once for each incident above):
Tip: Each incident results in two work items, namely a bookied.work.process() call as well as a bookied.work.approve() call. The former does the heavy lifting and may produce a proposal, while the latter approves proposals that we have created on our own.
With the command line tool, we can connect to the MongoDB and inspect the incidents that we inserted above:
Where [Begin Date] and [End Date] specify the date range to pull incident data from.
The output should look like:
It tells you that two incidents for that particular match came in that both proposed to create the incident. The status tells us that the incidents have been processed.
We can now read the actual incidents with:
And replay any of the two incidents by using:
Tip: For more information on BOS supported commands run:
bos-auto --help or bos-incidents --help
Your worker should now be started.
Let's use Ethereum! 😃
With Ethereum, the flow looks like this:
bookiesports.datestring.date_to_string(date_object=None)
Returns an RFC 3339-conformant string representation of a date; the date can also be given as a str in YYYY-mm-dd HH:MM:SS format.
bookiesports.datestring.string_to_date(date_string=None)
Assumes an RFC 3339-conformant string and creates a date object.
exception bookiesports.exceptions.SportsNotFoundError
Bases: Exception
exception bookiesports.normalize.EventGroupNotNormalizableException
Bases: bookiesports.normalize.NotNormalizableException
class bookiesports.normalize.IncidentsNormalizer(chain=None)
Bases: object
This class serves as the normalization entry point for incidents. All events / event group and participant names are replaced with the counterpart stored in the BookieSports package.
DEFAULT_CHAIN = 'beatrice'
The default chain chosen for BookieSports.
NOT_FOUND = {}
A class variable providing one stream for missing normalization entries.
NOT_FOUND_FILE = None
If normalization errors should be written to a file, set the file here.
normalize(incident, errorIfNotFound=False)
static not_found(key)
static use_chain(chain, not_found_file=None)
exception bookiesports.normalize.NotNormalizableException
Bases: Exception
exception bookiesports.normalize.ParicipantNotNormalizableException
Bases: bookiesports.normalize.NotNormalizableException
exception bookiesports.normalize.SportNotNormalizableException
Bases: bookiesports.normalize.NotNormalizableException
The installation of BookieSports is very straightforward.
In the environment of bos-auto, for a new installation run:
pip3 install bookiesports
or for an existing installation run:
pip3 install bookiesports --upgrade
Then restart services.
The latest code can be found here:
class bookiesports.BookieSports(chain=None, override_cache=False, **kwargs)
Bases: dict
This class allows reading the data provided by the BookieSports package.
On instantiation of this class the following procedure happens internally:
Open the directory that stores the sports
Load all Sports
For each sport, load the corresponding data subset (event groups, events, rules, participants, etc.)
Validate each data subset
Perform consistency checks
Instantiate a dictionary (self)
As a result, the following call will return a dictionary with all the BookieSports:
Parameters:
Note: It is possible to override the default sports_folder by providing a custom one to BookieSports as a parameter.
BASE_FOLDER = '/home/docs/checkouts/readthedocs.org/user_builds/bookiesports/envs/latest/lib/python3.7/site-packages/bookiesports-0.4.10-py3.7.egg/bookiesports/bookiesports'
CHAIN_CACHE = {}
A singleton to store data and prevent re-reading if BookieSports is instantiated multiple times.
DEFAULT_CHAIN = 'beatrice'
JSON_SCHEMA = None
Schema for validation of the data
SPORTS_FOLDER = None
chain_id
static list_chains()
static list_networks()
@deprecated, use list_chains
network
@deprecated, use self.index
network_name
@deprecated, use self.chain
static version()
After successful update of BOS, Witnesses should perform the following actions to synchronize Mainnet with the latest version of BookieSports.
Synchronization should be done as follows:
Open MINT.
Go to the MINT /proposals page (see below).
Click on the thumbs-up-and-down icon on the MINT top bar.
Approve the sync proposal by clicking the green 'approve' button.
Enter your password if prompted.
Note: Only after 50%+1 of Witnesses have approved the proposal will it be executed and disappear from MINT, so subsequent Witnesses will not be able to see the proposal.
Note: Advanced features need to be enabled within the .YAML configuration file for MINT in order to use this feature.
Example .YAML File:
For more information on MINT see:
Some BookieSports files (in particular, name and description fields) allow the use of variables. These are dynamic and filled in automatically by bos-sync.
As an example, the file MLB_ML_1.yaml defines betting markets for a Moneyline market group. The betting markets carry the name of the event participants. We encode this in BookieSports using variables:
teams:
{teams.home}: Home team
{teams.away}: Away team
result:
{teams.home}: Points for home team
{teams.away}: Points for away team
{teams.hometeam}: Points for home team
{teams.awayteam}: Points for away team
{teams.total}: Total Points
handicaps:
{teams.home}: Comparative (symmetric) Handicaps (e.g., +-2) for home team
{teams.away}: Comparative (symmetric) Handicaps (e.g., +-2) for away team
{teams.home_score}: Absolute handicap for home team (e.g., 2)
{teams.away_score}: Absolute handicap for away team (e.g., 0)
overunder:
{teams.value}: The over-/under value
The variable parsing is done in bos-sync (substitutions.py) and works through decode_variables and a few classes that deal with the variables. This allows us to have complex variable substitutions.
The variables all consist of a module identifier and the actual member variable:
All modules are listed in the substitutions variable in decode_variables:
The modules themselves (capitalized first letter) are defined in the same file and can be as easy as:
or as complex as:
chain (string) – one of ‘alice’, ‘beatrice’, or ‘charlie’, identifying which network we are working with. Can also be a relative path to a locally stored copy of a sports folder.
override_cache (boolean) – if true, the cache is ignored and the sports folder is forcibly reloaded and put into the cache.
network (string) – deprecated, please use chain.
The Manual Intervention Module (MINT) provides a web interface for Witnesses to manually intervene in the otherwise fully-automated process of bringing Bookie Events, BMGs, and Betting Markets to the Peerplays blockchain (through bos-auto).
Occasionally, there may be a need for manual intervention in some element of BOS’s operation. One example would be when multiple Data Proxies are not able to provide the final result of a game (due to sustained connectivity issues or, further back down the chain, an issue with the reporting of the match to the third party data feeds). In this situation, a Peerplays Witness can use the Manual Intervention Module (MINT) which is part of the BOS suite.
MINT provides a web interface for Witnesses to manually intervene in the otherwise fully-automated process of bringing Bookie events and betting markets to Bookie. Some of the functions that MINT can be used to perform:
turn an event ‘in-progress’ (match has started) or a betting market ‘in-play’
freeze an event or betting market
cancel an event or betting market
settle a betting market (i.e. decide winners and losers)
It is important to remember that changes made using MINT do not automatically become ‘fact’ on Bookie. As with BOS’s automated operation, data sent by a particular Witness using MINT is representative of the ‘opinion’ of only that Witness. It requires a consensus amongst a simple majority (50%+1) of all Witnesses for that ‘opinion’ to be accepted as ‘fact’ on Bookie.
MINT also allows for the creation of new events and betting markets. This allows for Bookie to offer betting on longer-term markets like “Who will win the Superbowl?” at the start of a new NFL season (called ‘Futures’ in North America, or ‘Ante-Post’ betting in the UK). These markets do not lend themselves to fully automated management by BOS but are an important part of any sports betting offering.
The ability to create new games or events using MINT also opens up the possibility of Bookie offering ‘novelty’ betting markets where it is not feasible to implement fully-automated data management.
There are five triggers, also called incidents, that control the flow of data to BOS from each Data Proxy.

| Trigger | Meaning |
| --- | --- |
| create | <game> created |
| in_progress | <game> started |
| finish | <game> finished |
| result | <game> score |
| canceled | <game> canceled / postponed / abandoned |
Where <game> represents any game in the context of BookiePro. It's worth calling this out because, although right now we only think of Data Proxies in the context of BookiePro and therefore sports data, a Data Proxy could in theory send other types of data. For example, as long as BOS receives a 'winner' and a 'loser', that data could be anything from the outcome of a coin toss to the winner of a general election!
Note: Sports, EventGroups, Betting Market Groups (BMGs) and Betting Markets are automatically created via bookie sync. Only events are affected by the Data Proxy.
Each Data Feed Provider will send data to a Data Proxy according to its own rules, and not necessarily in a direct mapping to the triggers BOS expects. For example, a DFP might send a single message for both a finished game and the result.
It would then be the job of the Data Proxy to re-format this data into the two messages that BOS is expecting.
DFPs will also use their own status codes for the events that they send, and these status codes won't match the triggers that BOS uses. Once again, it is the job of the Data Proxy to normalize this data. The following is an example of the status codes used by a DFP:
Using the triggers supported by BOS, the Data Proxy must then map these codes in a manner similar to the following:
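The example mapping was stripped from this page. A hypothetical mapping, with invented DFP codes purely for illustration, might look like:

```yaml
# Hypothetical DFP status codes mapped to BOS triggers
LIVE: in_progress   # game under way
FT:   finish        # full time; followed by a separate result incident
POST: canceled      # postponed
ABD:  canceled      # abandoned
```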
Note: The create incident is only sent when a manual data replay is requested. It is not a trigger that is automatically sent from the Data Proxy software.
We can see from the above mappings that canceled events come in many forms, but all are pushed to BOS as a canceled incident. However, depending on the circumstances, a second incident may need to be sent as follows:
Game was abandoned - Typically this means the game will be re-scheduled. If it is, a new create incident will be sent with the re-scheduled date.
Game was postponed - By definition this means the game will be re-scheduled, so a new create incident will also be sent with the re-scheduled date.
Game was interrupted - Usually this means there is a delay and the game will re-start at some point. From a BOS perspective no additional incident is required; BOS will simply interpret the game as taking longer to finish, so the next incident it expects is a finish.
What we really mean by 'sending to subscribers' is 'sending to the BOS system for all Witnesses subscribed to Data Proxies'.
In essence this is Data Proxies' ultimate purpose. We've already discussed the fact that there is no common format in the data sent from the DFPs, but the data sent from each Data Proxy to each BOS instance has to be in the same format. This is what we'll discuss next.
The Data Proxy provides an HTTP endpoint for monitoring purposes. Assuming the Data Proxy is deployed on localhost:8010, the URL is localhost:8010/isalive.
The response has a JSON body with three main contents:
status: string flag, either "ok" or "nok"; the general state of the proxy.
subscribers: list of dictionaries containing information on each subscriber. Contains a status flag as well.
providers: list of dictionaries containing information on each provider. Contains a status flag as well.
This isalive endpoint can be called from localhost and from anywhere.
Important: No identifiable information on providers or subscribers is published when the endpoint is queried from outside. Details are only added when it is called from localhost.
Note: Dependencies must be installed as root/sudo.
Note: virtualenv is a best practice for Python, but installation can also be done at the user or global level.
Note: Databases must be installed as root/sudo
MINT uses a local SQLite database, but its dependencies require the MySQL client libraries (running a MySQL server instance is not required). Assuming an Ubuntu 16.04 or later operating system, install:
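The package list was stripped from this page; on Ubuntu the dependency is typically satisfied with the MySQL client development package (the package name is an assumption):

```shell
sudo apt update
sudo apt install -y libmysqlclient-dev
```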
Note: bos-mint should be installed as user
You can either install bos-mint via PyPI / pip3 (production installation) or via git clone (debug installation). For production use, installing bos-mint via pip3 is recommended, but the Git master branch is always the latest release as well, making both installations equivalent.
For debug use, checkout from GitHub (master branch) and install dependencies manually:
bos-mint is supposed to run in the virtual environment. Either activate it beforehand, as shown above, or run it directly from the env/bin folder.
Note: bos-mint should be upgraded as user
For the production installation, to upgrade to the latest version including all dependencies, run:
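A sketch of the upgrade command (the flags assume a per-user installation):

```shell
pip3 install --user --upgrade bos-mint
```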
For debug installation, pull latest master branch and upgrade dependencies manually:
The next step is to configure bos-mint.
Default configuration only requires the following:
Possible override values are:
To run MINT in debug mode use:
The output that you see should contain:
The above setup is basic and for development use. Going forward, a Witness may want to deploy UWSGI with parallel workers for the endpoint.
MINT is purposely run on localhost to restrict outside access. Securing a Python Flask application against malicious break-in attempts is tedious and would be an ongoing effort.
Important: The recommendation is to access it via an SSH tunnel or through a VPN.
Example for SSH tunnel:
Assume bos-mint is running on a remote server accessible via 1.2.3.4 and that you have SSH login credentials (password or private key). On the local machine that you'll use to access MINT via a web browser, open the tunnel:
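Using the example address above, the tunnel command is:

```shell
# Forward local port 8080 to 127.0.0.1:8001 on the remote host
ssh -f -N -L 8080:127.0.0.1:8001 user@1.2.3.4
```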
-f : Send the process to the background.
-N : Do not execute remote commands (use when the SSH connection is needed only for tunnelling).
-L : Port mapping (port 8080 on your machine is forwarded to 127.0.0.1:8001, where MINT runs).
Now you can open MINT in your browser using the address http://localhost:8080.
After starting MINT, use your favourite desktop browser to access it. You'll be asked to enter your Witness key, which will be stored encrypted in the local Peerplays wallet.
Note: MINT is not optimized for mobile use yet.
For MINT development, check out the latest repository from:
and then run:
| Text | Type | Comments |
| --- | --- | --- |
| [title] | Dynamic | Default is CouchPotato but can be configured using the |
| [icon/Image] | Dynamic | The icon or image shown in the top left of the header. Configured using the |
| [version] | Dynamic | The version number/value of the release. Configured using the version property in the |
| Local Time | Static | |
| [time] | Dynamic | The current time in the format [hh:mm:ss]. Time changes every second. |
| [username] | Dynamic | The (user)name of the logged in user. |
| Caption | Type | Action |
| --- | --- | --- |
| 👤 | Icon | Open the Account Menu. |
| Replay | Button | Open the Replay screen. |
The account menu is accessed by clicking on the user icon at the far right of the header.
The account menu has two features.
Click on the Change Password menu item to open the Change Password screen
Click on the Log Out menu item to log out of the application and return to the Home Page.
Note: The Account Menu will automatically close after 10 seconds if it isn't used.
The leagues tabs run vertically down the left side of the dashboard, displaying one tab for each league that is configured for the selected sport. The tabs are dynamic and are configured through the MySQL database Leagues table.
The order in which the leagues tabs are displayed is defined by their id value in the Leagues table.
There is no limit on the number of leagues tabs that can be created. If the tabs reach the vertical limit of the application, they will stack into multiple columns. Realistically, there should never be so many leagues enabled at any one time as to cause the tabs to stack.
Important: The leagues tabs must be 100% configurable through the database only. Leagues must be added or removed without any code changes.
Clicking on any unselected tab will change the calendar display to show only events for the selected sport and league.
Text | Type | Comments |
[league name] | Dynamic | Value set in the Leagues table. |
[icon] | Dynamic | Path and name defined in the icon column of the Leagues table. The icon itself must exist in the corresponding asset/imgs/leagues folder in the application. |

Caption | Type | Action |
[League] | Text | Change the calendar to the selected league of the selected sport. |
Before using Couch Potato every user must create an account.
To create an account, first click on the Create Account link on the Home Page.
You'll then see a screen like the following, with your unique Data Proxy name.
Next enter a User Name between eight and 24 characters and a password between eight and 40 characters. Finally, re-enter the password to confirm it and click on the REGISTER button.
If all fields are valid you'll be returned to the Home Page to log in with your new account credentials.
You can exit this screen at any time by clicking on the X at the top right.
The notifications panel is displayed on the right side of the dashboard and is where all notifications (reminders) will be posted for all games about to start or finish.
The notifications will be refreshed at a configurable millisecond interval set in the notifications->delay property in config-dataproxy.json. The default will be 3,000 (3 seconds).
Each notification will take the form of a 'note' which will have the following information:
The colour of the notes is very important and must be visibly obvious. The colour of the note is set by the following criteria:
Green
Any game that is in the range 30 - 15 minutes to its scheduled start time.
Any game that is in the range 30 - 15 minutes from its predicted end time.
Amber
Any game that is in the range 1 - 14 minutes to its scheduled start time.
Any game that is in the range 1 - 14 minutes from its predicted end time.
Red
Any game that should have started according to its scheduled start time.
Any game that is in the range 30 - 15 minutes to its predicted end time.
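The start-time colouring rules above can be sketched as a small function. This is an illustrative sketch only, not the application's actual code (the function name and signature are assumptions); the end-time rules follow the same banding against the predicted end time.

```python
def note_colour(minutes_to_start):
    """Colour of a start reminder note, following the start-time rules above.

    minutes_to_start is the number of minutes until the scheduled start;
    zero or negative means the start time has already passed.
    """
    if minutes_to_start <= 0:
        return "red"      # the game should already have started
    if minutes_to_start <= 14:
        return "amber"    # 1 - 14 minutes to the scheduled start
    if minutes_to_start <= 30:
        return "green"    # 30 - 15 minutes to the scheduled start
    return None           # too far from the start to warrant a note
```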
The notifications rely on the start and end times of each game. The start time is taken as the time entered for the game when it was created; this is the only value that can be used.
The end time is more complicated because for many sports it's very hard to predict when a game ends because of time-outs, extra-time etc. For example, a game of soccer is much more predictable because the clock doesn't stop during play. So a game is likely to be two halves of 45 minutes, 15 minutes of half-time and perhaps 5 minutes of extra time, so 45+45+15+5 = 110 minutes.
However, a game of football, even though it's four quarters of 15 minutes, has time-outs and regular clock stops, so the time the game will finish is a very broad average.
Note: Because the end times are largely unpredictable the notification for game finishes should say "might have finished" rather than "should have finished"
Each note is tied to a game and as such will be automatically removed from the notification panel as soon as the status of the game is updated. For example, if a note states that game x "SHOULD HAVE STARTED", then as soon as the game is started from the game selector, the note will be removed.
To keep things tidy, notes can be set to be automatically removed after a set interval (in hours). This means that if a game hasn't been started, the warning note will be removed instead of still appearing after [x] days, when there would be no point updating the status of the game.
The number of hours after which a note should be removed is set in the notifications->end property in config-dataproxy.json. The default value is 240 (10 days).
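Putting both settings together, the notifications section of config-dataproxy.json would contain something like the following. The surrounding structure of the file is an assumption here; the property names and default values are taken from the text above.

```json
{
  "notifications": {
    "delay": 3000,
    "end": 240
  }
}
```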
Each note is 'clickable', and when any note is clicked on it will automatically open the game selector for the selected date / league / sport combination. The game in the selector, that corresponds to the game on the selected note, will be highlighted for easy identification. For more information see:
The duration of any sport is set in the duration column of the table. The default values are based on the accepted average durations for these sports.
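As an illustration of how a predicted end time can be derived from the scheduled start plus the sport's duration value (this assumes the duration column holds minutes; the function is a sketch, not application code):

```python
from datetime import datetime, timedelta

def predicted_end(start_time: datetime, duration_minutes: int) -> datetime:
    """Predicted end of a game: the scheduled start plus the sport's
    average duration (the value held in the duration column)."""
    return start_time + timedelta(minutes=duration_minutes)

# The soccer estimate from the text: 45 + 45 + 15 + 5 = 110 minutes.
kickoff = datetime(2021, 6, 1, 15, 0)
print(predicted_end(kickoff, 110))  # 2021-06-01 16:50:00
```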
Text/Image | Type | Comments |
[icon] | Dynamic | The icon associated with the league for the game. |
[start date/time] | Dynamic | The start date and time of the game |
[sport] ([league]) | Dynamic | The sport and league of the game |
[home team] v [away team] | Dynamic | The home and away teams |
[Status] | Dynamic | The time in minutes until start or end and the following text according to the rules above: |
The following user guide will step you through the process of using the Couch Potato application to create games, enter scores, and finish a game.
For information on using the Couch Potato API to create your own data proxy see the
Couch Potato is a web based application that requires no installation or dependencies beyond needing access to the web site. Access to the web site is restricted to those users granted permission after consultation with PBSA.
The application is designed to be very easy to use while at the same time reducing the chances of bad data being entered: as much as possible, only data selected from lists can be entered, keeping free text entry to an absolute minimum.
For example, Couch Potato won't let you enter an invalid basketball team for NBA because you can only select from a list of teams that has already been determined to be correct.
The two absolute prerequisites of being a Couch Potato user are the ability to enter accurate data and to enter it in a timely manner. To this end, the interface includes a notification feature that will post reminders when games are due to start or about to finish.
The home page is the first page you'll see and from where you can log in or create an account.
Note: If you've set up your own Couch Potato server then the url for the application will be at your discretion, but if you're using a hosted version then PBSA will supply you with a url.
If you already have an account then enter your User name and Password and click on the Login button. Once your log in credentials are verified you'll be taken to the Dashboard.
If you don't have an account then click on the Create Account link and you'll be taken to the Create Account screen where you'll be able to create your new account.
The replay screen is displayed by clicking on the REPLAY
button on the Dashboard header.
The purpose of the Replay feature is to give you a manual way to send, or re-send, game create incidents to all of the BOS endpoints if for any reason they weren't correctly sent before.
Normally you won't need to use this feature very often as a create incident is automatically sent every time a game is created. But there could be occasions when the application correctly records a game as being created but the information isn't recorded by the BOS nodes. If that happened then running a Replay will 'flush' all the games between the start and end dates and send create incidents to the BOS nodes a second time.
Important: The Replay feature can only be used for games that are not yet started. Once a game is started a new create incident would be ignored.
You can select sports and leagues individually using check-boxes, or select / de-select all sports and leagues by using the Select All checkbox/toggle.
Set the range of data to be replayed by entering values in the Start and End fields.
The Change Password screen is opened by clicking on the Change Password menu item in the account menu.
First enter your current password and then enter a new password and confirm the new password. The same validation rules apply as when creating a new account.
The New Password must be between eight and 40 characters.
If all fields are valid you'll be returned to the Dashboard.
You can exit this screen at any time by clicking on the X at the top right of the screen.
The account menu is accessed by clicking on the user icon at the far right of the header.
The account menu has two features.
Click on the Change Password menu item to open the Change Password screen
Click on the Log Out menu item to log out of the application and return to the Home Page.
Note: The Account Menu will automatically close after 10 seconds if it isn't used.
The Game Selector is opened by clicking on the day cell of any calendar. The Game Selector is the engine behind all of the games that are created and then posted to the Bookie Oracle System and on to BookiePro.
The Game Selector is opened when you click on any day cell on the Calendar.
The Game Selector is used for creating new games and then starting them, adding scores, and finally finishing them.
You can also use the Game Selector to Cancel or Delete games.
To add a new game, use the input fields at the bottom of the screen and then click on the ADD button.
The Away Team and Home Team dropdown lists will only display valid teams for the selected sport / league combination.
It is permitted to add a start time that's in the past as a game could start earlier than expected. However, if this is the case then the game needs to be started as soon as possible.
Note: There is no check on whether the same match is added twice. The reason for this is that in some sports it's common to have a 'double-header', so two matches on the same day is perfectly acceptable.
Note: The score input fields are disabled until a game is started.
As soon as the game is added you'll see it in the game list with any other games scheduled for the same day.
To start a game, click on the Start button next to the game in the game list. The game status will change to In Progress.
Important: You must start a game as close as possible to the ACTUAL start time of the game; games seldom start at the scheduled time. This is the time that's recorded as the 'whistle start time' and the time that BOS will compare with start times reported by other data proxies.
Once a game has started you can't delete it, but you can still cancel it.
To finish a game, enter the score for both home and away teams and click on the Finish button next to the game. The game status will change to Finished.
Important: You must finish a game as close as possible to the ACTUAL time that the game finishes. This is the time that's recorded as the 'whistle end time' and the time that BOS will compare with finish times reported by other data proxies.
Once a game is finished it's no longer possible to cancel it.
Note: It's important that scores are entered correctly the first time as it's not possible to correct scores and re-submit them.
You can cancel a game as long as it's in either a Not Started or In Progress status.
To cancel a game, click on the Cancel text next to the game.
A confirmation message similar to the following will be shown.
Click on Yes to cancel the game (the game status will then change to Canceled) or No to return without canceling.
A canceled message will be sent to BOS.
Note: A canceled game can also be interpreted as postponed but not as delayed. A delayed game is expected to restart, but once a game has been canceled it can't be restarted. If a game is canceled and then played the following day, it would have to be re-created with the new start time.
A game can only be deleted if it hasn't been started (has a status of Not Started).
To delete a game click on the Delete text next to the game.
A confirmation message similar to the following will be shown:
Click on Yes to delete the game (the game will be removed) or No to return without deleting.
If you delete a game then a canceled message will also be sent to BOS so that BOS can tag the game in the same way as a canceled game.
Note: The difference between a canceled game and a deleted game is that a deleted game is basically a game that was entered in error; once deleted it is removed from the database so it can be re-entered correctly if needed. A canceled game is a game that for one reason or another doesn't take place after being created.
The following is a list of the tables in the Couch Potato database:
Legend:
PK - Primary Key
NN - Not Null
AI - Auto Increment
None.
None
None
None
None
None
None
The dashboard is the main screen from where you can navigate around the features of the application in preparation for inputting data.
The dashboard is best thought of as five feature sections:
Header
Sport Tabs
League (event Group) Tabs
Calendar
Notifications
The dashboard header is shown at the top of the screen and is non-scrollable. That is to say, if you run the application on a small display such that you have to scroll up and down to see all of the dashboard, the header is always 'pinned' to the top of the screen.
The features of the header are:
Application version number.
Real time clock.
User name.
The sports tab runs horizontally across the dashboard and displays one tab for each sport that is enabled. The tabs are dynamic and configured through a database table that can have new sports added or deleted at any time.
There is no limit on the number of sports tabs that can be created. If the tabs reach the horizontal limit of the application then they will stack into multiple rows.
By clicking on any tab you will:
Update the Leagues Tabs to show only the leagues associated with the selected sport.
Change the calendar display to show only events for the selected sport and league.
When you select a sports tab the league will default to the first one in the list.
The leagues tab runs vertically down the left side of the dashboard and displays one tab for each league that is configured for the selected sport. The tabs are dynamic and configured through a database table so that new leagues can be added or deleted at any time.
There is no limit on the number of leagues tabs that can be created. If the tabs reach the vertical limit of the application then they will stack into multiple columns.
If you click on any of the tabs the calendar will show only events for the selected sport and league.
The calendar component is the main 'engine' of the application. It's here that you'll navigate, enter and select new games.
The calendar dynamically creates a month plan for each month selected using the forward (>) and backward (<) selectors. There is no limit on the number of months/years that can scrolled through.
As you move the cursor over any cell it'll be highlighted, and the current day is displayed as a solid number.
If a day has at least one game scheduled then the crest for the league associated with the current calendar will be shown in that day cell.
If a day has at least one game scheduled then a badge for the total number of games will be shown in that day cell.
To enter new games click on any day cell and the game selector for that sport / league / date combination will be shown.
The notifications panel is displayed on the right side of the dashboard and is where all notifications (reminders) will be posted for any games about to start or finish.
Each notification takes the form of a 'note' which has the following attributes:
The colour of the notes is very important and must be visibly obvious. The colour of the note is set by the following criteria:
Green
Any game that is in the range 30 - 15 minutes to its scheduled start time.
Any game that is in the range 30 - 15 minutes from its predicted end time.
Amber
Any game that is in the range 1 - 14 minutes to its scheduled start time.
Any game that is in the range 1 - 14 minutes from its predicted end time.
Red
Any game that should have started according to its scheduled start time.
Any game that is in the range 30 - 15 minutes to its predicted end time.
The notifications rely on the start and end times of each game. The start time is taken as the time entered for any game when it was created.
The end time is more complicated because for many sports it's very hard to predict when a game ends because of time-outs, extra-time etc. For example, a game of soccer is much more predictable because the clock doesn't stop during play. So a game is likely to be two halves of 45 minutes, 15 minutes of half-time and perhaps 5 minutes of extra time, so 45+45+15+5 = 110 minutes.
However, a game of football, even though it's four quarters of 15 minutes, has time-outs and regular clock stops, so the time the game will finish is a very broad average.
Each note is tied to a game and as such will be automatically removed from the notification panel as soon as the status of the game is updated. For example, if a note states that game x "SHOULD HAVE STARTED", then as soon as the game is started, using the game selector, the note will be removed.
To keep things tidy, notes can be set to be automatically removed after a set interval (in hours). This means that if a game hasn't been started, the warning note will be removed instead of still appearing after [x] days, when there would be no point updating the status of the game.
You can set the number of hours after which a note should be removed in the notifications->end property in config-dataproxy.json. The default value is 240 (10 days).
Each note is 'clickable', so if you click on it you will automatically open the game selector for the selected date / league / sport combination. The game in the selector, that corresponds to the game on the selected note, will be highlighted for easy identification. For more information see:
The dashboard is opened as soon as you log in from the .
Data Replay button. See .
User icon for opening the account menu. See .
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
timestamp | DATETIME | ❌ | ✅ | ❌ | CURRENT_TIMESTAMP |
status | VARCHAR(4) | ❌ | ✅ | ❌ | |
subcode | VARCHAR(4) | ❌ | ✅ | ❌ | |
title | VARCHAR(255) | ❌ | ✅ | ❌ | |
message | VARCHAR(1000) | ❌ | ❌ | ❌ | NULL |
url | VARCHAR(255) | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
user | INT(11) | ❌ | ✅ | ❌ | |
league | VARCHAR(45) | ❌ | ✅ | ❌ | |
date | DATE | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
fk_user_idx | INDEX | user | ASC |
fk_league_idx | INDEX | league | ASC |

Foreign Key | Referenced Table | Column | Referenced Column |
fk_user | 'couch_potato'.'users' | user | id |
fk_leagues | 'couch_potato'.'leagues' | league | name |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
user | INT(11) | ❌ | ✅ | ❌ | |
event | INT(11) | ❌ | ✅ | ❌ | |
hometeam | VARCHAR(100) | ❌ | ✅ | ❌ | |
awayteam | VARCHAR(100) | ❌ | ✅ | ❌ | |
starttime | VARCHAR(12) | ❌ | ✅ | ❌ | |
homescore | INT(11) | ❌ | ❌ | ❌ | NULL |
awayscore | INT(11) | ❌ | ❌ | ❌ | NULL |
whistle_start_time | VARCHAR(32) | ❌ | ❌ | ❌ | NULL |
whistle_end_time | VARCHAR(32) | ❌ | ❌ | ❌ | NULL |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id, user | ASC |
user_idx | INDEX | user | ASC |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
timestamp | DATETIME | ❌ | ✅ | ❌ | CURRENT_TIMESTAMP |
type | VARCHAR(12) | ❌ | ✅ | ❌ | |
url | VARCHAR(255) | ❌ | ✅ | ❌ | |
uniqueid | VARCHAR(255) | ❌ | ✅ | ❌ | |
approveid | VARCHAR(255) | ❌ | ✅ | ❌ | |
message | VARCHAR(1000) | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
timestamp | VARCHAR(60) | ❌ | ✅ | ❌ | |
uniquename | VARCHAR(255) | ❌ | ✅ | ❌ | |
call | VARCHAR(20) | ❌ | ✅ | ❌ | |
message | JSON | ❌ | ✅ | ❌ | |
url | VARCHAR(255) | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
name | VARCHAR(45) | ❌ | ✅ | ❌ | |
sport | INT(11) | ❌ | ✅ | ❌ | |
icon | VARCHAR(64) | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id, name | ASC |
idx_name | INDEX | name | ASC |
fk_sport_idx | INDEX | sport | ASC |

Foreign Key | Referenced Table | Column | Referenced Column |
fk_sport | 'couch_potato'.'sports' | sport | id |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
game | INT(11) | ❌ | ✅ | ❌ | |
status | INT(11) | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
fk_status_idx | INDEX | status | ASC |

Foreign Key | Referenced Table | Column | Referenced Column |
fk_status | 'couch_potato'.'status' | status | id |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
name | VARCHAR(45) | ✅ | ✅ | ❌ | |
icon | VARCHAR(45) | ❌ | ✅ | ❌ | |
duration | INT(11) | ❌ | ❌ | ❌ | NULL |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
fk_leagues_idx | INDEX | name | ASC |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
name | VARCHAR(20) | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
name | VARCHAR(100) | ✅ | ✅ | ❌ | |
icon | VARCHAR(45) | ❌ | ✅ | ❌ | |
league | INT(11) | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
fk_teams_leagues_idx | INDEX | league | ASC |

Foreign Key | Referenced Table | Column | Referenced Column |
fk_teams_leagues | 'couch_potato'.'leagues' | league | id |
Column | Datatype | PK | NN | AI | Default |
id | INT(11) | ✅ | ✅ | ✅ | |
timestamp | DATETIME | ❌ | ✅ | ❌ | CURRENT_TIMESTAMP |
username | VARCHAR(45) | ❌ | ✅ | ❌ | |
salt | VARCHAR(128) | ❌ | ✅ | ❌ | |
password | CHAR(255) | ❌ | ✅ | ❌ | |
| VARCHAR(60) | ❌ | ✅ | ❌ | |

Index | Type | Columns | Order |
PRIMARY | PRIMARY | id | ASC |
Running a backup node on another server to prevent downtime
When running a node, it's important to minimize downtime. This can be difficult when you have to stop the node to perform upgrades to the software or to troubleshoot an issue. This is where having a backup server comes in handy.
This document will explain how to set up a backup server that runs in parallel to your main server. When necessary, you can quickly switch to your backup node so users will not experience any downtime. Then you can perform your maintenance and switch back once complete.
In this tutorial, it's assumed that you have a Witness node up and running according to the Witness node documentation. As such, you'll have your witness-id and public-private key-pair containing the signing key within the config.ini file. As an example, you might have the following in the config:
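The sample config referenced here appears to be missing from this page. A typical pair of entries looks like the following; the witness id and the key pair shown are placeholders, not real values:

```ini
# The ID of the witness controlled by this node
witness-id = "1.6.x"

# Tuple of [PublicKey, WIF private key] for the block signing key
private-key = ["PPY...PublicKeyPlaceholder...", "5...WifPrivateKeyPlaceholder..."]
```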
Just like the first time you installed a witness node, you'll need to do it again on a separate server. You can follow the same instructions provided in the witness node documentation.
The important part of the second install is to generate another signing key, different from the first. To do this, you'll use the suggest_brain_key command:
Which will return something like this:
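The sample output is missing from this page; suggest_brain_key returns a JSON object along these lines (all values below are shortened placeholders — never use keys you find in documentation):

```text
{
  "brain_priv_key": "SIXTEEN RANDOM DICTIONARY WORDS ...",
  "wif_priv_key": "5...WifPrivateKeyPlaceholder...",
  "pub_key": "PPY...PublicKeyPlaceholder..."
}
```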
And then you'll use the wif_priv_key in the output above in the config.ini file on the second server, for example:
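On the second server you keep the same witness-id but pair it with the newly generated key (placeholder values again):

```ini
witness-id = "1.6.x"
private-key = ["PPY...NewPublicKey...", "5...NewWifPrivateKeyFromSuggestBrainKey..."]
```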
At this point you'll see the following message on the second server if you were to run that node:
Since you haven't switched the nodes yet, that's what we expect.
To flip the node production, you'll run the update_witness command in the cli_wallet:
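Based on the update_witness parameter table later in this document (witness name, URL, new signing public key, broadcast flag), the call would look something like the following; the account name and key are placeholders, and the empty URL string leaves the existing URL unchanged:

```text
unlocked >>> update_witness my-witness-account "" "PPY...NewPublicKey..." true
```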
Soon after you'll see this message on your first server:
Your second server will now be producing blocks. You just pulled a quick switch from server 1 to server 2 without missing any blocks. Now you can safely shut down server 1 and update the code. Once the code is updated and the node is restarted, you can switch block signing back to server 1 by using the update_witness command again with the public key for server 1.
It's always a good idea to maintain a backup witness server for block production, in case of downtime on one server or the need to update code without shutting down your production node. A backup server will also be useful in case of an attack on the network.
Congrats! You updated your witness to use the new signing key!
A small amount of PPY tokens are required for paying transaction fees on the network. Here are a few ways to get some.
Using the Peerplays DEX, you can buy PPY with HIVE or HBD. PPY (the Peerplays coin) is the native asset of the Peerplays network. All exchanges in the Peerplays DEX are based on exchanging assets with PPY. PPY is the main store of value in Peerplays and is used to pay transaction fees on the network.
Setting up a Witness node requires about 15 PPY. This is to pay for an upgraded account (5 PPY) and to create a new witness (8 PPY). The remaining funds are to pay for various transaction fees while setting up the node (like voting for yourself!).
Note that these fees will likely change over time as recommended by the Committee of Advisors.
1. If you are an existing user with Peerplays, login to https://market.peerplays.com using valid credentials. If you're new to the exchange create an account to begin.
2. After successful login, select “Asset” from left pane to check the available assets (PPY, HIVE, HBD, and BTC).
3. If you have insufficient PPY tokens, they can be obtained in exchange for HIVE or HBD.
Prerequisites
a. A Hive blockchain account
b. Sufficient balance of HIVE or HBD in your Hive wallet for transfer
If you are a new user on the exchange, please create an account and ensure you have a sufficient balance of HIVE or HBD before the exchange process.
Steps to transfer funds from Hive wallet to https://market.peerplays.com account:
1. Login to the Hive wallet using https://wallet.hive.blog
2. Click on the Fund transfer Icon to begin the HIVE or HBD transfer.
3. Select the HIVE or HBD dropdown to choose desired action. In this case, choose the “Send” option.
4. In the next step, enter "son-account" (the Peerplays side-chain account) in the "to" field, input the amount of HIVE or HBD to be transferred, then enter your Peerplays account name in the memo section and click "Send" to proceed.
5. Now confirm the details for fund transfer and click “confirm”.
6. The funds will be reflected in your market.peerplays.com account and can be used to buy PPY tokens.
Steps to Buy PPY Token using HIVE or HBD
1. Login to https://market.peerplays.com using your valid credentials.
2. From the list of options available on left pane, select “Exchange” tab and choose either the “PPY/HIVE” or the "PPY/HBD" options to buy PPY Token.
3. Under “ORDER BOOK” section choose the category based on the price.
4. Input the required number of PPY tokens (at least 15 to make a Witness node) and click on "Buy PPY".
5. Review the order and proceed with “Buy PPY now” which prompts for password to unlock account for transfer.
6. After successful transfer, PPY tokens will be obtained in the account.
CLI commands that witnesses use.
Creates a witness object owned by the given account. An account can have at most one witness object.
Parameters
Example Call
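The example call itself is missing from this page; following the parameter table below, an illustrative invocation in the cli_wallet (the account name and URL are placeholders) would be:

```text
unlocked >>> create_witness my-account "https://example.com/my-witness" true
```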
Return Format
Update a witness object owned by the given account.
Parameters
Example Call
Return Format
Example Successful Return
Returns information about the given witness.
Parameters
Example Call
Return Format
Example Successful Return
Vote for a given witness. An account can publish a list of all witnesses they approve of. This command allows you to add or remove witnesses from this list. Each account's vote is weighted according to the number of PPY owned by that account at the time the votes are tallied. Note that you can't vote against a witness, you can only vote for the witness or not vote for the witness.
Parameters
Example Call
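The example call is missing from this page; following the parameter table below, an illustrative invocation (both account names are placeholders) would be:

```text
unlocked >>> vote_for_witness my-account some-witness-account true true
```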
Return Format
name | data type | description | details |
owner_account | string | The name or id of the account which is creating the witness. | no quotes required. |
url | string | A URL to include in the witness record in the blockchain. Clients may display this when showing a list of witnesses. | May be blank. |
broadcast | bool | true to broadcast the transaction on the network. | n/a |
name | data type | description | details |
witness_name | string | The name of the witness's owner account. Also accepts the ID of the owner account or the ID of the witness. | no quotes required. |
url | string | Same as for create_witness. An empty string makes it remain the same. | n/a |
block_signing_key | string | The new block signing public key. An empty string makes it remain the same. | n/a |
broadcast | bool | true to broadcast the transaction on the network. | n/a |
name | data type | description | details |
owner_account | string | The name or id of the witness account owner, or the id of the witness. | No quotes required. |
name | data type | description | details |
voting_account | string | The name or id of the account who is voting with their PPY. | No quotes required. |
witness | string | The name or id of the witness' owner account. | No quotes required. |
approve | bool | true if you wish to vote in favor of that witness, false to remove your vote in favor of that witness. | n/a |
broadcast | bool | true to broadcast the transaction on the network. | n/a |
From install, to setup, to running.
CLI stands for "Command Line Interface" which means that the program uses the text-based command line window to take user input and show its output.
The Peerplays CLI wallet is a program (named cli_wallet) that is installed along with the node software when installing a Peerplays node on a server. It's used as a way to store your Peerplays account keys locally, and as a way to interact with the Peerplays chain for account and asset-related transactions. You can also use the CLI wallet to look up information from the chain.
The wallet is encrypted with a password of your choosing. Additionally, it stores all keys locally, never exposing your keys to anyone as it signs transactions locally before transmitting them to the connected node. The node then broadcasts the signed transactions to the network.
The wallet creates a local wallet.json file that contains the encrypted private keys required to access the funds in your account.
If you have installed a Peerplays witness, API, seed, or SON node you already have the CLI wallet installed.
The CLI wallet requires a connection to a running node to reach the blockchain. You can run the wallet using your own node or another node that allows external connections. Either way, the node needs to be synced with the chain.
Once your node has synced with the blockchain, you can simply run the program:
Since the node must be running, you will either have to open a new command line window or run the node in the background to run the CLI wallet.
You can choose to connect to someone else's running and synced node. In that case, you can specify the connection as a program parameter:
The <Websocket Address> in the code above must be replaced with the address of some public node. The address will use the WebSocket or secure WebSocket protocol (ws:// or wss:// respectively). Some node operators may have mapped their WebSocket address to a more friendly-looking domain name, which can be used here as well.
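For example, the flag below follows the usual Graphene cli_wallet convention (-s / --server-rpc-endpoint); the node address shown is a placeholder, not a real endpoint:

```text
./cli_wallet -s wss://some-public-node.example.com/ws
```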
It is completely safe to use the CLI wallet with an external node because your private keys are never sent to the remote server. 👍
If you have started the CLI wallet successfully, you will receive the new >>> prompt. At this point, you'll be asked to set a password. Here's what you'll see:
Type set_password followed by a password of your choice and hit enter. It will look like this:
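A session would look roughly like the following sketch; the password is obviously just a placeholder, and the exact wallet output may differ between versions:

```text
new >>> set_password use-a-strong-password-here
locked >>>
```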
If the password was saved successfully, the prompt will change to locked >>>. At this point, the CLI wallet is locked and nobody can access it without the password you just set.
Be sure to remember/back up/save/write down your password (securely of course) because it can't be recovered if you lose it.
Then to unlock your wallet, you'll use the unlock command with your password and hit enter.
If successful, the prompt will now read unlocked >>>. Your wallet is now unlocked and ready for use.
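As a sketch, with a placeholder password (exact output may differ between wallet versions):

```text
locked >>> unlock use-a-strong-password-here
unlocked >>>
```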
After the CLI wallet has been unlocked, any funds in the wallet are accessible. In general, keep the wallet locked and unlock it only when it's needed.
To lock it, type lock and hit enter.
The prompt will return to locked >>>.
If the current password needs to be changed, unlock the wallet and use set_password to do so. Type set_password and the new password, then hit enter.
You can get more detailed information by issuing the gethelp command. Detailed explanations for most calls are available. For example:
You can also use the help command to get a list of all commands supported by the wallet. Note that you can use the help and gethelp commands even if the wallet is locked!
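For example, to read the detailed description of a single call and then list every available command (get_witness is just one command you might look up):

```text
unlocked >>> gethelp get_witness
unlocked >>> help
```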
Elasticsearch is a free search engine used for finding and indexing schema-free JSON documents. This makes viewing large data stores fast and simple. For our purposes, Elasticsearch is most useful for full nodes which host all account histories on the blockchain.
For more information, see the Elasticsearch Wikipedia page.
Elasticsearch is already installed along with the Witness node, so there is no separate install needed for Elasticsearch. You just have to set up the config to enable its use.
Enabling Elasticsearch is as simple as editing the node config file. First you'll add the Elasticsearch plugins to the list of active plugins. Then you'll add the Elasticsearch URL to a couple of options within the config.
Inside ./witness_node_data_dir/config.ini
edit the following:
Uncomment plugins = and add the elasticsearch and es_object plugins.
Uncomment elasticsearch-node-url =
and add the Endpoint URL for your Elasticsearch instance.
Make sure to keep the trailing slash at the end of the URL.
Unless you makes some custom changes to the Elasticsearch configuration, the endpoint url is going to be http://localhost:9200/
. So your config in this case will be:elasticsearch-node-url = http://localhost:9200/
Uncomment es-objects-elasticsearch-url = and add the endpoint URL for your Elasticsearch instance. Again, keep the trailing slash at the end of the URL. As with the previous option, unless you change your Elasticsearch config the endpoint URL will be http://localhost:9200/, so your config in this case will be: es-objects-elasticsearch-url = http://localhost:9200/
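Taken together, the relevant lines in config.ini will look something like this (the witness plugin shown in the list is an assumption; keep whatever plugins you already run):

```ini
plugins = witness elasticsearch es_objects
elasticsearch-node-url = http://localhost:9200/
es-objects-elasticsearch-url = http://localhost:9200/
```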
Start the witness node to begin pushing indexes to Elasticsearch. The first few log entries should show the Elasticsearch plugins starting:
You can check the indexes created after the witness start with the next call:
You can also get index search data with:
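Both checks are plain Elasticsearch REST calls; for example, against the default local endpoint (index names vary with plugin version and date):

```shell
# List the indexes the plugins have created
curl -X GET 'http://localhost:9200/_cat/indices?v'

# Fetch a sample of indexed documents from one of them
curl -X GET 'http://localhost:9200/objects-account/_search?pretty=true'
```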
Manual installation steps to configure a witness node running on Ubuntu 18.04/20.04
This is an introduction for a new witness node to get up to speed on the Peerplays blockchain. It is intended for Witnesses planning to join a live, already deployed blockchain. Operators running Ubuntu 18.04 or Ubuntu 20.04 can follow the steps in this document for a manual installation.
The following steps outline the manual installation of a Witness Node:
Preparing the Environment
Build Peerplays
Update the config.ini File
Start the node
Please see the general Witness hardware requirements.
For the manual install, the requirements that we'll need for this guide would be as follows (as per the hardware requirements doc):
The memory requirements shown in the table above are adequate to operate the node. Building and installing the node from source code (as with this manual installation guide) will require more memory. You may run into errors during the build and install process if the system memory is too low. See Installing vs Operating for more details.
The following dependencies are necessary for a clean install of Ubuntu 20.04:
CMake is an open-source, cross-platform tool that uses compiler-independent configuration files to generate native build files specific to the compiler and platform. It is distributed as precompiled binaries, and its tooling makes configuration, building, and debugging much easier.
Install CMake using the commands below:
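For example, using the Ubuntu package (a sketch; if the Peerplays build needs a newer CMake than the distro ships, install it from cmake.org instead):

```shell
sudo apt-get update
sudo apt-get install -y cmake
cmake --version
```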
Boost provides free, peer-reviewed, portable C++ source libraries that can be used across a broad spectrum of applications.
Install the Boost libraries using the commands below:
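A sketch using the Ubuntu meta-package (the Peerplays build may pin a specific Boost version; if so, build that version from source instead):

```shell
sudo apt-get install -y libboost-all-dev
```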
The libzmq and cppzmq components are used for relaying messages between nodes.
First, install libzmq using the commands below:
Next, install cppzmq using the commands below:
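A sketch of both installs (package name and header path are assumptions; cppzmq is a header-only C++ binding):

```shell
# libzmq from the Ubuntu repositories
sudo apt-get install -y libzmq3-dev

# cppzmq: copy the header from the upstream repository
git clone https://github.com/zeromq/cppzmq.git
sudo cp cppzmq/zmq.hpp /usr/local/include/
```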
GSL is the GNU Scientific Library for numerical computing. It is a collection of routines for areas such as linear algebra, probability, random number generation, statistics, and differentiation.
Install GSL using the commands below:
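For example, from the Ubuntu repositories (package name assumed):

```shell
sudo apt-get install -y libgsl-dev
```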
The libbitcoin toolkit is a set of cross-platform C++ libraries for building bitcoin applications. The toolkit consists of several libraries, most of which depend on the base libbitcoin-system library.
Install libbitcoin using the commands below:
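libbitcoin-system is generally built from source; a sketch (repository branch and autotools prerequisites assumed):

```shell
git clone https://github.com/libbitcoin/libbitcoin-system.git
cd libbitcoin-system
./autogen.sh
./configure
make -j"$(nproc)"
sudo make install
sudo ldconfig
```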
Doxygen is a software utility that recognizes comments within C++ code that have a certain form, and uses them to produce a collection of HTML files containing the information in those comments.
Install Doxygen using the commands below:
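From the Ubuntu repositories:

```shell
sudo apt-get install -y doxygen
```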
Perl is a high-level, general-purpose, interpreted, dynamic programming language originally developed for text manipulation.
Install Perl using the commands below:
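Perl usually ships with Ubuntu, so this is often a no-op, but installing it explicitly is harmless:

```shell
sudo apt-get install -y perl
```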
Use the below commands to build Peerplays:
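A sketch of the build (the repository URL and release tag are assumptions; check the Peerplays release list for the current tag):

```shell
git clone https://github.com/peerplays-network/peerplays.git
cd peerplays
git checkout <release-tag>
git submodule update --init --recursive
cmake -DCMAKE_BUILD_TYPE=Release .
make -j"$(nproc)"
```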
If we have installed the blockchain following the above steps, the node can be started as follows:
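From the build directory, the node binary can be launched directly (path assumed from a standard source build):

```shell
./programs/witness_node/witness_node
```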
Launching the Witness for the first time creates the directories which contain the configuration files.
Next, stop the Witness node before continuing (Ctrl + C).
We need to set the endpoint and seed-node addresses so we can access the cli_wallet and download all the initial blocks from the chain. Within the config.ini file, locate the p2p-endpoint, rpc-endpoint, and seed-node settings and enter the following addresses.
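As a sketch of that edit, done here on a throwaway file so it is safe to try: the endpoint values below are common Peerplays defaults (9777 for p2p, 8090 for RPC), not the official seed list, so substitute the addresses for your network and set CONFIG to witness_node_data_dir/config.ini on a real node.

```shell
CONFIG=./config.demo.ini
cat > "$CONFIG" <<'EOF'
# p2p-endpoint =
# rpc-endpoint =
# seed-node =
EOF
# Uncomment and fill in the two endpoint settings; seed-node is left
# as-is because its value depends on the network you are joining.
sed -i \
  -e 's|^# *p2p-endpoint =.*|p2p-endpoint = 0.0.0.0:9777|' \
  -e 's|^# *rpc-endpoint =.*|rpc-endpoint = 127.0.0.1:8090|' \
  "$CONFIG"
cat "$CONFIG"
```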
Save the changes and start the node back up.
We have successfully started the witness node and it is now ready for configuration.
Next step is to configure the witness node based on the requirement. There are different ways in which the nodes can be configured such as block producer, SON node, API node, and delayed node.
Becoming a block producer is one of the important steps, as it is mandatory to use the node for transactions across the wallet. Follow the steps in the document below to become a block producer.
There are other ways in which the node can be configured. The document below showcases the other options available for node configuration.
Witness: An independent server operator which validates network transactions.
Witness Node: Nodes with a closed RPC port. They don't allow external connections. Instead these nodes focus on processing transactions into blocks.
Success! You built a Peerplays witness node from the latest source code and now it's up and running.
After configuring the node with desired configuration, click below to learn the NEXT steps
Node Type? | CPU | Memory | Storage | Bandwidth | OS |
Witness | 4 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 18.04 |
Setup a Witness Node using a pre-configured Docker container
This document assumes that you are running Ubuntu 18.04 or 20.04. Other Debian based releases may also work with the provided script.
The following steps outline the Docker installation of a Witness Node:
Preparing the Environment
Installing Docker
Installing the Peerplays image
Starting the Container
Update the config.ini File
Create a Peerplays Account
Update config.ini with Witness Account Info
Start the Container and Vote for Yourself
Note that these fees will likely change over time as recommended by the Committee of Advisors.
For the docker install on Peerplays Mainnet, the requirements that we'll need for this guide would be as follows (as per the hardware requirements doc):
Then we'll clone the Peerplays Docker repository.
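For example (repository location assumed; adjust if the project has moved):

```shell
git clone https://gitlab.com/PBSA/peerplays-docker.git
cd peerplays-docker
```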
Docker must be installed on the system that will be performing the steps in this document.
Docker can be installed using the run.sh script inside the Peerplays Docker repository:
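Using the script's own installer command (it appears in the run.sh commands list later in this document):

```shell
./run.sh install_docker
```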
Since the script has added the currently logged in user to the Docker group, you'll need to re-login (or close and reconnect SSH) for Docker to function correctly.
Copy the example.env to .env located in the root of the repository (i.e. the peerplays-docker folder).
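From the root of the peerplays-docker folder:

```shell
cp example.env .env
```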
We'll need to make some changes to the .env file, so open it now using a text editor.
Here are the important parts of the .env file. These are the parts that need to be edited (or optionally edited); the rest of the file should be left unchanged.
Use run.sh to pull the node image:
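Per the run.sh commands list, install pulls the latest Docker image from the server without compiling:

```shell
./run.sh install
```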
With at least 8GB of disk space available in your home folder, we'll start the node. This will create and/or start the Peerplays docker container.
Then we'll check the status of the container to see if all is well.
Last, we'll stop the container so we can make updates to the config.ini file.
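Using run.sh, those three steps are:

```shell
./run.sh start    # create and/or start the Peerplays container
./run.sh status   # check that all is well
./run.sh stop     # stop it so we can edit config.ini
```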
We need to set the endpoint and seed-node addresses so we can access the cli_wallet and download all the initial blocks from the chain. Within the config.ini file, locate the p2p-endpoint, rpc-endpoint, and seed-node settings and enter the following addresses.
Save the changes and start the container back up.
We have successfully started the witness node and it is now ready for configuration.
Exit the cli_wallet with the quit command. We'll stop the container and edit the config.ini file once again.
Once again, we need to wait for the node to sync the blocks to use the cli_wallet. After the sync, you can vote for yourself.
Now you can check your votes to verify it worked.
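In the cli_wallet, voting and checking look something like this (the account name is a placeholder; vote_for_witness takes the voting account, the witness, an approve flag, and a broadcast flag):

```
unlocked >>> vote_for_witness my-witness-account my-witness-account true true
unlocked >>> get_witness my-witness-account
```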
Next step is to configure the witness node based on the requirement. There are different ways in which the nodes can be configured such as block producer, SON node, API node, and delayed node.
Becoming a block producer is one of the important steps, as it is mandatory to use the node for transactions across the wallet. Follow the steps in the document below to become a block producer.
There are other ways in which the node can be configured. The document below showcases the other options available for node configuration.
run.sh commands list:
start - starts seed container
start_son - starts son seed container
start_son_regtest - starts son seed container and bitcoind container under the docker network
clean - remove blockchain, p2p, and/or shared mem folder contents, seed, bitcoind, and son docker network (warns beforehand)
dlblocks - download and decompress the blockchain to speed up your first start
replay - starts seed container (in replay mode)
replay_son - starts son seed container (in replay mode)
memory_replay - starts seed container (in replay mode, with --memory-replay)
shm_size - resizes /dev/shm to the size given, e.g. ./run.sh shm_size 10G
stop - stops seed container
status - show status of seed container
restart - restarts seed container
install_docker - install docker
install - pulls latest docker image from server (no compiling)
install_full - pulls latest (FULL NODE FOR RPC) docker image from server (no compiling)
rebuild - builds seed container (from docker file), and then restarts it
build - only builds seed container (from docker file)
logs - show all logs inc. docker logs, and seed logs
wallet - open cli_wallet in the container
remote_wallet - open cli_wallet in the container connecting to a remote seed
enter - enter a bash session in the currently running container
shell - launch the seed container with appropriate mounts, then open bash for inspection
Witness: An independent server operator which validates network transactions.
Witness node: Nodes with a closed RPC port. They don't allow external connections. Instead these nodes focus on processing transactions into blocks.
Set up a Sidechain Operator Node (SON) by building the source code
The process of manually installing a SON is similar to installing a Witness Node. This is an introduction for new SONs to get up to speed on the Peerplays blockchain. It is intended for SONs planning to join a live, already deployed, blockchain.
This tutorial will take you through the steps required to have an operating SON. Since SONs serve the purpose of facilitating transfers of assets between the Peerplays blockchain and other blockchains, we'll need to connect to another chain to be of any use...
Please review the Requirements for setting up a SON before continuing to run a manual install following this guide.
The following steps outline the manual installation of a (Bitcoin enabled) SON.
Preparing the Environment
Build Peerplays
Connect to the Bitcoin Network and Generate an Address
Create a SON Account
Configure the SON
Start the SON
(Optional) Automatically Start the Node as a Service
Note that these fees will likely change over time as recommended by the Committee of Advisors.
For the manual install, we'll be using a self-hosted Bitcoin node. The requirements that we'll need for this guide would be as follows (as per the hardware requirements doc):
The following dependencies are necessary for a clean install on Ubuntu 18.04 LTS:
Boost is a C++ library that handles common program functions like generating config files and basic file system i/o. Peerplays uses Boost to handle such functions. Since Boost is a dependency, we must build it here.
Now we build Peerplays with the official source code from GitHub.
If we have installed the blockchain following the above steps, the node can be started as follows:
Running the witness_node program will create a config.ini file with some default settings. We'll need to edit the config file, so stop the program for now with Ctrl + C.
There are two options available to connect to the Bitcoin network.
Run a Bitcoin node yourself
Find an open Bitcoin node to connect to
For the purposes of this guide, I'll discuss how to run a node yourself, as that will be a more reliable connection for now. Either way you go, you'll need to collect the following information to use in the config.ini file:
The IP address of a Bitcoin node you can connect to (127.0.0.1 if self-hosting)
ZMQ port of the Bitcoin node (default is 1111)
RPC port of the Bitcoin node (default is 8332)
Bitcoin RPC connection username (default is 1)
Bitcoin RPC connection password (default is 1)
Bitcoin wallet label (default is son-wallet)
Bitcoin wallet password
A new Bitcoin address
The Public key of the Bitcoin address
The Private key of the Bitcoin address
First we'll download and install one of the official Bitcoin Core binaries:
The latest supported version is 22.0 as of July 2022.
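For example, for the x86_64 Linux build of 22.0 (archive name and install path assumed):

```shell
wget https://bitcoincore.org/bin/bitcoin-core-22.0/bitcoin-22.0-x86_64-linux-gnu.tar.gz
tar -xzf bitcoin-22.0-x86_64-linux-gnu.tar.gz
sudo install -m 0755 -t /usr/local/bin bitcoin-22.0/bin/*
```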
Then we make a config file to manage the settings of our new Bitcoin node.
In the Vim text editor we'll set the following:
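A sketch of such a bitcoin.conf, using the defaults listed earlier in this guide (weak credentials like these are for a local test setup only, and the exact ZMQ options your SON release expects may differ):

```ini
server=1
rpcuser=1
rpcpassword=1
rpcport=8332
zmqpubrawblock=tcp://0.0.0.0:1111
zmqpubrawtx=tcp://0.0.0.0:1111
prune=550
blocksonly=1
```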
Save and quit the Vim editor.
Lastly we'll set a Cron job to ensure the Bitcoin node starts up every time the server starts.
At the bottom of the crontab file, add the following:
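For example (binary path assumed from the install step above):

```
@reboot /usr/local/bin/bitcoind -daemon
```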
Save and quit the crontab file. Now we're ready to fire up the Bitcoin node!
If successful, you'll see Bitcoin Core starting. As an extra check to see if everything is working, try the bitcoin-cli -version or bitcoin-cli getblockchaininfo commands.
Your Bitcoin node should now be downloading the Bitcoin blockchain data from other nodes. This might take a few hours to complete even though we cut down the requirements with block pruning. It's a lot of data after all.
We'll need a wallet to store the new Bitcoin address.
Now we will create a Bitcoin address.
Then we'll use this address to get its keys.
Now we get the private key.
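Those steps map to standard bitcoin-cli calls (replace <address> with the address returned by getnewaddress):

```shell
bitcoin-cli createwallet "son-wallet"
bitcoin-cli getnewaddress
bitcoin-cli getaddressinfo <address>   # shows the pubkey
bitcoin-cli dumpprivkey <address>      # keep this secret!
```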
That was a lot to go over. Let's collect our data.
Keep this tuple handy. We'll need it in the Peerplays config file.
Becoming a SON is very similar to becoming a witness. You will need:
An active user account, upgraded to lifetime member, which will be the owner of the SON account
Create two vesting balances (types "son" and "normal") of 50 PPY, and get their IDs
The Bitcoin address created for the SON account
Create the SON account, and get its ID
Set the signing key for the SON account (usually, it's a signing key of the owner account)
Set the Bitcoin address as a sidechain address for the SON account
We can run the Peerplays cli wallet connecting to the Peerplays node we have set up so far. Before we can do that we'll need to make a quick edit to the config.ini file.
In the first section of the config.ini file is the rpc-endpoint setting. We have to open our rpc-endpoint so we can use the Peerplays cli wallet. We'll enter the following:
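For example (8090 is the customary RPC port for Graphene-based nodes; any free port works):

```ini
rpc-endpoint = 127.0.0.1:8090
```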
Save the file and quit.
Our Peerplays node will have to be completely in sync with the blockchain before we can use the cli wallet so we'll start the node and wait for it to download all the data.
Downloading all the transaction and block data will take hours. Unfortunately this is unavoidable the first time the node syncs with the blockchain. You might want to let this run overnight.
If you just can't wait for your node to sync, you can run the cli_wallet program on someone else's node. Simply pass the IP address of the other node like so. (In another command line window)
A good resource for server-rpc-endpoints is https://beta.eifos.org/status. They will be listed as API nodes and use the wss:// protocol.
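For example (the endpoint is a placeholder; -s is the cli_wallet's server option):

```shell
./programs/cli_wallet/cli_wallet -s wss://<api-node-address>
```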
Now that we have the cli_wallet running, you'll notice a new prompt.
This means we're in a cli_wallet session. First we'll make a new wallet and unlock it.
The key beginning with "PPY" is the public key. The other key is the private key. We'll need to import this private key into the cli_wallet.
Next we'll upgrade the account to a lifetime membership.
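In the cli_wallet, those steps look something like this (account name, passwords, and keys are placeholders):

```
new >>> set_password my-wallet-password
locked >>> unlock my-wallet-password
unlocked >>> import_key my-son-account 5Kxxxxxxxx
unlocked >>> upgrade_account my-son-account true
```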
Next we'll create the vesting balances.
Now we have all the info we need to create a SON account.
To get the SON ID:
We'll set the signing key using the active key from the owning account:
Now we have our SON account ID and the public and private keys for the SON account. We'll need these for the config.ini file.
The generated config.ini file will be located at /home/ubuntu/witness_node_data_dir/config.ini. We'll begin by editing this config file.
This file will be rather large, so let's focus on the important part for configuring a SON node:
This section contains all the SON-related configuration. Ensure the following config settings are in the config.ini file under the peerplays_sidechain plugin options.
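A sketch of that section, filled in with the tuple collected earlier in this guide. The option names here are the ones commonly used by the peerplays_sidechain plugin, but they are assumptions; verify the exact names for your release with witness_node --help:

```ini
son-id = "1.33.x"
peerplays-private-key = ["<SON signing public key>","<SON signing private key>"]
bitcoin-node-ip = 127.0.0.1
bitcoin-node-zmq-port = 1111
bitcoin-node-rpc-port = 8332
bitcoin-node-rpc-user = 1
bitcoin-node-rpc-password = 1
bitcoin-wallet = son-wallet
bitcoin-wallet-password = <wallet password>
bitcoin-private-key = ["<bitcoin public key>","<bitcoin private key>"]
```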
We're almost done. We also have to make sure the peerplays_sidechain plugin is listed in the plugins. Find the plugins setting in the first section of the config.ini file. If it's not already there, add the peerplays_sidechain plugin to the list, like so:
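For example, if the witness plugin was already listed:

```ini
plugins = witness peerplays_sidechain
```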
After setting up the config.ini file for SON operation, we'll start the node back up.
Your SON is born! (pun intended)
Up until this point we have been running the node in the foreground which is fragile and inconvenient. So let's start the node as a service when the system boots up instead.
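One way to do that is a systemd unit (a sketch; the user, paths, and binary location are assumptions from this guide's build layout):

```ini
# /etc/systemd/system/peerplays-son.service
[Unit]
Description=Peerplays SON node
After=network.target

[Service]
User=ubuntu
ExecStart=/home/ubuntu/peerplays/programs/witness_node/witness_node -d /home/ubuntu/witness_node_data_dir
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now peerplays-son.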
After that, it would be smart to create a backup server to enable you to make software updates, troubleshoot issues with the node, and otherwise take your node offline without causing service outages.
Why stop at Bitcoin?
Now you have a SON, but have you thought about becoming a Witness? It will be a piece of cake for you since you've already set up a SON.
If you have a node that is accessible from the internet (for example, an API or Seed node) it would be wise to enable SSL connections to your node.
SON: Sidechain Operator Node - An independent server operator which facilitates the transfer of off-chain assets (like Bitcoin or Ethereum tokens) between the Peerplays chain and the asset's native chain.
Witness: An independent server operator which validates network transactions.
Witness Node: Nodes with a closed RPC port. They don't allow external connections. Instead these nodes focus on processing transactions into blocks.
CLI checks to ensure the successful installation of Bitcoin-SON node.
After installing a Bitcoin SON node, you might want to run some basic tests to ensure everything is running smoothly with your Bitcoin node. Here are a few bitcoin-cli commands that you can run to check your node's functionality.
You can use these commands to get an overview of the Bitcoin network, how your node connects to the network, and your configured wallet and address settings.
List all commands, or get help for a specified command.
The "command" in the code block above can be any of the bitcoin-cli commands listed in the reference doc. It's also optional; if left out, all available commands are listed. The help command is a good place to start to ensure the bitcoin-cli is actually available on your system.
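For example:

```shell
bitcoin-cli help                      # list every available command
bitcoin-cli help getblockchaininfo    # help for one specific command
```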
Returns an object containing various state info regarding blockchain processing.
This command doesn't have any parameters. Running this command will list a lot of important information about the chain and your node. This is the command to use to see how much of the chain your node has validated and that you are connected to Bitcoin's mainnet. Here's what is returned in the call:
Returns details on the active state of the TX memory pool.
The TX memory pool, or "mempool", is the pool of unverified transactions that don't yet belong to a block in the chain. These transactions are basically waiting for miners to verify and include them in blocks to make them official.
This command is useful to view the network backlog of transactions. Here's what is returned:
Returns an object containing various state info regarding P2P networking.
This command is important for understanding the network connections of your node. Here is what is returned in this call:
Returns an object containing various wallet state info.
This shows the configuration of any wallets belonging to your node. In our case this will show us the "son-wallet" we should have set up. Here's what's returned:
Return information about the given bitcoin address.
Some of the information will only be present if the address is in the active wallet.
This is how you can view the pubkey for your Bitcoin addresses. Much more than that is returned:
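All five checks described on this page are one-liners (replace <address> with one of your own addresses):

```shell
bitcoin-cli getblockchaininfo
bitcoin-cli getmempoolinfo
bitcoin-cli getnetworkinfo
bitcoin-cli getwalletinfo
bitcoin-cli getaddressinfo <address>
```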
Before we begin, to set up a Witness node requires about 15 PPY. This is to pay for an upgraded account (5 PPY) and to create a new witness (8 PPY). The remaining funds are to pay for various transaction fees while setting up the node (like voting for yourself!). Please see for more info.
Please see the Witness node .
The run.sh script contains many commands to make managing the node easy. All of its commands are listed in section 9 of this document.
You can look at to learn more about how to install Docker. Or, if you are having permission issues trying to run Docker, use sudo or look at .
Complete! You've installed your Witness node and you're up and running.
After configuring the node with desired configuration, click below to learn the NEXT steps
Let's use Bitcoin!
Before we begin, to set up a SON node requires about 110 PPY. This is to pay for an upgraded account (5 PPY) and to fund two vesting balances (50 PPY each). The remaining funds are to pay for various transaction fees while setting up the node. Please see for more info.
Please see the general SON .
The memory requirements shown in the table above are adequate to operate the node. Building and installing the node from source code (as with this manual installation guide) will require more memory. You may run into errors during the build and install process if the system memory is too low. See for more details.
Note: "1.5.18" can be replaced with the most recent release tag. For example: git checkout 1.5.18, where 1.5.18 is the latest production release tag as of July 2022. The list of releases is .
We start the SON node with the witness_node command although we are only intending to set up this node as a SON. This is because the same program is used to operate different types of nodes depending on how we configure it. For more information on this, see .
The official Bitcoin Core binaries can be found here:
The settings in the config file above are set to reduce the requirements of the server. Block pruning and setting the node to Blocks Only save network and storage resources. For more information, see .
A list of CLI wallet commands is available here:
Assuming we're starting without any account, it's easiest to create an account with the Peerplays GUI Wallet. The latest release is located here . When you create an account with the GUI wallet, you should have a username and password. We'll need those for the next steps. First we'll get the private key for the new account.
At the time of writing this guide, this costs 5 PPY to perform this operation. You'll need that in your account first! To this end, see .
Save the file and quit. Configuration of the Peerplays SON node is complete!
But seriously, that was no small feat. Congratulations on this accomplishment!
Vim: A text editing program available for Ubuntu 18.04. See
Node Type? | CPU | Memory | Storage | Bandwidth | OS |
Witness | 4 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 18.04 |
BookiePro requires real-time data feeds in order to create all the various sports, events, markets, etc. that are the basis of the sporting exchange. An ongoing challenge has been getting enough of these data feeds that are accurate and reliable.
Many operations required for Bookie, such as creating a new game, require more than one approval which means that at least two data feeds send the exact same information. This is a real problem when there might only be two or three data feeds available at any given time.
Each data feed is consumed by data proxy software, operated by independent organizations, before parsing and normalizing the data and sending it through to BOS for validation, and then finally on to the blockchain.
In its simplest form, the sports data flows from data feed provider -> data proxy -> BOS -> blockchain -> BookiePro. This process is automated, which is desirable, but automation brings its own problems because there isn't enough manual validation on the data until after it fails. The automated process has little ability to correct data that is either sent incorrectly from the feed provider or incorrectly normalized by the data proxy.
The idea of Couch Potato came about from looking at the process backwards. We know data comes from a feed provider, which we see as the original source of truth. But if you go back from there, who knows how many other layers there are? Does the feed provider get their data directly? Maybe they use a third party to scrape the data and then purchase it. Maybe the data scraper uses a third party... and so on. Ultimately, though, there has to be somebody watching games and inputting the data.
This is the Couch Potato concept: why not just have a person inputting data directly into a portal or API that then gets posted directly to BOS? In this model the person is the 'data feed provider' and the API or portal is the 'data proxy'.
Couch Potato aims to improve the data feed and data proxy challenges facing BookiePro by:
Creating a simple to use web portal that uses only data taken directly from Bookiesports and therefore guaranteed correct.
Pushing data directly to BOS as it's entered, no latency or 'spooling' of data.
Incorporating an extensive, powerful API that can perform all the functions of the web application completely independently, or can be combined with the web application as well.
Creating an 'infinitely scalable' system that can be spun up on as many servers as data proxies are required. Every instance of Couch Potato is its own data proxy.
Allowing third parties to take the API and integrate into their own data gathering processes, whether manual or automated.
Becoming a 'pay-per-input' service for the users of Couch Potato that are entering the data. Couch Potato will track every input such that a suitable payment model can be created for reimbursing the users.
Keeping the data feed providers accountable. If any instance of Couch Potato, operating as a mainnet data proxy, consistently delivers bad data or is unreliable, then that data proxy won't qualify for payment, and ultimately will be removed by witness consensus.
A validation is performed on the data format presented in the sports folder.
The corresponding validation schemata are stored in the schema/ subdirectory and used internally when instantiating bookiesports.BookieSports.
Bitcoin node type | CPU | Memory | Storage | Bandwidth | OS |
Self-Hosted, Reduced Storage | 2 Cores | 150GB SSD | 1Gbps | Ubuntu 18.04 |
BookieSports is a module that contains the management information for BOS. This management information describes the sports, event groups, events, betting market groups (BMGs), and markets that are used by Bookie.
The files have multi-token support so all of the above data sets can be created differently for each token.
The configuration files are all in YAML format and the number of files varies according to the sports and events groups and teams supported.
The following is an example of a YAML file used for American Football:
The configuration files would need updating several times a year as it's not known long in advance what teams will be in certain leagues, or in the playoffs and also new sports or tokens could be added along with additional event and betting market groups.
For more details on the BookieSports schemata, see:
The Data Proxy serves as a middle man between the Data Feed Providers (DFPs) and the Bookie Oracle System (BOS) operated by the Witnesses.
The simplest way to understand this relationship is knowing that BOS requires all data it receives to be parsed/normalized to exactly the same format, so a process needs to exist to make this happen. This 'process' is the data proxy.
Each DFP provides data on sports events in some format, but no two DFPs might use the same format, or necessarily support the same sports and events. Both Data Proxies and BOS use the Bookiesports module to manage this common format and ensure the consistency of data, regardless of how many Data Proxies are operating.
The normalized data is then sent to the subscribed Witnesses.
Popular sports betting, analysis, and reporting sites are usually tied to a single data feed provider. This is fine for what they're doing because they're not claiming to be decentralized or provably fair.
But as BookiePro is the world's first decentralized sports betting exchange it's important that the decentralization includes the (sports) data feeds. BookiePro achieves this through the combination of independent BOS subscribers (Witnesses) and a diversity of Data Proxies. Each Data Proxy then further decentralizes the data by using a separate Data Feed Provider.
As we see in the diagram above, no two Data Proxies ever share the same DFP. However, each instance of BOS subscribes to all Data Proxies, so BOS ensures that no Incident is ever processed without a consensus from all the Witnesses. This means that through the combination of the Data Proxy architecture and the magic of BOS, there is no single source of data for any BookiePro Incident.
It's impossible for BookiePro to record that "team A beat team B" based on only a single piece of information.
This set of docker images contains a self-contained Peerplays QA environment. It features 16 Peerplays nodes (running 27 witnesses), 16 Bitcoin SONs, 16 Hive SONs, 1 Redis node, and 1 faucet node.
Here's a guideline for the hardware requirements for building the QA environment:
Of course, the requirements will be highly dependent on what you're using the environment for. Intensive development of an enterprise-level application will need much more resources than simply exploring your own private environment.
Following are the software requirements:
* If any of the software mentioned above is not installed, please use the following pages to install them:
ubuntu: https://ubuntu.com/server/docs/installation
docker: https://docs.docker.com/engine/install/ubuntu/
git: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git
Clone the peerplays-utils project in gitlab to get the latest setup scripts for Peerplays QA environment:
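For example (repository path assumed; adjust to the actual GitLab location):

```shell
git clone https://gitlab.com/PBSA/peerplays-utils.git
cd peerplays-utils
```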
The environment setup needs to be run in the following order:
BITCOIN
HIVE
PEERPLAYS
REDIS
FAUCET
Note: After running the Peerplays initialization script, wait for the next maintenance block before you start using the environment for testing.
The first step is to build the bitcoin container. Open a new terminal and run the following command:
This will take some time to complete. Once the logs on the terminal have stopped, open a new terminal and log in to the newly built Bitcoin container with the following command:
Once inside the container, run the init-network.sh script to set up the Bitcoin environment:
Now we need to build the HIVE container. Open a new terminal and run the following command:
This will take some time to complete. Wait till the HIVE blockchain starts generating blocks (Generated block #1 with timestamp ...), then open a new terminal and log in to the newly built HIVE container with the following command:
Once inside the container, run the init-network.sh script to set up the HIVE environment:
Next, we need to build the Peerplays container. Open a new terminal and run the following command:
This will take some time to complete. Wait till the Peerplays blockchain starts generating blocks (Generated block...), then open a new terminal and log in to the newly built Peerplays container with the following command:
Once inside the container, run the init-network.sh script to set up the Peerplays environment:
Note: Wait for the next maintenance block before you start using the environment.
Next, we build the Redis container. Open a new terminal and run the following command:
Wait for the message "Ready to accept connections"
The final container which we need to build is faucet. Open a new terminal and run the following command:
Wait for the message "Running on http://10.11.12.50:5000/ (Press CTRL+C to quit)"
To view the logs of the containers, the following commands can be used:
1. Bitcoin
2. Hive
3. Peerplays
There are 16 Peerplays nodes running; in this example, we view the logs of the peerplays10 node.
Note: If you want to see the logs of the peerplays01 node, replace peerplays10 with peerplays01 in the command given above.
The IP address and the configuration of the nodes in the environment are given below:
To monitor computer resource usage by docker containers in real-time, use the following command:
The output will be similar to this:
How to manually set up a Peerplays private testnet.
Creating your own private Peerplays testnet has many perks. You can develop new dapps, troubleshoot network issues, or experiment on some new ideas on a chain that you can customize and fully control.
Installing a private testnet requires building the Peerplays program from source code. This is because the initial state of the blockchain is embedded into the binaries. Since we need to begin a new chain from the very first block, we'll need to build from source and (if desired) embed your own custom genesis file into the built binaries.
Building the Peerplays chain from source code is memory intensive. This means the initial hardware requirements are quite high. But after all the building is done, the requirements to run the private chain are much lower.
Here's a guideline for the hardware requirements for the running of the private testnet:
The memory requirements shown in the table above are adequate to operate the node. Building and installing the node from source code (as with this guide) will require more memory. You may run into errors during the build and install process if the system memory is too low. See Installing vs Operating for more details.
Of course, the requirements will be highly dependent on what you're using the testnet for. Intensive development of an enterprise-level application will need much more resources than simply exploring your own private testnet.
The following packages are required for this guide.
Boost is a comprehensive C++ library for common development tasks. It's required for Peerplays.
This is very similar to installing a Witness node. The key difference here is the `-DGRAPHENE_EGENESIS_JSON=""` setting. This means we're choosing not to embed a genesis file (yet). It's an override because the install will embed a default genesis file unless we tell it not to.
Since we want to potentially customize the genesis file, we'll choose not to embed one now.
Before entering this next command, now would be the time to move on to section 4.1. if you wish to customize hard-coded parts of the chain. Otherwise, proceed.
Before we install Peerplays with the `sudo make install` command, we have the opportunity to make some customizations to the hard-coded configuration. This config is located in this file:
$HOME/src/peerplays/libraries/chain/include/graphene/chain/config.hpp
If you edit this file (with `sudo vim $HOME/src/peerplays/libraries/chain/include/graphene/chain/config.hpp`, for example), you can tweak dozens of settings. Things like:
The symbol of the core token (`PPY` in Production, `TEST` in Develop)
The prefix of the account addresses (`PPY` in Production, `TEST` in Develop)
Min / Max lengths of strings.
Various time limits.
Token precision and decimal places.
Many fee related settings.
Block and transaction sizes.
These may or may not be useful for you to change given that we're making a testnet designed to roughly mimic the conditions in production, but they are interesting nonetheless. If you make any changes here, save the file and then move on to `sudo make install` from the `$HOME/src/peerplays` directory.
We're going to create a directory to store the custom genesis file, and then use the `witness_node` program to generate a genesis template in that directory.
This will also create the `witness_node_data_dir` directory that stores the `config.ini` file we'll need later.
The generated genesis file is good enough on its own to run the testnet. But you can edit the `my-genesis.json` file if you wish. In this file, you can specify accounts that exist from the beginning of the chain as well as their account balances. You can add assets, change the fees for operations, add witness accounts, add committee member accounts, and change some initial parameters.
To create the keys to use for accounts in the genesis file, the `get_dev_key` program is used.
The keys will be generated using the strings you put into the program. In the example above, the `get_dev_key` program will use `test-account-owner` and then `test-account-active` to generate a public/private key pair and address for each.
The strings don't mean anything beyond being used as seeds to create keys and accounts. The strings are not used for account names.
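The seed-to-key idea can be sketched in a few lines. This is NOT the actual algorithm `get_dev_key` uses (it derives real graphene key pairs); it only illustrates that the same seed string always yields the same key, and a different string yields a different one:

```python
import hashlib

# NOT the actual get_dev_key algorithm -- just an illustration of the idea:
# each seed string deterministically yields a key, so the same string always
# gives the same key, and the string itself never becomes an account name.
def dev_key_sketch(seed: str) -> str:
    return hashlib.sha256(seed.encode("utf-8")).hexdigest()

owner_key = dev_key_sketch("test-account-owner")
active_key = dev_key_sketch("test-account-active")
```

Always use the real `get_dev_key` output for your genesis file; this sketch only demonstrates the deterministic behavior.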
Then, using the output, you can make new accounts in the `my-genesis.json` file like the following:
sudo vim $HOME/genesis/my-genesis.json
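For illustration only, an account entry and a matching balance in `my-genesis.json` could look like the following sketch. The names, keys, and amounts are placeholders (use the actual output of `get_dev_key` for yours), and your generated genesis template is the authoritative source for the exact field names:

```json
{
  "initial_accounts": [
    {
      "name": "test-account",
      "owner_key": "TEST6ExampleOwnerPublicKey",
      "active_key": "TEST7ExampleActivePublicKey",
      "is_lifetime_member": false
    }
  ],
  "initial_balances": [
    {
      "owner": "TESTExampleAddress",
      "asset_symbol": "TEST",
      "amount": "1000000000"
    }
  ]
}
```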
The most important part is to use the generated keys given to the initial witnesses for any witnesses you use. These keys are hard-coded into the program and are used for the initial block producing accounts. The public/private key pair the witnesses use will also be generated in the `config.ini` file as the `private-key`. It's best to leave these alone.
And of course, you should have an account with an initial balance so we can import the balance to the `cli_wallet` later.
While this step is not required, there are a couple of benefits to embedding the genesis file into the Peerplays build. First, you won't have to supply the genesis file location in the `config.ini` file. And second, you won't have to supply the chain id to the `cli_wallet` program when interacting with the testnet.
But the cost of embedding the genesis file is that it takes a lot of time and computer resources to rebuild Peerplays. And with the genesis "baked into" the binaries, it won't be available for future customization if you want to make some changes and reset the chain back to block 1.
Ultimately, the decision to embed the file is a matter of taste: is it more important to have the convenience of doing so, or the flexibility of not?
In the set of commands below, we're setting the `-DGRAPHENE_EGENESIS_JSON="$HOME/genesis/my-genesis.json"` option in the make cache to use this file in the build process.
We need to make a few edits to the `config.ini` file to finish setting up the testnet.
sudo vim $HOME/witness_node_data_dir/config.ini
Here are the settings we need to add / change:
Everything else should be left as-is.
We'll need to tell the program not to use any seed nodes with `--seed-nodes="[]"`. If we don't do this, the program will attempt to use some hard-coded default seed nodes which don't have anything to do with our private testnet.
If the program runs successfully, you'll see your own unique chain id. The chain id is a hash of the genesis file you're using when running your testnet. If you change anything in your genesis file, the chain id will be different! This id is used by the `cli_wallet` program to prevent unintended transactions from happening on the wrong chain.
Take note of this chain id and keep it for your future reference. If you have not embedded the genesis file, you will need the chain id when running the `cli_wallet` program.
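The "different genesis, different chain id" behavior can be illustrated with a short sketch. Note the real chain id is a hash over the processed genesis state, not simply the raw file bytes, but the effect is the same — any change to the genesis content yields a different id:

```python
import hashlib
import json

# Sketch of the concept only: hash a genesis-like document, then hash an
# edited copy, and observe that the resulting "chain id" changes.
genesis = {"initial_timestamp": "2021-01-01T00:00:00", "initial_active_witnesses": 11}
chain_id = hashlib.sha256(json.dumps(genesis, sort_keys=True).encode()).hexdigest()

edited = dict(genesis, initial_active_witnesses=21)
edited_id = hashlib.sha256(json.dumps(edited, sort_keys=True).encode()).hexdigest()
```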
After that, your testnet should be producing blocks!
We are now ready to connect the CLI to your testnet witness node. Keep your witness node running and in another terminal window you'll run the CLI Wallet.
If you have not embedded the genesis file, you'll need the chain id to run this command:
Just be sure to replace the example `8b7bd36a146a03d0e5d0a971e286098f41230b209d96f92465cd62bd64294824` with your own chain id from earlier.
Or, if you embedded the genesis file, there's no need for the chain id in this command:
If you see the new `>>>` prompt, you have successfully connected to your node and you're ready to create a password with `set_password`.
Now you can unlock the newly created wallet:
In Peerplays, balances are contained in accounts. To import an account into your wallet, all you need to know is its name and its private key. We will now import into the wallet an account called `nathan` using the `import_key` command:
Note that `nathan` happens to be the account name defined in the genesis file. If you had edited your `my-genesis.json` file just after it was created, you could have put a different name there. Also, note that `5KQwrPbwdL...P79zkvFD3` is the private key defined in the `config.ini` file.
Now we have the private key imported into the wallet but still no funds associated with it. Funds are stored in genesis balance objects. These funds can be claimed, with no fee, using the `import_balance` command:
As a result, we have one account (named `nathan`) imported into the wallet, and this account is well funded with `TEST` as we have claimed the funds stored in the genesis file. You can view this account by using this command:
...and its balance by using this command:
We will now create another account (named `alpha`) so that we can transfer funds back and forth between `nathan` and `alpha`.
Creating a new account is always done by using an existing account. We need one because someone (the registrar) has to fund the registration fee. There is also a requirement for the registrar account to have lifetime member (LTM) status. Therefore, we need to upgrade the account `nathan` to LTM before we can proceed with creating other accounts. To upgrade to LTM, use the `upgrade_account` command:
Verify that `nathan` now has LTM status:
In the response, next to `membership_expiration_date`, you should see something similar to `2106-02-07T06:28:15`. If you get `1970-01-01T00:00:00`, something is wrong and `nathan` has not been successfully upgraded.
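If you are scripting this check, the date comparison above is easy to automate. A minimal sketch, assuming the `get_account` response has been parsed into a dict containing the `membership_expiration_date` field shown in this guide:

```python
from datetime import datetime

# Illustrative check: a successful LTM upgrade pushes the expiration far into
# the future (~2106); 1970-01-01T00:00:00 means the upgrade did not happen.
def is_lifetime_member(account: dict) -> bool:
    expiration = datetime.strptime(
        account["membership_expiration_date"], "%Y-%m-%dT%H:%M:%S"
    )
    return expiration > datetime(1970, 1, 1)
```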
We can now register an account by using `nathan` as the registrar. But first we need to get hold of the public key for the new account. We do this by using the `suggest_brain_key` command:
And the response should be something similar to this:
So in this example:
the public key is `TEST78CuY47V...WPr1zRL5`
the private key is `5JDh3XmH...9idNisYnE`
and let's assume our new account will be called `alpha`
Copy those keys as we will need them soon.
Your public and private keys will be different (as the result of the `suggest_brain_key` command is random), so make sure you use those. Also, you are free to choose any name other than `alpha`.
The `register_account` command allows you to register an account using only a public key.
Make sure you replace `TEST78CuY4...WPr1zRL5` with your version of it.
The new account has been created, but it's not in your wallet at this stage. We need to import it using the `import_key` command and `alpha`'s private key:
Make sure you replace `5JDh3XmH...9idNisYnE` with your version of it.
As a final step, we will transfer some money from `nathan` to `alpha`. For that we use the `transfer` command:
The text `here is some cash` is an arbitrary memo you can attach to a transfer. If you don't need a memo, just use `""` instead.
And now you can verify that `alpha` has indeed received the money:
If you want to set up a second node (with the same genesis file) and connect it to the first node, use the p2p-endpoint of the first node as the seed-node for the second. Below are example settings.
In the first node's `config.ini`:
In the second node's `config.ini`:
We set the first node's `p2p-endpoint` as the second node's `seed-node`.
Lastly, the same witness IDs can be used in the second node, but the keys used for block production must be different. This allows you to swap block production between the two nodes by updating the witness accounts' signing keys (with `update_witness`).
Another option is to use different witnesses on each node so that block production alternates between the nodes. The log output of each node should show blocks received from the other node (i.e., `got_block`).
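As a sketch of the two-node wiring described above (the addresses and ports are examples only; the option names are the ones used in this section):

```ini
# First node's config.ini
p2p-endpoint = 0.0.0.0:1776

# Second node's config.ini -- its seed-node points at the first node's p2p-endpoint
p2p-endpoint = 0.0.0.0:1777
seed-node = 127.0.0.1:1776
```

If both nodes run on the same machine, remember they also need separate data directories and RPC endpoints.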
This document explains the Couch Potato server installation as both "The long way" and "The short way".
The document does assume some prior knowledge of simple web server installation.
The Couch Potato front-end is an Ionic web application using the Angular framework. It interfaces with the back-end through a PHP API that provides connectivity to a MySQL database.
Although the diagram above shows an Apache server, other servers compatible with PHP and MySQL could be used, such as Nginx.
These short steps show how to install all components and dependencies.
There are several open source PHP stacks readily available that are by far the easiest way of getting set up and include MySQL as well. The most popular are WAMP and LAMP.
These stacks, and installation instructions, are readily available for download from many sources and won't be covered any further here.
At the time of writing the PHP and SQL versions used are:
PHP 7.2
MySQL 5.7
After installing PHP and MySQL, if your versions aren't at least as new as these, they need to be updated.
The process for updating varies according to the operating system, but if PHP and MySQL were installed as a stack in Step 1, such as LAMP, then as long as that was the most recent version there shouldn't be an issue with old versions of PHP or MySQL.
The Couch Potato API requires additional libraries that either aren't part of the standard installation and need to be loaded, or are included in PHP but haven't been enabled.
This is the PHP->MySQL library used by the API. For installation instructions see:
A script will be provided to all new Couch Potato operators that will create the database schema and pre-populate the database with all the starting data.
Important: After the script is run you should have a new database schema called `couch_potato`. To avoid any issues or additional configuration changes, don't change the database name.
Run the script on the MySQL database instance created in the previous steps.
The PHP API must be updated with the correct database connection credentials that were used to create the database. To do this some environment variables need to be changed as follows:
Open the `.env` file from the root location where the PHP API was loaded.
Next, update the `DB_HOST`, `DB_USER`, and `DB_PASS` values to those of your database.
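For example, the relevant lines of the `.env` file might look like this (the values shown are placeholders for your own database host and credentials; the variable names are the ones listed above):

```ini
DB_HOST=localhost
DB_USER=couchpotato
DB_PASS=change-me
```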
Installing the web components is very simple but does depend on where you're planning to host the website.
So assuming you have a suitable web server/directory already set up, copy all the files from the `www` folder to the web server / webapp folder, or copy the entire `www` folder.
The `www` folder can be found here:
TBD
There is one additional dependency that needs to be added to the web server to support the cryptography library that Couch Potato uses.
From the console run:
To configure the application to use the API and other options, open the `config.json` file from the `www/assets` folder.
In this file you can change the following attributes:
Memory | Storage | OS |
---|---|---|
32GB | ~300GB | Ubuntu 18.04 |

OS | Docker | git |
---|---|---|
Ubuntu 18.04 | 20.10.8 or higher | 2.33.0 or higher |
Name | Constraints | Placeholder Text |
---|---|---|
User Name | Max Length: 24, Min Length: 8 | User name |
Password | Max Length: 40, Min Length: 8 | Password |

Exception | Error Message |
---|---|
No user name | Username not entered |
No password | Password not entered |
Password or user name is invalid | Invalid username or password |
Node Type | CPU | Memory | Storage | Bandwidth | OS |
---|---|---|---|---|---|
Witness | 4 Cores | 16GB | 100GB SSD | 1Gbps | Ubuntu 18.04 |
Name | Description |
---|---|
api_url | The URL for the Couch Potato API, see above steps. |
notifications: delay | The time in milliseconds at which the game notifications are refreshed. |
notifications: start | The number of hours that the notifications report back. For example, 36 means that the notifications will report on any games that are up to three days old. |
notifications: end | The number of hours that the notifications report ahead. For example, 240 means that the notifications will report on any games that are up to 10 days away. |
title1 | For white-labelling purposes the title can be customized to any text |
title2 | See title1 |
logolarge | For white-labelling purposes the large logo can be changed to any valid URL |
logosmall | Same as logolarge |
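Putting these attributes together, a `config.json` might look like the following sketch. All values are examples only, and the nesting of the notifications attributes is an assumption; the attribute names themselves come from the list above:

```json
{
  "api_url": "https://example.com/couch-potato/api",
  "notifications": {
    "delay": 60000,
    "start": 36,
    "end": 240
  },
  "title1": "Couch",
  "title2": "Potato",
  "logolarge": "https://example.com/assets/logo-large.png",
  "logosmall": "https://example.com/assets/logo-small.png"
}
```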
Caption | Type | Action |
---|---|---|
LOGIN | Button | Validate user name and password and then open the Dashboard |
Create Account | Text | Open the Create Account screen |
The create account screen is opened from the Home Page and is the screen where every new account is created/registered.
Captions
Inputs
Actions
Validation
Note: For the first release there will be no additional validation on the password format for strength or special characters etc. The only constraint is that the length must be >=8 and <= 40 characters.
Data proxies are a concept introduced in the Peerplays blockchain to anonymize input data feeds. For now, we are using them to anonymize sports data feeds in a decentralized manner. This document provides setup instructions.
We assume Ubuntu 18.04 LTS, with Python 3.x available by default, as the supported versions.
Prepare the servers
Update and upgrade the server with the correct packages
Get the code
The data proxy configuration file should be at the top level folder, i.e. bos-dataproxy-legacy. The sample file `config-example.yaml` can be copied to `config-dataproxy.yaml`.
Install the necessary packages by running `run_dev_server.sh`. This calls `setup.sh` internally to assemble everything.
PS: for production this will be `run_production_server.sh`.
These steps will install the necessary packages. Now, for every feed provider, we will have to use the Provider functionality and write wrappers to fetch the data. Once the data is fetched, it will be normalized and passed on to the BOS component.
Running the dev environment using the following command starts the general environment.
For example, using Scorespro as a provider:
Make sure that the configuration file `config-dataproxy.yaml` is in the working directory and run the following command. The script is a wrapper; an example configuration file is provided.
result:
Code for a few other data feed providers is available on request.
For additional providers, we need to pull or receive push data and then write a wrapper to handle it.
We can use the `screen` command to start the above commands in daemon mode.
To be added. Not mandatory unless the status needs to be published.
The components of the dashboard are:
There is no limit on the number of sports tabs that can be created. If the tabs reach the horizontal limit of the application, they will stack into multiple rows. Realistically, there should never be so many sports enabled at any one time as to cause the tabs to be stacked.
Important: The sports tabs must be 100% configurable through the database only. Sports must be added or removed without any code changes.
Clicking on any unselected tab will:
Update the Leagues Tabs to show only the leagues associated with the selected sport.
Change the calendar display to show only events for the selected sport and league.
By default, when a new sports tab is selected the league will default to the first one in the list.
Note: There is no restriction on the icons/images to be used for each sport, but logically they should reflect the sport!
The dashboard is the main screen and is opened from the Home Page as soon as the user is logged in.
The sports tabs run horizontally across the dashboard and display one tab for each sport that is enabled. The tabs are dynamic and configured through the MySQL database table.
The order the sports tabs are displayed in is defined by their `id` value in the table.
Text | Type | Comments |
---|---|---|
Create New Account | Static | |
Data Proxy | Static | |
[proxy name] | Dynamic | Value set in config-dataproxy.json |
*Required fields | Static | |
Name | Constraints | Placeholder Text |
---|---|---|
User Name | Max Length: 24, Min Length: 8 | User name |
Password | Max Length: 40, Min Length: 8 | Password |
Confirm Password | Max Length: 40, Min Length: 8 | Confirm Password |
Caption | Type | Action |
---|---|---|
REGISTER | Button | Validate all fields and then return to the Home Page |
X | Image | Close the screen without adding a new account and return to the Home Page |
Exception | Error Message |
---|---|
No user name | Username not entered |
No password | Password not entered |
Password too short | Password must be at least 8 characters |
No confirm password | Confirm password not entered |
Password and confirm password not the same | Password and Confirm Password are different |
```shell
apt update ; apt install -y gunicorn mongodb-server virtualenv python3 python3-dev git htop mosh openssl libssl-dev
```

```shell
git clone git@github.com:PBSA/bos-dataproxy-legacy.git
```

```shell
cd bos-dataproxy-legacy; source run_dev_server.sh
```

```shell
source provider_service.sh
```

```shell
python provider_service.py scorespro run_here
```

```text
(env) root@208e204975be:~/work/code/BOS/l/bos-dataproxy-legacy# python provider_service.py scorespro run_here
Setting up logger handling for dataproxy...
... done
2019-08-07 16:51:50,560 INFO dataproxy: Custom config has been loaded from working directory: config-dataproxy.yaml
2019-08-07 16:51:50,708 INFO bos_incidents: Custom config has been loaded ;config-defaults.yaml;incident_storage_config.yaml
Setting up logger handling for dataproxy...
... done
2019-08-07 16:52:03,106 INFO dataproxy.provider.scorespro.pap_task: Sport SOC found and added to history watch
```
Caption | Type | Action |
---|---|---|
[Sport] | Text | Change calendar and leagues to selected sport. |

Text | Type | Comments |
---|---|---|
[sports name] | Dynamic | Value set in the Sports table |
[icon] | Dynamic | The icon itself must exist in the corresponding path and name defined in the icon column of the Sports table |
The replay screen is displayed by clicking on the REPLAY button on the Dashboard header.
The purpose of the Replay feature is to give the user a manual way to send, or re-send, game create incidents to all of the BOS endpoints if for any reason they weren't correctly sent before.
Normally this feature shouldn't need to be used very often as a create incident is automatically sent every time a game is created. But there could be occasions when the application correctly records a game as being created but the information isn't recorded by the BOS nodes. If that happened then running a Replay will 'flush' all the games between the start and end dates and send create incidents to the BOS nodes a second time.
Important: The Replay feature can only be used for games that are not yet started. Once a game is started a new create incident would be ignored.
Sports and leagues can be selected individually using check-boxes, or all sports and leagues can be selected or de-selected using the Select All checkbox/toggle.
The range of data to be replayed will be set from the Start and End fields.
Captions
Inputs
Actions
Validation
The calendar component is the main 'engine' of the application. It's here that the user will navigate through, enter and select new games.
The calendar will dynamically create a month plan for each month selected using the forward (>) and backward (<) selectors. There will be no upper or lower limits for the first release.
Every time the calendar changes from December -> January or January <- December the year will change accordingly.
The numbers of days will be adjusted for each month and take into account leap years.
Weekdays will be displayed as Sunday -> Saturday.
The current day should be a different colour and larger than the other days.
Moving the cursor over any day cell will highlight it.
If a day has at least one game scheduled then the crest for the league associated with the current calendar will be shown in that day cell.
If a day has at least one game scheduled then a badge for the total number of games will be shown in that day cell.
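The month-layout rules above (days per month, leap years, and the weekday of the first day) map directly onto Python's standard `calendar` module; a minimal sketch:

```python
import calendar

# monthrange() returns (weekday of day 1, number of days in the month)
# and handles leap years automatically.
first_weekday, num_days = calendar.monthrange(2024, 2)  # February in a leap year
print(num_days)                          # 29
print(calendar.monthrange(2023, 2)[1])   # 28
```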
Text/Image | Type | Comments |
---|---|---|
Data Replay | Static | |
Select All | Static | |
[sport] | Dynamic | All sport names in the sport table |
[league] | Dynamic | All league names associated with the selected sport taken from the leagues table |
[sport icon] | Dynamic | Applicable sport icon in the sport table |
[league icon] | Dynamic | Applicable league icon in the leagues table |
Start: | Static | |
End: | Static | |

Name | Type | Constraints |
---|---|---|
Select All | Checkbox | |
[sport] | Checkbox | |
[league] | Checkbox | |
Start | List | Valid date from list |
End | List | Valid date from list |

Caption | Type | Action |
---|---|---|
REPLAY ⤵ | Button | Start the Replay |
X | Image | Close the screen. |

Exception | Error Message |
---|---|
No start date | Start date not entered |
No end date | End date not entered |
End date before start date | End date is before the Start date |
Text | Type | Comments |
---|---|---|
[Month] | Dynamic | Changes according to the selected month. |
[Year] | Dynamic | Changes according to the movement of the month. |
Day Names | Static | Sunday through Saturday |
[Day Number] | Dynamic | Generated according to the number of days in the month and the day name of the first day |
[League Crest] | Dynamic | Depends on the sport and league for the calendar at the time. |
[Game Counter] | Dynamic | The number of games for the date, sport, league in any given day cell. |
Caption | Type | Action |
---|---|---|
< | Text | Move month backward |
> | Text | Move month forward |
[Day Cell] | Button | Open the Game Selector for the selected day |
The Game Selector is opened by clicking on the day cell of any calendar. The Game Selector is the engine behind all of the game incidents that are created and then posted to BOS.
The Game Selector is used both for creating new games/matches and for moving each game through the following standard incident workflow: create -> in_progress -> result -> finished.
The selector can also be used to Cancel or Delete games.
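The workflow and the incident sent at each step can be modeled in a few lines. This is an illustrative sketch, not the actual Couch Potato code; it encodes the rules described in this section (cancel only from Not Started or In Progress, result immediately followed by finish):

```python
# Illustrative model of the game workflow and the incidents pushed to BOS.
class Game:
    def __init__(self):
        self.status = "Not Started"
        self.incidents = ["create"]      # a create incident is sent on creation

    def start(self):
        assert self.status == "Not Started"
        self.status = "In Progress"
        self.incidents.append("in_progress")

    def finish(self, home_score: int, away_score: int):
        assert self.status == "In Progress"
        self.status = "Finished"
        # result is immediately followed by finish: scores can't be corrected later
        self.incidents += ["result", "finish"]

    def cancel(self):
        # cancelling is only allowed while Not Started or In Progress
        assert self.status in ("Not Started", "In Progress")
        self.status = "Canceled"
        self.incidents.append("canceled")

game = Game()
game.start()
game.finish(2, 1)
print(game.incidents)  # ['create', 'in_progress', 'result', 'finish']
```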
The game selector header displays the following information:
Captions
Actions
To add a new game, use the input fields at the bottom of the screen and then click on the ADD button.
Captions
Inputs
Actions
Validation
Note: There is no validation to stop the same game from being created twice. The reason for this is because it's common in certain sports to have 'double-headers' where two teams play each other more than once in a day.
Note: There is no validation to stop a game start time from being in the past. This is because game start times do change, and it may be necessary to start a game in the selector that has already started in real time.
Each new game will have its status set to Not Started.
A `create` incident will be pushed to the BOS instances.
To start a game, click on the Start button next to the game. The game status will then change to In Progress.
Actions
An `in_progress` incident will be pushed to the BOS instances with the `whistle_start_time` set to the time when the Start button was clicked.
To finish a game, enter the score for both home and away teams and click on the Finish button next to the game. The game status will then change to Finished.
Inputs
Actions
A `result` incident followed by a `finish` incident will be pushed to the BOS instances, with the `whistle_end_time` set to the time when the Finish button was clicked, and the result set to the home score and away score values.
Note: It's not possible to correct scores and re-send them to BOS. For this reason, the `finish` incident is sent immediately after the `result` incident as a result of just clicking on the Finish button.
Any game can be cancelled as long as it's either Not Started or In Progress.
To cancel a game, click on the Cancel text next to the game.
A confirmation message will be shown.
Click on Yes to cancel the game (the game status will then change to Canceled) or No to return without canceling.
A `canceled` incident will be sent to BOS.
Tip: for the purposes of BOS incidents, 'canceled' can also be interpreted as postponed, but not as delayed. A delayed game is expected to restart, but once a game has been canceled it can't be restarted. If a game is canceled and then played the following day, it would have to be re-created with the new start time.
A game can only be deleted if it hasn't been started (has a status of Not Started).
To delete a game, click on the Delete text next to the game.
A confirmation message will be shown.
Click on Yes to delete the game (the game will be removed) or No to return without deleting.
If a game is deleted, then a `canceled` incident must also be sent to BOS so that BOS can tag the game in the same way as a canceled game.
Note: The difference between a canceled game and a deleted game is that a deleted game is basically a game that was entered in error; once deleted, it is removed from the database so it can be re-entered correctly if needed. A canceled game is a proper game that, for one reason or another, doesn't take place after being created correctly.
The selector grid is where all games are recorded as they get entered and moved through the workflow.
The selector grid is made up as follows:
Text/Image | Type | Comments |
---|---|---|
[league logo] | Dynamic | The logo of the selected league |
[league name] | Dynamic | The name of the selected league |
[date] | Dynamic | The date of the games. |

Caption | Type | Action |
---|---|---|
X | Button | Close the game selector. |
Text/Image | Type | Comments |
---|---|---|
Start | Static | |
Home Team | Static | |
Away Team | Static | |

Name | Type | Constraints |
---|---|---|
Start | Date Selector | Any valid date |
Home Team | Drop Down selector | Drop down list of all teams associated with the selected sport and league. |
Away Team | Drop Down selector | Drop down list of all teams associated with the selected sport and league. |

Caption | Type | Action |
---|---|---|
ADD + | Button | Add the game to the list of created games. |

Exception | Error Message |
---|---|
No start time | Start time not entered |
Home Team | No home team selected |
Away Team | No away team selected |
Home Team and Away Team must be different | Teams must be different |
Caption | Type | Action |
---|---|---|
Start | Button | Start the selected game. |

Name | Type | Constraints |
---|---|---|
Home Score | Text Box | Numeric, max 999 |
Away Score | Text Box | Numeric, max 999 |

Caption | Type | Action |
---|---|---|
Finish | Button | Finish the selected game and record the score. |
Column | Type | Description |
---|---|---|
Start | Text | Start time of the game |
Game | Text | Home team v Away team with logos |
Home Score | Input | The home team score. |
Away Score | Input | The away team score. |
Actions | Button/Hyperlinks | Changes according to the status of a game. Available options are: Start, Finish, Cancel, Delete |
Status | Caption | The status of the game, one of: Not Started, In Progress, Finished, Cancelled |
The Change Password screen is opened by clicking on the Change Password menu item in the account menu.
Captions
Inputs
Actions
Validation
Note: For the first release there will be no additional validation on the password format for strength or special characters etc. The only constraint is that the length must be >=8 and <= 40 characters.
Text | Type | Comments |
---|---|---|
Change Password | Static | |
Required fields* | Static | |

Name | Constraints | Placeholder Text |
---|---|---|
Current Password | Max Length: 40, Min Length: 8 | Current Password |
New Password | Max Length: 40, Min Length: 8 | New Password |
Confirm Password | Max Length: 40, Min Length: 8 | Confirm Password |

Caption | Type | Action |
---|---|---|
CHANGE PASSWORD | Button | Validate all fields, then update the password and return to the Dashboard |
X | Image | Close the screen without changing the password and return to the Dashboard |

Exception | Error Message |
---|---|
No current password | Current password not entered |
Current password is wrong | Current password is incorrect. |
No new password | New password not entered |
New password too short | New password must be at least 8 characters |
No confirm new password | Confirm new password not entered. |
Confirm new password too short | Confirm new password must be at least 8 characters |
New and confirm new don't match | New password and confirm new password are different |
The Couch Potato API is a RESTFul API.
A RESTful API -- also referred to as a RESTful web service or REST API -- is based on representational state transfer (REST) technology, an architectural style and approach to communications often used in web services development.
This documentation assumes prior knowledge of using (consuming) a REST API.
Each API endpoint is carefully described in the API Reference, along with code examples written in TypeScript. Success and error messages are returned by all API calls and are documented in the Objects and Error Codes sections of this document.
For easy reference, client side error codes (400) are given sub-codes as well.
All the functionality of the Couch Potato web application can be reproduced purely through the API, making it easy to create your own applications and data proxies.
API calls that change data or create and post BOS incidents (triggers) also update the Couch Potato MySQL database so that the changes are reflected in the web application if it's being used.
This allows for a kind of hybrid development and implementation of Couch Potato. For example, you could use your own interface combined with the `add_game` API to create all new games and send the trigger to BOS, but then use the Couch Potato web application to start games and add scores.
The most important API calls to become familiar with are the five core functions that create BOS messages and send them as triggers to BOS. These five calls are:
* `add_game` - adds a new game and sends a `create` trigger to BOS.
* `start_game` - starts a game and sends an `in_progress` trigger to BOS.
* `add_score` - adds scores and sends a `result` trigger to BOS.
* `finish_game` - finishes a game and sends a `finish` trigger to BOS.
* `cancel_game` - cancels a game that is either not started or in progress and sends a `canceled` trigger to BOS.
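As a sketch of how these five calls might be wrapped client-side: the base URL and payload fields below are illustrative assumptions, so consult the API Reference for the real endpoint paths and parameters.

```python
# Hypothetical base URL -- replace with your own Couch Potato API location.
API_URL = "https://example.com/couch-potato/api"

TRIGGER_CALLS = ("add_game", "start_game", "add_score", "finish_game", "cancel_game")

def build_request(call: str, **params) -> dict:
    """Build the URL and JSON payload for one of the five trigger calls."""
    if call not in TRIGGER_CALLS:
        raise ValueError(f"unknown API call: {call}")
    return {"url": f"{API_URL}/{call}", "json": params}

req = build_request("add_game", home="TeamA", away="TeamB",
                    start="2021-06-01T18:00:00")
print(req["url"])  # https://example.com/couch-potato/api/add_game
```

The sketch only constructs requests; sending them with your HTTP client of choice (and handling the documented error sub-codes) is left to your application.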