Data Providers

As a data provider, you can publish your data sets to the Enigma Data Marketplace and be rewarded in ENG tokens. You set the price of a monthly subscription to each of your data sets. The Enigma Data Marketplace is governed by a smart contract that brokers all transactions in the marketplace.

Data Sets

A data set may contain any data which fits in a tabular format. However, since data will be consumed by Catalyst for the purpose of algorithmic trading, it must have some predictable properties. All data sets must be structured as time series, which must contain these two columns:

  • Date: The date/time of the event corresponding to each row of the data set.
  • Symbol: The symbol of the currency or market associated with each event. If an event involves more than one currency or market, multiple entries can be provided, separated by semicolons.

Here are the applicable data type conventions:

  • String: String (or text) data must be surrounded with double-quotes.

  • Date/Time: Both date and date/time types are supported. The date fields should be represented as a String following the ISO 8601 format using the appropriate level of precision. Valid examples include:

    • "2017-11-01" or "2017-11-1" (for November 1st, 2017)
    • "2017-12-14 23:00" (for December 14th, 2017 at 11:00PM)

    Dates should default to UTC for the timezone.

  • Numbers: Numbers should be provided as integers or floats, without quotes. Use as many decimals as necessary.
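
As a rough illustration of these conventions, here is a minimal sketch (not part of the Catalyst client; the extra columns and values are hypothetical) that writes a small conforming CSV file in Python, with ISO 8601 dates, double-quoted strings, and unquoted numbers:

import csv

# Hypothetical rows following the conventions above: ISO 8601 dates,
# double-quoted strings, and unquoted numbers.
rows = [
    {'date': '2017-12-14 23:00', 'symbol': 'btc', 'note': 'all-time high', 'price_usd': 16500.25},
    {'date': '2017-12-15 23:00', 'symbol': 'eth', 'note': 'steady', 'price_usd': 685.0},
]

with open('sample.csv', 'w', newline='') as f:
    writer = csv.DictWriter(
        f,
        fieldnames=['date', 'symbol', 'note', 'price_usd'],
        quoting=csv.QUOTE_NONNUMERIC,  # quote strings, leave numbers unquoted
    )
    writer.writeheader()
    writer.writerows(rows)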

This sample data set contains market cap information:

date                       symbol  day_volume_usd  available_supply  market_cap_usd
2018-01-22 00:42:00+00:00  btc     10010200064     16817962          197321785344
2018-01-22 00:42:00+00:00  eth     3380800000      97128896          103240245248
2018-01-22 00:42:00+00:00  xrp     2935330048      38739144704       54674878464
2018-01-22 00:42:00+00:00  bch     737806976       16924476          30479964160
2018-01-22 00:42:00+00:00  ada     822998976       25927069696       15821813760
2018-01-22 00:42:00+00:00  ltc     465356000       54861960          10571514880
2018-01-22 00:42:00+00:00  xem     99484704        8999999488        9715949568
2018-01-22 00:42:00+00:00  neo     355087008       65000000          8795799552

Data Frequency and Availability

Fields describing the data frequency and availability must be provided when registering each data set, as per the scheme outlined below:

  • Data frequency: How frequently do data events occur?

    • daily: Publish one event each day at a set time.
    • hourly: Publish one event each hour between a set range of minutes.
    • minute: Publish one event each minute between a set range of seconds.
  • Data Availability:

    • Historical: The dataset includes historical data.
    • Live: Data events will be published at the specified frequency on an ongoing basis.

Registering Data Sets

To register a new data set, download and install the Catalyst client. Then, use the catalyst marketplace register command. In this example, a data set named test is registered with a daily frequency and includes both historical and live data:

$ catalyst marketplace register
Enter the name of the dataset to register: test
Enter the price for a monthly subscription to this dataset in ENG: 10
Enter the data frequency [daily, hourly, minute]: daily
Does it include historical data? [default: Y]:
Does it include live data? [default: Y]:

Publishing Historical Data

To upload data to a registered data set, use the catalyst marketplace publish command:

$ catalyst marketplace publish --dataset=test --datadir=~/test-data/

Upon execution, Catalyst will automatically identify, validate and upload the data in all CSV files directly inside the specified datadir. It will not scan recursively.

The file naming convention is inconsequential; Catalyst will process any file with a CSV extension. As long as the data is correctly represented, it can be contained in one file or split across multiple files.
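
For example (with hypothetical file names), the data directory might look like this before publishing:

$ ls ~/test-data/
marketcap-2018-01.csv  marketcap-2018-02.csv  marketcap-2018-03.csv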

On error, Catalyst simply stops and displays the error on standard output. It does not roll back files that have already been published.

Publishing Live Data

Publishing live data works the same way as publishing historical data: run the same command every time new data is available:

$ catalyst marketplace publish --dataset=test --datadir=~/test-data/
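
One way to automate this is to schedule the publish command. The crontab entry below is only a sketch, assuming a daily data set and that the catalyst client is on the cron user's PATH:

# Publish new data every day at 00:15 (hypothetical schedule)
15 0 * * * catalyst marketplace publish --dataset=test --datadir=$HOME/test-data/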

Withdraw Tokens Owed

As a publisher, you are entitled to receive ENG tokens from those who have subscribed to your dataset, in accordance with the subscription cycle. You can check how many ENG tokens are available for withdrawal at any given moment:

$ catalyst marketplace get_withdraw_amount --dataset=test

To carry out the withdrawal transaction, run this command:

$ catalyst marketplace withdraw --dataset=test

Publishers’ API

To facilitate automating the publication of live data, the Data Marketplace provides the following Application Programming Interface (API).

Base URL:


The endpoint requires an API key/secret pair. Use the catalyst client to publish data once manually, and it will generate the key/secret pair for you and store it at $HOME/.catalyst/data/marketplace/addresses.json, from where you can retrieve it for programmatic use.
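
For example, a minimal sketch for loading that file programmatically (the exact structure of the JSON is not documented here, so inspect the output to locate the key/secret associated with your address):

import json
import os

# Path where the catalyst client stores the generated API key/secret pair
path = os.path.join(
    os.path.expanduser('~'), '.catalyst', 'data', 'marketplace', 'addresses.json')

with open(path) as f:
    addresses = json.load(f)

# Inspect the contents to find the key/secret for your address
print(json.dumps(addresses, indent=2))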

In your API request, you have to include the following HTTP headers:
(example code provided in Python)

import time
import hashlib
import hmac

def get_signed_headers(dataset, key, secret):
    # Monotonically increasing nonce: milliseconds since the epoch
    nonce = str(int(time.time() * 1000))

    # Keyed-hash (HMAC) of the dataset name concatenated with the nonce,
    # keyed with your secret (SHA-512 is assumed here as the digest)
    signature = hmac.new(
        secret.encode('utf-8'),
        '{}{}'.format(dataset, nonce).encode('utf-8'),
        hashlib.sha512,
    ).hexdigest()

    headers = {
        'Sign': signature,
        'Key': key,
        'Nonce': nonce,
        'Dataset': dataset,
    }

    return headers

The nonce must be a monotonically increasing counter (in the example above it is generated from the number of milliseconds since the epoch), and the signature is the keyed-hash (HMAC) of the dataset name concatenated with the nonce, keyed with your secret.
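
For example, with hypothetical key/secret values, the headers can be built and inspected like this:

# Hypothetical values; use the key/secret generated by the catalyst client
headers = get_signed_headers('test', 'my-api-key', 'my-api-secret')
print(headers['Nonce'], headers['Sign'])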

Publish endpoint

The endpoint used for publishing data is located at /marketplace/publish, and it only accepts POST requests. If you visit the endpoint with your browser, you will get a Method Not Allowed error, because the browser retrieves the page with GET by default.

You need to include the files you want to upload in the request as follows:
(example code provided in Python)

import glob
import os

import requests

BASE_URL = ''   # the Data Marketplace base URL (see Base URL above)
dataset = ''    # specify your dataset
key = ''        # specify your key
secret = ''     # specify your secret
datadir = ''    # specify your data folder

# Collect all CSV files directly inside the data folder (not recursive)
filenames = glob.glob(os.path.join(datadir, '*.csv'))

if not filenames:
    raise ValueError('No files to upload.')

files = []
for filename in filenames:
    files.append(('file', open(filename, 'rb')))

# get_signed_headers is defined in the previous example
headers = get_signed_headers(dataset, key, secret)

r = requests.post(
    '{}/marketplace/publish'.format(BASE_URL),
    files=files,
    headers=headers,
)

if r.status_code != 200:
    raise ValueError('Error uploading file: {}'.format(r.status_code))

if 'error' in r.json():
    raise ValueError('Error uploading file: {}'.format(r.json()['error']))

print('Dataset {} uploaded successfully.'.format(dataset))