Version: testnet (v0.73)

Export network history as CSV

Export CSV table data from network history between two block heights.

The requested block heights must fall on network history segment boundaries, which can be discovered by calling the API to list all network history segments. By default segments contain 1000 blocks. In that case ranges such as (1, 1000), (1001, 2000), (1, 3000) would all fall on segment boundaries and be valid.

The generated CSV file is compressed into a ZIP file and returned, with the file name in the following format: [chain id]-[table name]-[start block]-[end block].zip

In gRPC, results are returned in a chunked stream of base64 encoded data.

Through the REST gateway, the base64 data chunks are decoded and streamed as a content-type: application/zip HTTP response.
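
As an illustration of the REST flow described above, the following Python sketch requests an export through the REST gateway and streams the ZIP response to disk. The data node host and the endpoint path are assumptions made for the example and should be taken from your data node's REST documentation; only the query parameter names (fromBlock, toBlock, table) come from this page.

    import requests

    # Assumed values for illustration only: substitute your data node's address and
    # confirm the endpoint path against its REST documentation.
    DATA_NODE = "https://your-data-node.example.com"
    EXPORT_PATH = "/api/v2/networkhistory/export"  # hypothetical path

    params = {
        "fromBlock": 1,          # first block of a history segment
        "toBlock": 3000,         # last block of a history segment
        "table": "TABLE_ORDERS",
    }

    # The gateway streams a Content-Type: application/zip response, so write it to a
    # file in chunks rather than buffering the whole export in memory.
    with requests.get(DATA_NODE + EXPORT_PATH, params=params, stream=True, timeout=300) as resp:
        resp.raise_for_status()
        with open("orders-1-3000.zip", "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)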

The exported CSV data uses a comma as the DELIMITER between fields, and the double quote character (") as the QUOTE character for fields.

If a value contains any of: DELIMITER, QUOTE, carriage return, or line feed, then the whole value is prefixed and suffixed by the QUOTE character, and any occurrence of a QUOTE character within the value is preceded by another QUOTE character.

A NULL is output as the NULL parameter string and is not quoted, while a non-NULL value matching the NULL parameter string is quoted.

For example, with the default settings, a NULL is written as an unquoted empty string, while an empty string data value is written with double quotes.

Note that CSV files produced may contain quoted values containing embedded carriage returns and line feeds. Thus the files are not strictly one line per table row like text-format files.

The first row of the CSV file is a header that describes the contents of each column in subsequent rows.
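
To make these quoting rules concrete, here is a minimal Python sketch that reads one of the exported CSV files with the standard csv module, which copes with the comma delimiter, double-quote quoting, and embedded carriage returns or line feeds described above. The filename is a placeholder.

    import csv

    # Placeholder filename following the pattern described below:
    # [chain id]-[table name]-[schema version]-[start block]-[end block].csv
    with open("mainnet-sometable-1-000001-002000.csv", newline="") as f:
        reader = csv.reader(f, delimiter=",", quotechar='"')
        header = next(reader)  # the first row names each column
        for row in reader:
            # Note: csv.reader returns both an unquoted empty field (NULL) and a
            # quoted empty string ("") as "", so code that must distinguish them
            # needs to inspect the raw data.
            record = dict(zip(header, row))
            print(record)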

Usually the ZIP file will contain only a single CSV file. However, it is possible that the (from_block, to_block) request spans a range of blocks in which the underlying database schema changes. For example, a column may have been added, removed, or renamed.

If this happens, the CSV file will be split at the point of the schema change and the ZIP file will contain multiple CSV files, with a potentially different set of headers. The 'version' number of the database schema is included in the CSV filename:

[chain id]-[table name]-[schema version]-[start block]-[end block].csv

For example, a zip file might be called mainnet-sometable-000001-003000.zip

And contain two CSV files: mainnet-sometable-1-000001-002000.csv:

    timestamp, value
    1, foo
    2, bar

And mainnet-sometable-2-002001-003000.csv:

    timestamp, value, extra_value
    3, baz, apple

It is worth noting that the schema will not change within a single network history segment.
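
As a sketch of handling a schema change within the requested range, the snippet below (reusing the ZIP file name from the earlier download example) lists the CSV members of the archive and extracts the schema version, start block, and end block from each member's filename.

    import re
    import zipfile

    # Member names end with [schema version]-[start block]-[end block].csv
    NAME_RE = re.compile(r"(?P<version>\d+)-(?P<start>\d+)-(?P<end>\d+)\.csv$")

    with zipfile.ZipFile("orders-1-3000.zip") as archive:
        for name in archive.namelist():
            m = NAME_RE.search(name)
            if not m:
                continue
            print(f"{name}: schema version {m.group('version')}, "
                  f"blocks {int(m.group('start'))}-{int(m.group('end'))}")
            # Each member should be parsed separately, since the headers can
            # differ between schema versions.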

Query Parameters
    fromBlock int64

    Block to begin exporting from. Must be the first block of a history segment; segments are 1000 blocks each by default, in which case 1, 1001, 2001, etc. are valid values. This can be checked by first calling the API to list all network history segments.

    toBlock int64

    Last block to export, up to and including. Must be the last block of a history segment; segments are 1000 blocks each by default, in which case 1000, 2000, 3000, etc. are valid values. This can be checked by first calling the API to list all network history segments (see the boundary-check sketch after this parameter list).

    table string

    Possible values: [TABLE_UNSPECIFIED, TABLE_BALANCES, TABLE_CHECKPOINTS, TABLE_DELEGATIONS, TABLE_LEDGER, TABLE_ORDERS, TABLE_TRADES, TABLE_MARKET_DATA, TABLE_MARGIN_LEVELS, TABLE_POSITIONS, TABLE_LIQUIDITY_PROVISIONS, TABLE_MARKETS, TABLE_DEPOSITS, TABLE_WITHDRAWALS, TABLE_BLOCKS, TABLE_REWARDS]

    Default value: TABLE_UNSPECIFIED

    Table to export data from.
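
Since misaligned ranges are rejected, it can be worth sanity-checking the requested range locally before calling the API. The boundary-check sketch below assumes the default segment size of 1000 blocks; the authoritative boundaries come from the API that lists all network history segments, as the segment size may differ from the default.

    SEGMENT_SIZE = 1000  # default; confirm via the network history segments API

    def is_valid_range(from_block: int, to_block: int, segment_size: int = SEGMENT_SIZE) -> bool:
        """Check that (from_block, to_block) falls on history segment boundaries."""
        starts_segment = from_block % segment_size == 1  # 1, 1001, 2001, ...
        ends_segment = to_block % segment_size == 0      # 1000, 2000, 3000, ...
        return starts_segment and ends_segment and from_block < to_block

    assert is_valid_range(1, 3000)
    assert not is_valid_range(500, 1500)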

Responses

A successful response. (streaming responses)


Schema
    error object
    code int32
    details object[]
  • Array [
  • @type string

    A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one "/" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration). The name should be in a canonical form (e.g., leading "." is not accepted).

    In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http, https, or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows:

    • If no scheme is provided, https is assumed.
    • An HTTP GET on the URL must yield a google.protobuf.Type value in binary format, or produce an error.
    • Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.)

    Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com.

    Schemes other than http, https (or the empty scheme) might be used with implementation specific semantics.

  • ]
  • message string
    result object

    Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page.

    This message can be used both in streaming and non-streaming API methods in the request as well as the response.

    It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also want access to the raw HTTP body.

    Example:

    message GetResourceRequest {
      // A unique request id.
      string request_id = 1;

      // The raw HTTP body is bound to this field.
      google.api.HttpBody http_body = 2;
    }

    service ResourceService {
      rpc GetResource(GetResourceRequest)
          returns (google.api.HttpBody);
      rpc UpdateResource(google.api.HttpBody)
          returns (google.protobuf.Empty);
    }

    Example with streaming methods:

    service CaldavService {
      rpc GetCalendar(stream google.api.HttpBody)
          returns (stream google.api.HttpBody);
      rpc UpdateCalendar(stream google.api.HttpBody)
          returns (stream google.api.HttpBody);
    }

    Use of this type only changes how the request and response bodies are handled; all other features continue to work unchanged.

    contentType string

    The HTTP Content-Type header value specifying the content type of the body.

    data byte

    The HTTP request/response body as raw binary.

    extensions object[]

    Application specific response metadata. Must be set in the first response for streaming APIs.

  • Array [
  • @type string

    A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one "/" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration). The name should be in a canonical form (e.g., leading "." is not accepted).

    In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http, https, or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows:

    • If no scheme is provided, https is assumed.
    • An HTTP GET on the URL must yield a google.protobuf.Type value in binary format, or produce an error.
    • Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.)

    Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com.

    Schemes other than http, https (or the empty scheme) might be used with implementation specific semantics.

  • ]
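
For the gRPC form of the response described above, each streamed message is an HttpBody whose data field carries a base64 encoded chunk of the ZIP file. The following sketch concatenates those chunks into a file; the stub and method names in the usage comment are hypothetical, as they come from generated client code rather than this page.

    import base64

    def write_export(stream, path: str) -> None:
        """Concatenate streamed HttpBody chunks into a single ZIP file.

        `stream` is an iterator of streamed response messages from the export RPC,
        as produced by generated gRPC client code.
        """
        with open(path, "wb") as out:
            for chunk in stream:
                # As described above, each chunk's data is base64 encoded.
                out.write(base64.b64decode(chunk.data))

    # Hypothetical usage with generated stubs (names are illustrative only):
    # stream = stub.ExportNetworkHistoryAsCsvFile(request)
    # write_export(stream, "export.zip")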