Avalanche Manifest File
Avalanche SDK is Deprecated
We are no longer supporting @subql/node-avalanche, and all Avalanche users should migrate their projects to use @subql/node-ethereum to receive the latest updates. The new @subql/node-ethereum is feature equivalent, and provides some massive performance improvements and support for new features. The migration effort is easy and should only take a few minutes. You can follow a step-by-step guide here.
Important
We use Ethereum packages, runtimes, and handlers (e.g. @subql/node-ethereum, ethereum/Runtime, and ethereum/*Handler) for Avalanche. Since Avalanche's C-Chain is EVM-compatible, we can use the core Ethereum framework to index it.
The Manifest project.ts file can be seen as the entry point of your project. It defines most of the details of how SubQuery will index and transform the chain data, clearly indicating where we are indexing data from and which on-chain events we are subscribing to.
The Manifest can be in either TypeScript, YAML, or JSON format. With the number of new features we are adding to SubQuery, and the slight differences between each chain that mostly occur in the manifest, the project manifest is now written by default in TypeScript. This means that you get a fully typed project manifest with documentation and examples provided directly in your code editor.
Below is a standard example of a basic project.ts.
import {
  EthereumProject,
  EthereumDatasourceKind,
  EthereumHandlerKind,
} from "@subql/types-ethereum";

// Can expand the Datasource processor types via the generic param
const project: EthereumProject = {
  specVersion: "1.0.0",
  version: "0.0.1",
  name: "avalanche-subql-starter",
  description:
    "This project can be used as a starting point for developing your new Avalanche C-Chain SubQuery project",
  runner: {
    node: {
      name: "@subql/node-ethereum",
      version: ">=3.0.0",
    },
    query: {
      name: "@subql/query",
      version: "*",
    },
  },
  schema: {
    file: "./schema.graphql",
  },
  network: {
    /**
     * chainId is the EVM Chain ID, for Avalanche C-Chain this is 43114
     * https://chainlist.org/chain/43114
     */
    chainId: "43114",
    /**
     * These endpoint(s) should be non-pruned archive nodes
     * Public nodes may be rate limited, which can affect indexing speed
     * When developing your project we suggest getting a private API key
     * We suggest providing an array of endpoints for increased speed and reliability
     */
    endpoint: ["https://avalanche.api.onfinality.io/public/ext/bc/C/rpc"],
    dictionary: "https://dict-tyk.subquery.network/query/avalanche",
  },
  dataSources: [
    {
      kind: EthereumDatasourceKind.Runtime,
      // Contract creation of Pangolin Token https://snowtrace.io/tx/0xfab84552e997848a43f05e440998617d641788d355e3195b6882e9006996d8f9
      startBlock: 57360,
      options: {
        // Must be a key of assets
        abi: "erc20",
        // This is the contract address for the Pangolin Token https://snowtrace.io/token/0x60781C2586D68229fde47564546784ab3fACA982
        address: "0x60781C2586D68229fde47564546784ab3fACA982",
      },
      assets: new Map([["erc20", { file: "./abis/PangolinERC20.json" }]]),
      mapping: {
        file: "./dist/index.js",
        handlers: [
          {
            kind: EthereumHandlerKind.Call,
            handler: "handleTransaction",
            filter: {
              /**
               * The function can either be the function fragment or signature
               * function: '0x095ea7b3'
               * function: '0x7ff36ab500000000000000000000000000000000000000000000000000000000'
               */
              function: "deposit(uint256 amount)",
            },
          },
          {
            kind: EthereumHandlerKind.Event,
            handler: "handleLog",
            filter: {
              /**
               * Follows standard log filters https://docs.ethers.io/v5/concepts/events/
               * address: "0x60781C2586D68229fde47564546784ab3fACA982"
               */
              topics: [
                "Transfer(address indexed from, address indexed to, uint256 amount)",
              ],
            },
          },
        ],
      },
    },
  ],
  repository: "https://github.com/subquery/ethereum-subql-starter",
};

// Must set default to the project instance
export default project;
Below is a standard example of the legacy YAML version (project.yaml).
Legacy YAML Manifest
specVersion: "1.0.0"
name: "avalanche-subql-starter"
version: "0.0.1"
runner:
  node:
    name: "@subql/node-ethereum"
    version: "*"
  query:
    name: "@subql/query"
    version: "*"
description: "This project can be used as a starting point for developing your new Avalanche C-Chain SubQuery project"
repository: "https://github.com/subquery/ethereum-subql-starter"
schema:
  file: "./schema.graphql"
network:
  # chainId is the EVM Chain ID, for Avalanche C-Chain this is 43114
  # https://chainlist.org/chain/43114
  chainId: "43114"
  # This endpoint must be a non-pruned archive node
  # We recommend providing more than one endpoint for improved reliability, performance, and uptime
  # Public nodes may be rate limited, which can affect indexing speed
  # When developing your project we suggest getting a private API key
  # You can get them from OnFinality for free https://app.onfinality.io
  # https://documentation.onfinality.io/support/the-enhanced-api-service
  endpoint: ["https://avalanche.api.onfinality.io/public/ext/bc/C/rpc"]
  # Recommended to provide the HTTP endpoint of a full chain dictionary to speed up processing
  dictionary: "https://api.subquery.network/sq/subquery/avalanche-dictionary"
dataSources:
  - kind: ethereum/Runtime # We use ethereum runtime since Avalanche is compatible
    startBlock: 57360 # Contract creation of Pangolin Token https://snowtrace.io/tx/0xfab84552e997848a43f05e440998617d641788d355e3195b6882e9006996d8f9
    options:
      # Must be a key of assets
      abi: erc20
      ## Pangolin token https://snowtrace.io/token/0x60781c2586d68229fde47564546784ab3faca982
      address: "0x60781C2586D68229fde47564546784ab3fACA982"
    assets:
      erc20:
        file: "./abis/PangolinERC20.json"
    mapping:
      file: "./dist/index.js"
      handlers:
        - handler: handleTransaction
          kind: ethereum/TransactionHandler # We use ethereum handlers since Avalanche is compatible
          filter:
            ## The function can either be the function fragment or signature
            # function: '0x095ea7b3'
            # function: '0x7ff36ab500000000000000000000000000000000000000000000000000000000'
            function: deposit(uint256 amount)
        - handler: handleLog
          kind: ethereum/LogHandler # We use ethereum handlers since Avalanche is compatible
          filter:
            topics:
              ## Follows standard log filters https://docs.ethers.io/v5/concepts/events/
              - Transfer(address indexed from, address indexed to, uint256 amount)
            # address: "0x60781C2586D68229fde47564546784ab3fACA982"
Overview
Top Level Spec
Field | Type | Description |
---|---|---|
specVersion | String | The spec version of the manifest file |
name | String | Name of your project |
version | String | Version of your project |
description | String | Description of your project |
runner | Runner Spec | Runner specs info |
repository | String | Git repository address of your project |
schema | Schema Spec | The location of your GraphQL schema file |
network | Network Spec | Detail of the network to be indexed |
dataSources | DataSource Spec | The datasources for your project |
templates | Templates Spec | Allows creating new datasources from these templates |
Schema Spec
Field | Type | Description |
---|---|---|
file | String | The location of your GraphQL schema file |
Network Spec
If you start your project by using the subql init command, you'll generally receive a starter project with the correct network settings. If you are changing the target chain of an existing project, you'll need to edit the Network Spec section of this manifest.
The chainId is the network identifier of the blockchain; for Avalanche C-Chain this is 43114. See https://chainlist.org/chain/43114.
Additionally, you will need to update the endpoint. This defines the (HTTP or WSS) endpoint of the blockchain to be indexed - this must be a full archive node. This property can be a string or an array of strings (e.g. endpoint: ['rpc1.endpoint.com', 'rpc2.endpoint.com']). We suggest providing an array of endpoints as it has the following benefits:
- Increased speed - When enabled with worker threads, RPC calls are distributed and parallelised among RPC providers. Historically, RPC latency is often the limiting factor with SubQuery.
- Increased reliability - If an endpoint goes offline, SubQuery will automatically switch to other RPC providers to continue indexing without interruption.
- Reduced load on RPC providers - Indexing is a computationally expensive process for RPC providers; by distributing requests among them, you lower the chance that your project will be rate limited.
Public nodes may be rate limited, which can affect indexing speed. When developing your project, we suggest getting a private API key from a professional RPC provider like OnFinality.
Field | Type | Description |
---|---|---|
chainId | String | A network identifier for the blockchain |
endpoint | String or String[] or Record<String, IEndpointConfig> | Defines the endpoint of the blockchain to be indexed, this can be a string, an array of endpoints, or a record of endpoints to endpoint configs - This must be a full archive node. |
dictionary | String | It is suggested to provide the HTTP endpoint of a full chain dictionary to speed up processing - read how a SubQuery Dictionary works. |
bypassBlocks | Array | Bypasses stated block numbers; the values can be a range (e.g. "10-50") or an integer, see Bypass Blocks |
Runner Spec
Field | Type | Description |
---|---|---|
node | Runner node spec | Describes the node service used for indexing |
query | Runner query spec | Describes the query service |
Runner Node Spec
Field | Type | Description |
---|---|---|
name | String | @subql/node-ethereum - We use the Ethereum node package for Avalanche since it is compatible with the Ethereum framework |
version | String | Version of the indexer node service; it must follow the SEMVER rules or be latest. You can also find available versions in SubQuery SDK releases |
options | Runner Node Options | Runner specific options for how to run your project. These will have an impact on the data your project produces. CLI flags can be used to override these. |
Runner Query Spec
Field | Type | Description |
---|---|---|
name | String | Either @subql/query or @subql/query-subgraph |
version | String | Version of the query service; see the available @subql/query versions and @subql/query-subgraph versions. It must also follow the SEMVER rules or be latest. |
Runner Node Options
Field | v1.0.0 (default) | Description |
---|---|---|
historical | Boolean (true) | Historical indexing allows you to query the state at a specific block height, e.g. a user's balance in the past. |
unfinalizedBlocks | Boolean (false) | If enabled unfinalized blocks will be indexed, when a fork is detected the project will be reindexed from the fork. Requires historical. |
unsafe | Boolean (false) | Removes all sandbox restrictions and allows access to all inbuilt node packages as well as being able to make network requests. WARNING: this can make your project non-deterministic. |
skipTransactions | Boolean (false) | If your project contains only event handlers and you don't access any block data other than the block header, you can speed your project up. Handlers should be updated to use LightEthereumLog instead of EthereumLog to ensure you are not accessing data that is unavailable. |
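These options are set under runner.node.options in your manifest. Below is a minimal sketch of how they might be combined, assuming the option names map directly to manifest keys as listed above (the values shown simply make the defaults explicit):

runner: {
  node: {
    name: "@subql/node-ethereum",
    version: ">=3.0.0",
    options: {
      // Query state at specific block heights (default: true)
      historical: true,
      // Index unfinalized blocks and reindex from the fork point on reorgs; requires historical
      unfinalizedBlocks: false,
      // WARNING: enabling this removes sandbox restrictions and can make your project non-deterministic
      unsafe: false,
      // Only enable if your handlers use LightEthereumLog and never read other block data
      skipTransactions: false,
    },
  },
  query: {
    name: "@subql/query",
    version: "*",
  },
},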
Datasource Spec
Defines the data that will be filtered and extracted, and the location of the mapping function handler for the data transformation to be applied.
Field | Type | Description |
---|---|---|
kind | string | ethereum/Runtime - We use the Ethereum runtime for Avalanche since it is compatible with the Ethereum framework |
startBlock | Integer | This changes your indexing start block for this datasource, set this as high as possible to skip initial blocks with no relevant data |
endBlock | Integer | This sets an end block for processing on the datasource. After this block is processed, this datasource will no longer index your data. This is useful when your contracts change at a certain block height, or when you want to insert data at genesis. For example, setting both the startBlock and endBlock to 320 means this datasource only operates on block 320 (see the sketch after this table) |
mapping | Mapping Spec |
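To make the endBlock behaviour concrete, here is a minimal sketch of a datasource that processes only a single block; the abi, address, and handler names are illustrative placeholders:

dataSources: [
  {
    kind: EthereumDatasourceKind.Runtime,
    startBlock: 320, // The first block this datasource processes
    endBlock: 320, // The last block this datasource processes, so only block 320 is indexed
    options: {
      abi: "erc20", // Must be a key of assets
      address: "0x60781C2586D68229fde47564546784ab3fACA982", // Illustrative contract address
    },
    assets: new Map([["erc20", { file: "./abis/erc20.abi.json" }]]),
    mapping: {
      file: "./dist/index.js",
      handlers: [{ kind: EthereumHandlerKind.Event, handler: "handleLog" }],
    },
  },
],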
Mapping Spec
Field | Type | Description |
---|---|---|
handlers & filters | Default handlers and filters | List all the mapping functions and their corresponding handler types, with additional mapping filters. |
Data Sources and Mapping
In this section, we will talk about the default Avalanche runtime and its mapping. Here is an example:
{
  dataSources: [
    {
      kind: EthereumDatasourceKind.Runtime, // Indicates that this is the default runtime
      startBlock: 1, // This changes your indexing start block, set this higher to skip initial blocks with less data
      options: {
        // Must be a key of assets
        abi: "erc20",
        // This is the contract address for your target contract
        address: "0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2",
      },
      assets: new Map([["erc20", { file: "./abis/erc20.abi.json" }]]),
      mapping: {
        file: "./dist/index.js", // Entry path for this mapping
        handlers: [
          /* Enter handlers here */
        ],
      },
    },
  ],
}
Mapping Handlers and Filters
The following table explains filters supported by different handlers. Your SubQuery project will be much more efficient when you only use TransactionHandler or LogHandler handlers with appropriate mapping filters (e.g. NOT a BlockHandler).
Handler | Supported filter |
---|---|
ethereum/BlockHandler | modulo , timestamp |
ethereum/TransactionHandler | function filters (either the function fragment or signature), from (address), to (address) |
ethereum/LogHandler | topics filters, and address |
Default runtime mapping filters are an extremely useful feature for deciding which block, transaction, or log will trigger a mapping handler. Only incoming data that satisfies the filter conditions will be processed by the mapping functions. Mapping filters are optional but are highly recommended, as they significantly reduce the amount of data processed by your SubQuery project and will improve indexing performance.
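For instance, here is a sketch of a transaction handler filter combining the function filter with the from and to address filters listed above (the addresses are illustrative placeholders):

{
  kind: EthereumHandlerKind.Call,
  handler: "handleTransaction",
  filter: {
    // Function fragment; a 4-byte signature like '0x095ea7b3' also works
    function: "transfer(address to, uint256 value)",
    // Only match transactions sent from this (illustrative) address
    from: "0x0000000000000000000000000000000000000001",
    // Only match transactions sent to this (illustrative) contract
    to: "0x60781C2586D68229fde47564546784ab3fACA982",
  },
},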
The modulo filter allows handling every N blocks, which is useful if you want to group or calculate data at a set interval. The following example shows how to use this filter.
filter:
modulo: 50 # Index every 50 blocks: 0, 50, 100, 150....
The timestamp filter is very useful when indexing block data with specific time intervals between them. It can be used in cases where you are aggregating data on an hourly/daily basis. It can also be used to set a delay between calls to blockHandler functions to reduce the computational costs of this handler.

The timestamp filter accepts a valid cron expression and runs on a schedule against the timestamps of the blocks being indexed. Times are evaluated against UTC dates and times. The block handler will run on the first block that is after the next iteration of the cron expression.
filter:
# This cron expression will index blocks with at least 5 minutes interval
# between their timestamps starting at startBlock given under the datasource.
timestamp: "*/5 * * * *"
Note
We use the cron-converter package to generate unix timestamps for iterations out of the given cron expression. So, make sure the format of the cron expression given in the timestamp filter is compatible with the package.
Some common examples:
# Every minute
timestamp: "* * * * *"
# Every hour on the hour (UTC)
timestamp: "0 * * * *"
# Every day at 1am UTC
timestamp: "0 1 * * *"
# Every Sunday (weekly) at 0:00 UTC
timestamp: "0 0 * * 0"
Simplifying your Project Manifest for a large number of contract addresses
If your project has the same handlers for multiple versions of the same type of contract, your project manifest can get quite repetitive, e.g. when you want to index the transfers for many similar ERC20 contracts. There are ways to better handle a large static list of contract addresses, as shown in the sketch below.
Note that there are also dynamic datasources for when your list of addresses is dynamic (e.g. when you use a factory contract).
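Since the manifest is written in TypeScript, one approach is to generate the datasources programmatically. This is a sketch that assumes your handlers are identical across contracts; the address list is illustrative:

import {
  EthereumProject,
  EthereumDatasourceKind,
  EthereumHandlerKind,
} from "@subql/types-ethereum";

// Illustrative list of similar ERC20 contract addresses (placeholders)
const erc20Addresses: string[] = [
  "0x60781C2586D68229fde47564546784ab3fACA982",
  "0x0000000000000000000000000000000000000002",
];

// Generate one datasource per contract, all sharing the same ABI and handlers
const dataSources = erc20Addresses.map((address) => ({
  kind: EthereumDatasourceKind.Runtime,
  startBlock: 57360,
  options: { abi: "erc20", address },
  assets: new Map([["erc20", { file: "./abis/PangolinERC20.json" }]]),
  mapping: {
    file: "./dist/index.js",
    handlers: [
      {
        kind: EthereumHandlerKind.Event,
        handler: "handleLog",
        filter: {
          topics: [
            "Transfer(address indexed from, address indexed to, uint256 amount)",
          ],
        },
      },
    ],
  },
}));

The resulting array can then be assigned to the dataSources field of your project manifest.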
Real-time indexing (Block Confirmations)
As indexers are an additional layer in your data processing pipeline, they can introduce a massive delay between when an on-chain event occurs and when the data is processed and able to be queried from the indexer.
SubQuery provides real-time indexing of unconfirmed data directly from the RPC endpoint, which solves this problem. SubQuery takes the most probable data before it is confirmed and provides it to the app. In the unlikely event that the data isn't confirmed and a reorg occurs, SubQuery will automatically roll back and correct its mistakes quickly and efficiently - resulting in an insanely quick user experience for your customers.
To control this feature, adjust the --block-confirmations flag to fine-tune your project, and also ensure that historical indexing is enabled (it is enabled by default).
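As a sketch, when running the node locally you might pass the flag like this (the confirmation count of 20 is an illustrative value):

# Wait for 20 confirmations before treating a block as final (illustrative value)
subql-node-ethereum -f /path/to/project --block-confirmations=20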
Bypass Blocks
Bypass Blocks allows you to skip the stated blocks. This is useful when there are erroneous blocks in the chain, or when a chain skips a block after an outage or a hard fork. It accepts either a range or a single integer entry in the array.

When declaring a range, use a string in the format "start-end". Both start and end are inclusive, e.g. a range of "100-102" will skip blocks 100, 101, and 102.
{
  network: {
    bypassBlocks: [1, 2, 3, "105-200", 290],
  },
}
Endpoint Config
This allows you to set options relevant to each specific RPC endpoint that you are indexing from. This is very useful when endpoints have unique authentication requirements, or when they operate with different rate limits.
Here is an example of how to set an API key in the header of RPC requests in your endpoint config.
{
  network: {
    endpoint: {
      "https://arbitrum.rpc.subquery.network/public": {
        headers: {
          "x-api-key": "your-api-key",
        },
        // NOTE: setting this to 0 will not use batch requests
        batchSize: 5,
      },
    },
  },
}