Quickstart
Build a Hydra Indexer and GraphQL server from scratch in under five minutes
0. Hello Hydra!
Start off by setting up a project folder
mkdir hello-hydra && cd hello-hydra

1. From zero to one
Run the scaffold command, which generates all the required files in a new folder hydra-sample
hydra-cli scaffold -d hydra-sample

Answer the prompts and the scaffolder will generate a sample backbone for our Hydra project. This includes:
- A sample GraphQL data schema in schema.graphql describing proposals in the Kusama network
- Sample mapping scripts in the ./mappings folder translating substrate events into the Proposal entity CRUD operations
- A docker folder with scripts for running a Hydra Indexer and Hydra Processor locally
- .env with all the necessary environment variables. It is pre-populated with the prompt answers but can be edited at any time.
- package.json with a few utility yarn scripts to be used later on.
2. Codegen
Make sure a Postgres database is up and running in the background and is accessible with the credentials provided during the scaffolding. Run
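(yarn bootstrap is the script referred to below; prefixing it with a plain yarn install is an assumption about the scaffolded package.json)

```bash
# install dependencies, then generate the model files and set up the database
yarn && yarn bootstrap
```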
It will generate the model files as defined in schema.graphql, create the database schema and run all the necessary migrations.
NB! Use with caution in production, as it will delete all the existing records in the processor database.
Under the hood, yarn bootstrap creates generated/graphql-server with a ready-to-use Apollo GraphQL server powering the query node API.
3. Typegen for events and extrinsics
Now let's inspect manifest.yml, which defines which events and extrinsics are going to be processed by Hydra Processor. The two most important sections are typegen and mappings.
hydra-typegen is an auxiliary tool for generating typesafe event and extrinsic classes from the on-chain metadata. It is not strictly necessary to use it, but type safety significantly simplifies the development of the event and extrinsic handlers.
The typegen section of the manifest lists the events and extrinsics for which typescript classes will be generated together with the metadata source and the output directory.
Typegen fetches the metadata from the chain at the block with the given hash (or from the top block if no hash is provided). For chains with non-standard types, one should additionally provide custom type definitions, as below:
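(the snippet below is an illustrative sketch; the exact keys should be checked against the scaffolded manifest.yml, and the event, call and type values are placeholders)

```yaml
typegen:
  metadata:
    source: wss://kusama-rpc.polkadot.io   # chain to fetch the metadata from
    blockHash: '0x...'                     # optional: pin the metadata to this block
  events:
    - treasury.Proposed                    # events to generate classes for
  calls:
    - treasury.propose_spend               # extrinsics to generate classes for
  customTypes:                             # only for chains with non-standard types
    lib: my-chain-types                    # hypothetical package exporting the types
    typedefsLoc: typedefs.json             # local JSON file with type definitions
  outDir: ./mappings/generated/types
```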
Run
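(the command below assumes the scaffolded package.json wires hydra-typegen behind a typegen script; adjust the name if yours differs)

```bash
# generate typesafe classes for the events and extrinsics declared in manifest.yml
yarn typegen
```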
and inspect mappings/generated/types where the newly created classes for the declared events and extrinsics will be generated.
4. Mappings
Mappings are defined in the mappings section of the manifest file and reside in the mappings folder.
Run
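(again, the script name is an assumption about the scaffolded package.json)

```bash
# compile the TypeScript mappings into a plain JS module consumable by Hydra Processor
yarn mappings:build
```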
to build the mappings into a js module. Make sure the mappings are rebuilt after each change.
5. Run Hydra Indexer locally
Hydra's two-tier architecture separates data ingestion and indexing (done by Hydra Indexer) from processing (done by Hydra Processor, of course). Hydra Indexer + API gateway is a set-and-forget service which requires maintenance only when there is a major runtime upgrade. The scaffolder conveniently creates a stub for running the indexer stack with docker-compose, as defined in docker-compose-indexer.yml
The WS_PROVIDER_ENDPOINT_URI environment variable defines the node to connect to. Additionally, one can map volumes with JSON files containing runtime type definitions. The following environment variables

- TYPES_JSON
- SPEC_TYPES
- CHAIN_TYPES
- BUNDLE_TYPES

can be used to inject custom types and type overrides for spec, chain and bundle definitions. For more info, consult the polkadot.js docs.
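A rough sketch of how this could look in docker-compose-indexer.yml (the service name, endpoint and container path are placeholders, not the generated file):

```yaml
# illustrative excerpt only
services:
  indexer:
    environment:
      - WS_PROVIDER_ENDPOINT_URI=wss://my-node:9944  # node to connect to
      - TYPES_JSON=/hydra/types.json                 # custom type definitions inside the container
    volumes:
      - ./types.json:/hydra/types.json               # mount the local definitions file
```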
Let's run a local indexer against a Polkadot chain. Since all Polkadot type definitions are already included in the polkadot.js library, there is no need to add type definitions; the only change is to set WS_PROVIDER_ENDPOINT_URI=wss://rpc.polkadot.io together with the database variables and run
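a typical way to bring up the stack with the compose file mentioned above:

```bash
# start Hydra Indexer, its database and the API gateway in the background
docker-compose -f docker-compose-indexer.yml up -d
```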
Check the status of the indexer by navigating to the indexer playground at localhost:4001/graphql and querying
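a status query along these lines (hydraVersion is referenced below; the remaining fields are typical of the gateway's status object and may vary between versions):

```graphql
query {
  indexerStatus {
    head         # last indexed block
    chainHeight  # current chain height
    inSync
    hydraVersion
  }
}
```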
Make sure the major hydraVersion matches that of hydra-cli and the one declared in manifest.yml
6. Run Hydra Processor locally
Hydra Processor connects to a Hydra Indexer gateway to source the indexed block, event and extrinsic data for processing.
Set INDEXER_ENDPOINT_URL in .env to the local indexer http://localhost:4001/graphql and run
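(the script name below is an assumption about the scaffolded package.json)

```bash
# run Hydra Processor against the local indexer gateway
yarn processor:run
```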
7. Run Query Node API
Run
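(a likely invocation, assuming the scaffold exposes a dev-mode start script for the generated server)

```bash
# start the generated Apollo GraphQL query node in development mode
yarn query-node:start:dev
```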
The query node API is now available at http://localhost:4000/graphql and you can find some transfers:
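For example, a query of roughly this shape (the entity and field names depend on the scaffolded schema and are purely illustrative):

```graphql
query {
  transfers(limit: 5) {
    from
    to
    value
    block
  }
}
```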
8. Dockerize & deploy
Among other things, the scaffolder generates a docker folder with Dockerfiles.
First, build the builder image:
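(the Dockerfile path and tag below are assumptions about the generated docker folder)

```bash
# build the shared builder image used by the other images
docker build . -f docker/Dockerfile.builder -t builder
```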
Now the images for the GraphQL query node and the processor can be built (they use the builder image under the hood)
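(again, the Dockerfile names and tags are assumptions; check the generated docker folder)

```bash
# build the query node and processor images on top of the builder image
docker build . -f docker/Dockerfile.query-node -t query-node:latest
docker build . -f docker/Dockerfile.processor -t processor:latest
```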
In order to run the docker-compose stack, we need to create the schema and run the database migrations.
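One way to do this is to start only the database service and run the bootstrap script inside the builder image (the service name and the exact env var and network wiring are assumptions about the generated compose file):

```bash
# start just the Postgres service from the compose stack
docker-compose up -d db
# create the schema and run migrations using the builder image
docker run --rm --env-file .env --network host builder yarn db:bootstrap
```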
The last command runs yarn db:bootstrap in the builder image. A similar setup strategy may be used for Kubernetes (with builder as a starter container).
Now everything is ready:
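(assuming the generated docker-compose.yml wires up the database, processor and query node)

```bash
docker-compose up
```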
What to do next?
- Explore more examples
- Describe your own schema in schema.graphql
- Write your indexer mappings
- Push your Hydra indexer and GraphQL Docker images to Docker Hub and deploy