Filecoin Specification
protocol version: v0.1.0
spec doc version: v1.1-c8fdb768
last published: 2019-12-14_17:46:52Z

Introduction

Warning: This draft of the Filecoin protocol specification is a work in progress. It is intended to establish the rough overall structure of the document, enabling experts to fill in different sections in parallel. However, within each section, content may be out-of-order, incorrect, and/or incomplete. The reader is advised to refer to the official Filecoin spec document for specification and implementation questions.

Filecoin is a distributed storage network based on a blockchain mechanism. Filecoin miners can elect to provide storage capacity for the network, and thereby earn units of the Filecoin cryptocurrency (FIL) by periodically producing cryptographic proofs that certify that they are providing the capacity specified. In addition, Filecoin enables parties to exchange FIL currency through transactions recorded in a shared ledger on the Filecoin blockchain. Rather than using Nakamoto-style proof of work to maintain consensus on the chain, however, Filecoin uses proof of storage itself: a miner’s power in the consensus protocol is proportional to the amount of storage it provides.

The Filecoin blockchain not only maintains the ledger for FIL transactions and accounts, but also implements the Filecoin VM, a replicated state machine which executes a variety of cryptographic contracts and market mechanisms among participants on the network. These contracts include storage deals, in which clients pay FIL currency to miners in exchange for storing the specific file data that the clients request. Via the distributed implementation of the Filecoin VM, storage deals and other contract mechanisms recorded on the chain continue to be processed over time, without requiring further interaction from the original parties (such as the clients who requested the data storage).

Architecture Diagrams

Filecoin Systems

Status Legend:

  • πŸ›‘ Bare - Very incomplete at this time.
    • Implementors: This is far from ready for you.
  • ⚠️ Rough – work in progress, heavy changes coming, as we put in place key functionality.
    • Implementors: This will be ready for you soon.
  • πŸ” Refining - Key functionality is there, some small things expected to change. Some big things may change.
    • Implementors: Almost ready for you. You can start building these parts, but beware there may be changes still.
  • βœ… Stable - Mostly complete, minor things expected to change, no major changes expected.
    • Implementors: Ready for you. You can build these parts.


Overview Diagram

TODO:

  • cleanup / reorganize
    • this diagram is accurate, and helps lots to navigate, but it’s still a bit confusing
    • the arrows and lines make it a bit hard to follow. We should have a much cleaner version (maybe based on C4)
  • reflect addition of Token system
    • move data_transfers into Token
Protocol Overview Diagram

Protocol Flow Diagram – deals on chain

Protocol Sequence Diagram - Deals on Chain

Parameter Calculation Dependency Graph

This is a diagram of the model for parameter calculation, made with Orient, our tool for modeling and solving constraints.

Parameter Calculation Dependency Graph

Key Concepts

For clarity, we use the following types of entities to describe implementations of the Filecoin protocol:

  • Data structures are collections of semantically-tagged data members (e.g., structs, interfaces, or enums).

  • Functions are computational procedures that do not depend on external state (i.e., mathematical functions, or programming language functions that do not refer to global variables).

  • Components are sets of functionality that are intended to be represented as single software units in the implementation structure. Depending on the choice of language and the particular component, this might correspond to a single software module, a thread or process running some main loop, a disk-backed database, or a variety of other design choices. For example, a block-handling component could be implemented as a process or thread running a single specified main loop, which waits for network messages and responds accordingly by recording and/or forwarding block data.

  • APIs are messages that can be sent to components. A client’s view of a given sub-protocol, such as a request to a miner node to store files in the storage market, may require the execution of a series of APIs.

  • Nodes are complete software and hardware systems that interact with the protocol. A node might be constantly running several of the above components, participating in several subsystems, and exposing APIs locally and/or over the network, depending on the node configuration. The term full node refers to a system that runs all of the above components, and supports all of the APIs detailed in the spec.

  • Subsystems are conceptual divisions of the entire Filecoin protocol, either in terms of complete protocols (such as the Storage Market or Retrieval Market), or in terms of functionality (such as the VM - Virtual Machine). They do not necessarily correspond to any particular node or software component.

  • Actors are virtual entities embodied in the state of the Filecoin VM. Protocol actors are analogous to participants in smart contracts; an actor carries a FIL currency balance and can interact with other actors via the operations of the VM, but does not necessarily correspond to any particular node or software component.

Filecoin VM

The majority of Filecoin’s user-facing functionality (payments, storage market, power table, etc) is managed through the Filecoin Virtual Machine (Filecoin VM). The network generates a series of blocks, and agrees which ‘chain’ of blocks is the correct one. Each block contains a series of state transitions called messages, and a checkpoint of the current global state after the application of those messages.

The global state here consists of a set of actors, each with their own private state.

An actor is the Filecoin equivalent of Ethereum’s smart contracts: it is essentially an ‘object’ in the Filecoin network with state and a set of methods that can be used to interact with it. Every actor has a Filecoin balance attributed to it, a state pointer, a code CID which tells the system what type of actor it is, and a nonce which tracks the number of messages sent by this actor. (TODO: the nonce is really only needed for external user interface actors, AKA account actors. Maybe we should find a way to clean that up?)
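
The per-actor fields described above can be sketched as a struct. This is an illustrative sketch in Go; the field names and types are assumptions, not the normative schema.

```go
package main

import "fmt"

// CID stands in for a content identifier; a real implementation
// would use the multiformats CID type.
type CID string

// Actor is an illustrative sketch of the per-actor state described
// above: a balance, a pointer to private state, a code CID naming the
// actor's type, and a nonce counting messages sent by this actor.
type Actor struct {
	Balance    uint64 // FIL balance attributed to this actor
	Head       CID    // state pointer to the actor's private state
	Code       CID    // identifies what type of actor this is
	CallSeqNum uint64 // nonce: number of messages sent by this actor
}

func main() {
	a := Actor{Balance: 100, Head: CID("state-cid"), Code: CID("account-actor")}
	// Sending a message from this actor increments its nonce.
	a.CallSeqNum++
	fmt.Println(a.CallSeqNum) // 1
}
```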

There are two routes to calling a method on an actor. First, to call a method as an external participant of the system (i.e., a normal user with Filecoin) you must send a signed message to the network, and pay a fee to the miner that includes your message. The signature on the message must match the key associated with an account holding sufficient Filecoin to pay for the message’s execution. The fee here is equivalent to transaction fees in Bitcoin and Ethereum: it is proportional to the work that is done to process the message (Bitcoin prices messages per byte; Ethereum uses the concept of ‘gas’, as does Filecoin).
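
The gas-based fee described above can be sketched as follows. The names, units, and flat pricing are assumptions for illustration, not the normative Filecoin pricing rules.

```go
package main

import "fmt"

// Message carries the gas terms a sender attaches to a signed message.
// Field names are illustrative assumptions.
type Message struct {
	GasPrice uint64 // attoFIL the sender pays per unit of gas
	GasLimit uint64 // maximum gas the sender is willing to buy
}

// FeeFor returns the fee paid to the including miner for gasUsed units
// of work, capped by the message's gas limit. The fee is proportional
// to the work done, as with Ethereum-style gas.
func FeeFor(m Message, gasUsed uint64) uint64 {
	if gasUsed > m.GasLimit {
		gasUsed = m.GasLimit
	}
	return gasUsed * m.GasPrice
}

func main() {
	m := Message{GasPrice: 3, GasLimit: 1000}
	fmt.Println(FeeFor(m, 200)) // 600
}
```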

Second, an actor may call a method on another actor during the invocation of one of its methods. However, this may happen only as a result of some actor being invoked by an external user’s message (note: an actor called by a user may call another actor, which may in turn call another, as many layers deep as the execution can afford to run).
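
The recursion rule above (nested actor-to-actor calls run only as deep as the remaining gas allows) can be sketched as follows; the gas numbers and flat per-call cost are invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// VM tracks the gas remaining for one externally rooted invocation.
type VM struct {
	gasRemaining int64
}

const callCost = 10 // assumed flat cost per invocation

// Invoke charges gas for one method call and then lets the method make
// further nested calls, arbitrarily deep, until the gas runs out.
func (vm *VM) Invoke(depth int) error {
	vm.gasRemaining -= callCost
	if vm.gasRemaining < 0 {
		return errors.New("out of gas")
	}
	if depth == 0 {
		return nil
	}
	return vm.Invoke(depth - 1) // actor calls another actor
}

func main() {
	vm := &VM{gasRemaining: 35}
	fmt.Println(vm.Invoke(2)) // succeeds: 3 calls * 10 gas = 30 <= 35

	vm = &VM{gasRemaining: 35}
	fmt.Println(vm.Invoke(5)) // fails: 6 calls would need 60 gas
}
```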

For full implementation details, see the VM subsystem.

Filecoin Spec Process (v1)

πŸš€ Pre-launch mode

Until we launch, we are making lots of changes to the spec to finish documenting the current version of the protocol. Changes will be made to the spec by a simple PR process, with approvals by key stakeholders. Some refinements are still to happen and testnet is expected to bring a few significant fixes/improvements. Most changes now are changing the document, NOT changing the protocol, at least not in a major way.

Until we launch, if something is missing, PR it in. If something is wrong, PR a fix. If something needs to be elaborated, PR in updates. What is in the top level of this repo, on master, is the spec: it is the Filecoin Protocol. Nothing else matters (i.e., no other documents or issues contain “the protocol”).

New Proposals -> Drafts -> Spec

⚠️ WARNING: Filecoin is in pre-launch mode, and we are finishing protocol spec and implementations of the current construction/version of the protocol only. We are highly unlikely to merge anything new into the Filecoin Protocol until after mainnet. Feel free to explore ideas anyway and prepare improvements for the future.

For anything that is not part of the currently specced systems (such as ‘repair’), the process we will use is:

  • (1) First, discuss the problem(s) and solution(s) in an issue
    • Or several issues, if the space is large and multithreaded enough.
    • Work out all the details required to make this proposal work.
  • (2) Write a draft with all the details.
    • When you feel like a solution is near, write up a draft document that contains all the details, and includes what changes would need to happen to the spec
    • E.g. “Add a System called X with …”, or “Add a library called Y, …”, or “Modify vm/state_tree to include …”
    • Place this document inside the src/drafts/ directory.
    • Anybody is welcome to contribute well-reasoned and detailed drafts.
    • (Note: these drafts will give way to FIPs in the future)
  • (3) Seek approval to merge this into the specification.
    • To seek approval, open an issue and discuss it.
    • If the draft is approved by the owners of the filecoin-spec, then the changes to the spec will need to be made in a PR.
    • Once changes make it into the spec, remove the draft.

It is acceptable for a draft PR to stay open for quite a while, as thought and discussion on the topic happen. At some point, if the reviewers and the author feel that the current state of the draft is stable enough (though not ‘done’), then it should be merged into the repo. Further changes to the draft are additional PRs, which may generate more discussion. Comments on these drafts are welcome from anyone, but if you wish to be involved in the actual research process, you will need to devote considerable time and energy to it.

On merging

For anything in the drafts or notes folder, merge yourself after a review from a relevant person. For anything in the top level (canonical spec), @zixuanzh, @anorth, @whyrusleeping or @jbenet will merge after proper review.

Issues

Issues in the specs repo will be high signal. They will either be proposals, or issues directly relating to problems in the spec. More speculative research questions and discussion will happen in the research repo.

About this specification

TODO

FIPs - Filecoin Improvement Proposals

TODO

Contributing to the Filecoin spec

TODO

Change Log - Version History

v1.1 - 2019-10-30 - c3f6a6dd

  • Deals on chain
    • Storage Deals
    • Full StorageMarketActor logic:
      • client and miner balances: deposits, locking, charges, and withdrawals
      • collateral slashing
    • Full StorageMinerActor logic:
      • sector states, state transitions, state accounting, power accounting
      • DeclareFaults + RecoverSectors flow
      • CommitSector flow
      • SubmitElectionPost or SubmitSurprisePoSt flow
        • Sector proving, faults, recovery, and expiry
      • OnMissedSurprisePost flow
        • Fault sectors, drop power, expiry, and more
    • StoragePowerActor
      • power accounting based on StorageMinerActor state changes
      • Collaterals: deposit, locking, withdrawal
      • Slashing collaterals
    • Interactive-Post
      • StorageMinerActor: PrecommitSector and CommitSector
    • Surprise-Post
      • Challenge flow through CronActor -> StoragePowerActor -> StorageMiner
  • Virtual Machine
    • Extracted VM system out of blockchain
    • Addresses
    • Actors
      • Separation of code and state
    • Messages
      • Method invocation representation
    • Runtime
      • Slimmed down interface
      • Safer state Acquire, Release, Commit flow
      • Exit codes
      • Full invocation flow
      • Safer recursive context construction
      • Error levels and handling
      • Detecting and handling out of gas errors
    • Interpreter
      • ApplyMessage
      • {Deduct,Deposit} -> Transfer - safer
      • Gas accounting
    • VM system actors
      • InitActor basic flow, plug into Runtime
      • CronActor full flow, static registry
    • AccountActor basic flow
  • Data Transfer
    • Full Data Transfer flows
      • push, pull, 1-RTT pull
    • protocol, data structures, interface
    • diagrams
  • blockchain/ChainSync:
    • first version of ChainSync protocol description
    • Includes protocol state machine description
    • Network bootstrap – connectivity and state
    • Progressive Block Validation
    • Progressive Block Propagation
  • Other
    • Spec section status indicators
    • Changelog

v1.0 - 2019-10-07 - 583b1d06

  • Full spec reorganization
  • Tooling
    • added a build system to compile tools
    • added diagramming tools (dot, mermaid, etc)
    • added dependency installation
    • added Orient to calculate protocol parameters
  • Content
    • filecoin_nodes
      • types - an overview of different filecoin node types
      • repository - local data-structure storage
      • network interface - connecting to libp2p
      • clock - a wall clock
    • files & data
      • file - basic representation of data
      • piece - representation of data to store in filecoin
    • blockchain
      • blocks - basic blockchain data structures (block, tipset, chain, etc)
      • storage power consensus - basic algorithms and crypto artifacts for SPC
      • StoragePowerActor basics
    • token
      • skeleton of sections
    • storage mining
      • storage miner: module that controls and coordinates storage mining
      • sector: unit of storage, sealing, crypto artifacts, etc.
      • sector index: accounting sectors and metadata
      • storage proving: seals, posts, and more
    • market
      • deals: storage market deal basics
      • storage market: StorageMarketActor basics
    • orient
      • orient models for proofs and block sizes
    • libraries
      • filcrypto - sealing, PoRep, PoSt algorithms
      • ipld - cids, ipldstores
      • libp2p - host/node representation
      • ipfs - graphsync and bitswap
      • multiformats - multihash, multiaddr
    • diagrams
      • system overview
      • full protocol mermaid flow

pre v1.0

System Decomposition

What are Systems? How do they work?

Filecoin decouples and modularizes functionality into loosely-joined systems. Each system adds significant functionality, usually to achieve a set of important and tightly related goals.

For example, the Blockchain System provides structures like Block, Tipset, and Chain, and provides functionality like Block Sync, Block Propagation, Block Validation, Chain Selection, and Chain Access. This is separated from Files, Pieces, Piece Preparation, and Data Transfer. Both of these systems are separated from the Markets, which provide Orders, Deals, Market Visibility, and Deal Settlement.

Why is System decoupling useful?

This decoupling is useful for:

  • Implementation Boundaries: it is possible to build implementations of Filecoin that only implement a subset of systems. This is especially useful for Implementation Diversity: we want many implementations of security critical systems (eg Blockchain), but do not need many implementations of Systems that can be decoupled.
  • Runtime Decoupling: system decoupling makes it easier to build and run Filecoin Nodes that isolate Systems into separate programs, and even separate physical computers.
  • Security Isolation: some systems require higher operational security than others. System decoupling allows implementations to meet their security and functionality needs. A good example of this is separating Blockchain processing from Data Transfer.
  • Scalability: systems and various use cases may drive different performance requirements for different operators. System decoupling makes it easier for operators to scale their deployments along system boundaries.

Filecoin Nodes don’t need all the systems

Filecoin Nodes vary significantly, and do not need all the systems. Most systems are only needed for a subset of use cases.

For example, the Blockchain System is required for synchronizing the chain, participating in secure consensus, storage mining, and chain validation. Many Filecoin Nodes do not need the chain and can perform their work by just fetching content from the latest StateTree, from a node they trust. Of course, such nodes give up validating the chain themselves, and instead place their trust in that node.

Note: Filecoin does not use the “full node” or “light client” terminology in wide use in Bitcoin and other blockchain networks. In Filecoin, these terms are not well defined. It is best to define nodes in terms of their capabilities, and therefore in terms of the Systems they run. For example:

  • Chain Verifier Node: Runs the Blockchain system. Can sync and validate the chain. Cannot mine or produce blocks.
  • Client Node: Runs the Blockchain, Market, and Data Transfer systems. Can sync and validate the chain. Cannot mine or produce blocks.
  • Retrieval Miner Node: Runs the Market and Data Transfer systems. Does not need the chain. Can make Retrieval Deals (Retrieval Provider side). Can send Clients data, and get paid for it.
  • Storage Miner Node: Runs the Blockchain, Storage Market, Storage Mining systems. Can sync and validate the chain. Can make Storage Deals (Storage Provider side). Can seal stored data into sectors. Can acquire storage consensus power. Can mine and produce blocks.

Separating Systems

How do we determine what functionality belongs in one system vs another?

Drawing boundaries between systems is the art of separating tightly related functionality from unrelated parts. In a sense, we seek to keep tightly integrated components in the same system, and away from other unrelated components. This is sometimes straightforward: the boundaries spring naturally from the data structures or functionality. For example, it is straightforward to observe that Clients and Miners negotiating a deal with each other is largely unrelated to VM Execution.

Sometimes this is harder, and it requires detangling, adding, or removing abstractions. For example, the StoragePowerActor and the StorageMarketActor were previously a single Actor. This coupled functionality across StorageDeal making, the StorageMarket, and markets in general with Storage Mining, Sector Sealing, PoSt Generation, and more. Detangling these two sets of related functionality required breaking the one actor into two.

Decomposing within a System

Systems themselves decompose into smaller subunits. These are sometimes called “subsystems” to avoid confusion with the much larger, first-class Systems. Subsystems themselves may break down further. The naming here is not strictly enforced, as these subdivisions are more related to protocol and implementation engineering concerns than to user capabilities.

Implementing Systems

System Requirements

In order to make it easier to decouple functionality into systems, the Filecoin Protocol assumes a set of functionality available to all systems. This functionality can be achieved by implementations in a variety of ways, and should take the guidance here as a recommendation (SHOULD).

All Systems, as defined in this document, require the following:

  • Repository:
    • Local IpldStore. Some amount of persistent local storage for data structures (small structured objects). Systems expect to be initialized with an IpldStore in which to store data structures they expect to persist across crashes.
    • User Configuration Values. A small amount of user-editable configuration values. These should be easy for end-users to access, view, and edit.
    • Local, Secure KeyStore. A facility to generate and use cryptographic keys, which MUST remain secret to the Filecoin Node. Systems SHOULD NOT access the keys directly, and should instead use an abstraction (i.e., the KeyStore) which provides the ability to Encrypt, Decrypt, Sign, SigVerify, and more.
  • Local FileStore. Some amount of persistent local storage for files (large byte arrays). Systems expect to be initialized with a FileStore in which to store large files. Some systems (like Markets) may need to store and delete large volumes of smaller files (1MB - 10GB). Other systems (like Storage Mining) may need to store and delete large volumes of large files (1GB - 1TB).
  • Network. Most systems need access to the network, to be able to connect to their counterparts in other Filecoin Nodes. Systems expect to be initialized with a libp2p.Node on which they can mount their own protocols.
  • Clock. Some systems need access to current network time, some with low tolerance for drift. Systems expect to be initialized with a Clock from which to tell network time. Some systems (like Blockchain) require very little clock drift, and require secure time.

For this purpose, we use the FilecoinNode data structure, which is passed into all systems at initialization:

import repo "github.com/filecoin-project/specs/systems/filecoin_nodes/repository"
import filestore "github.com/filecoin-project/specs/systems/filecoin_files/file"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import key_store "github.com/filecoin-project/specs/systems/filecoin_nodes/key_store"

type FilecoinNode struct {
    Node        libp2p.Node

    Repository  repo.Repository
    FileStore   filestore.FileStore
    Clock       clock.UTCClock
    LocalGraph  ipld.GraphStore
    KeyStore    key_store.KeyStore

    SubmitMessage(m msg.SignedMessage) error
}

System Limitations

Further, Systems MUST abide by the following limitations:

  • Random crashes. A Filecoin Node may crash at any moment. Systems must remain secure and consistent through crashes. This is primarily achieved by limiting the use of persistent state, persisting such state through Ipld data structures, and using initialization routines that check state and, if needed, correct errors.
  • Isolation. Systems must communicate over well-defined, isolated interfaces. They must not build their critical functionality over a shared memory space. (Note: for performance, shared memory abstractions can be used to power IpldStore, FileStore, and libp2p, but the systems themselves should not require it). This is not just an operational concern; it also significantly simplifies the protocol and makes it easier to understand, analyze, debug, and change.
  • No direct access to host OS Filesystem or Disk. Systems cannot access disks directly – they do so over the FileStore and IpldStore abstractions. This is to provide a high degree of portability and flexibility for end-users, especially storage miners and clients of large amounts of data, which need to be able to easily replace how their Filecoin Nodes access local storage.
  • No direct access to host OS Network stack or TCP/IP. Systems cannot access the network directly – they do so over the libp2p library. There must not be any other kind of network access. This provides a high degree of portability across platforms and network protocols, enabling Filecoin Nodes (and all their critical systems) to run in a wide variety of settings, using all kinds of protocols (eg Bluetooth, LANs, etc).

Systems

Filecoin Nodes

Node Types

Node Interface

import repo "github.com/filecoin-project/specs/systems/filecoin_nodes/repository"
import filestore "github.com/filecoin-project/specs/systems/filecoin_files/file"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import key_store "github.com/filecoin-project/specs/systems/filecoin_nodes/key_store"

type FilecoinNode struct {
    Node        libp2p.Node

    Repository  repo.Repository
    FileStore   filestore.FileStore
    Clock       clock.UTCClock
    LocalGraph  ipld.GraphStore
    KeyStore    key_store.KeyStore

    SubmitMessage(m msg.SignedMessage) error
}

Examples

There are many kinds of Filecoin Nodes …

This section should contain:

  • what all nodes must have, and why
  • examples of using different systems

Chain Verifier Node

type ChainVerifierNode interface {
  FilecoinNode

  systems.Blockchain
}

Client Node

type ClientNode struct {
  FilecoinNode

  systems.Blockchain
  markets.StorageMarketClient
  markets.RetrievalMarketClient
  markets.MarketOrderBook
  markets.DataTransfers
}

Storage Miner Node

type StorageMinerNode interface {
  FilecoinNode

  systems.Blockchain
  systems.Mining
  markets.StorageMarketProvider
  markets.MarketOrderBook
  markets.DataTransfers
}

Retrieval Miner Node

type RetrievalMinerNode interface {
  FilecoinNode

  blockchain.Blockchain
  markets.RetrievalMarketProvider
  markets.MarketOrderBook
  markets.DataTransfers
}

Relayer Node

type RelayerNode interface {
  FilecoinNode

  blockchain.MessagePool
  markets.MarketOrderBook
}

Repository - Local Storage for Chain Data and Systems

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import key "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/key"
import config "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/config"

type Repository struct {
    config          config.Config
    ipldStore       ipld.Store
    keyStore        key.Store

    // CreateRepository(config Config, ipldStore IPLDDagStore, keyStore KeyStore) &Repository
    GetIPLDStore()  ipld.Store
    GetKeyStore()   key.Store
    GetConfig()     config.Config
}
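
As an illustration of how a Repository hands compartmentalized stores to systems, here is a minimal in-memory sketch in Go. The map-based stores are assumptions standing in for the real IpldStore, KeyStore, and Config interfaces.

```go
package main

import "fmt"

// Minimal in-memory stand-ins for the stores a Repository aggregates.
// These are illustrative assumptions, not the spec's real interfaces.
type IpldStore map[string][]byte
type KeyStore map[string][]byte
type Config map[string]string

// Repository bundles the node's local storage facilities, mirroring
// the accessor shape shown above.
type Repository struct {
	config    Config
	ipldStore IpldStore
	keyStore  KeyStore
}

func NewRepository() *Repository {
	return &Repository{config: Config{}, ipldStore: IpldStore{}, keyStore: KeyStore{}}
}

func (r *Repository) GetIPLDStore() IpldStore { return r.ipldStore }
func (r *Repository) GetKeyStore() KeyStore   { return r.keyStore }
func (r *Repository) GetConfig() Config       { return r.config }

func main() {
	repo := NewRepository()
	// A system is handed the repository and uses only its own facility.
	repo.GetConfig()["network"] = "testnet"
	fmt.Println(repo.GetConfig()["network"]) // testnet
}
```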

Config - Local Storage for ConfigurationValues

Filecoin Node configuration

type ConfigKey string
type ConfigVal Bytes

type Config struct {
    Get(k ConfigKey) union {c ConfigVal, e error}
    Put(k ConfigKey, v ConfigVal) error

    Subconfig(k ConfigKey) Config
}
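
A minimal in-memory Config matching the Get/Put/Subconfig shape above might look like this in Go; the prefix-based Subconfig semantics are an assumption about how subconfigs scope keys.

```go
package main

import "fmt"

// Config is an illustrative in-memory implementation. All subconfigs
// share one value map and differ only by key prefix.
type Config struct {
	prefix string
	vals   map[string]string
}

func NewConfig() *Config { return &Config{vals: map[string]string{}} }

func (c *Config) Put(k, v string) { c.vals[c.prefix+k] = v }

func (c *Config) Get(k string) (string, bool) {
	v, ok := c.vals[c.prefix+k]
	return v, ok
}

// Subconfig returns a view scoped under key k, so each system can be
// handed only its own configuration namespace.
func (c *Config) Subconfig(k string) *Config {
	return &Config{prefix: c.prefix + k + "/", vals: c.vals}
}

func main() {
	root := NewConfig()
	bc := root.Subconfig("blockchain")
	bc.Put("bootstrap", "node1.example")
	v, _ := root.Get("blockchain/bootstrap")
	fmt.Println(v) // node1.example
}
```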

KeyStore & user keys

type Key struct {
    //  Algo Algorithm
    Data Bytes
}

// key.Name
type Name string

// key.Store
// TODO: redo this providing access to enc, dec, sign, sigverify operations, and not the keys.
type Store struct {
    Put(n Name, key Key) error
    Get(n Name) union {k Key, e error}
    //  Sign(n Name, data Bytes) Signature
}

type Algorithm union {
    Sig SignatureAlgorithm
}

type SignatureAlgoC struct {
    Sign(b Bytes) union {s Signature, e error}
    Verify(b Bytes, s Signature) union {b bool, e error}
}

type Secp256k1SignatureAlgorithm SignatureAlgoC
type BLSAggregateSignatureAlgorithm SignatureAlgoC

type SignatureAlgorithm union {
    Secp256k1SigAlgo  Secp256k1SignatureAlgorithm
    BLSSigAlgo        BLSAggregateSignatureAlgorithm
}

type Signature struct {
    Algo           SignatureAlgorithm
    Data           Bytes

    Verify(k Key)  union {b bool, e error}
}

IpldStore - Local Storage for hash-linked data

type Store GraphStore

// imported as ipld.Object
type Object interface {
    CID() CID

    // Populate(v interface{}) error
}
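
As a sketch of content addressing (the core idea behind an IpldStore), the following keys objects by the hash of their encoded bytes, so equal content always yields the same key. Real IPLD CIDs add codec and multihash prefixes, omitted here for brevity.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// CID here is just a hex-encoded hash; a real CID carries codec and
// multihash metadata.
type CID string

type IpldStore map[CID][]byte

func cidOf(encoded []byte) CID {
	sum := sha256.Sum256(encoded)
	return CID(fmt.Sprintf("%x", sum[:8])) // truncated for readability
}

// Put stores encoded bytes under their content-derived key.
func (s IpldStore) Put(encoded []byte) CID {
	c := cidOf(encoded)
	s[c] = encoded
	return c
}

// Get retrieves the bytes for a CID, if present.
func (s IpldStore) Get(c CID) ([]byte, bool) {
	b, ok := s[c]
	return b, ok
}

func main() {
	store := IpldStore{}
	c := store.Put([]byte(`{"hello":"world"}`))
	b, ok := store.Get(c)
	fmt.Println(ok, string(b)) // true {"hello":"world"}
}
```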

TODO:

  • What is IPLD
    • hash linked data
    • from IPFS
  • Why is it relevant to filecoin
    • all network datastructures are definitively IPLD
    • all local datastructures can be IPLD
  • What is an IpldStore
    • local storage of dags
  • How to use IpldStores in filecoin
    • pass it around
  • One ipldstore or many
    • temporary caches
    • intermediately computed state
  • Garbage Collection

Usage in Systems

TODO:

  • Explain how repo is used with systems and subsystems
  • compartmentalized local storage
  • store ipld datastructures of stateful objects

Network Interface

import libp2p "github.com/filecoin-project/specs/libraries/libp2p"

type Node libp2p.Node

Filecoin nodes use the libp2p protocol stack for peer discovery, peer routing, message multicast, and more. Libp2p is a set of modular protocols common to the peer-to-peer networking stack. Nodes open connections with one another and mount different protocols or streams over the same connection. In the initial handshake, nodes exchange the protocols each of them supports, and all Filecoin-related protocols are mounted under /fil/... protocol identifiers.

Here is the list of libp2p protocols used by Filecoin.

  • Graphsync:
    • Graphsync is used to transfer blockchain and user data
    • Draft spec
    • No Filecoin-specific modifications to the protocol id
  • Gossipsub:
    • Block headers and messages are broadcast through a Gossip PubSub protocol, where nodes can subscribe to topics for blockchain data and receive messages in those topics. When receiving messages related to a topic, nodes process the message and forward it to their peers who are also subscribed to the same topic.
    • Spec is here
    • No Filecoin-specific modifications to the protocol id. However, the topic identifiers MUST be of the form fil/blocks/<network-name> and fil/msgs/<network-name>
  • KademliaDHT:
    • Kademlia DHT is a distributed hash table with a logarithmic bound on the maximum number of lookups for a particular node. Kad DHT is used primarily for peer routing as well as peer discovery in the Filecoin protocol.
    • Spec TODO reference implementation
    • The protocol id must be of the form fil/kad/<network-name>
  • Bootstrap List:
    • Bootstrap is a list of nodes that a new node attempts to connect to upon joining the network. The list of bootstrap nodes and their addresses is defined by users.
  • Peer Exchange:
    • Peer Exchange is a discovery protocol enabling peers to create and issue queries for desired peers against their existing peers
    • spec TODO
    • No Filecoin specific modifications to the protocol id.
  • DNSDiscovery: Design and spec needed before implementing
  • HTTPDiscovery: Design and spec needed before implementing
  • Hello:
    • The Hello protocol handles new connections to Filecoin nodes. It is an important part of the discovery process for ambient protocols (like the KademliaDHT).
    • Spec TODO.
    • No Filecoin specific modifications to the protocol id.
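
The network-scoped identifier forms above (fil/blocks/<network-name> and fil/msgs/<network-name> for Gossipsub topics, fil/kad/<network-name> for the DHT) can be built trivially; the helper names are assumptions:

```go
package main

import "fmt"

// Topic and protocol identifier builders for a given network name.
// The identifier forms come from the protocol list above; the helper
// names themselves are illustrative.
func blocksTopic(network string) string   { return "fil/blocks/" + network }
func msgsTopic(network string) string     { return "fil/msgs/" + network }
func kadProtocolID(network string) string { return "fil/kad/" + network }

func main() {
	fmt.Println(blocksTopic("testnet"))   // fil/blocks/testnet
	fmt.Println(msgsTopic("testnet"))     // fil/msgs/testnet
	fmt.Println(kadProtocolID("testnet")) // fil/kad/testnet
}
```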

Clock

type UnixTime int64  // unix timestamp

// UTCClock is a normal, system clock reporting UTC time.
// It should be kept in sync, with drift less than 1 second.
type UTCClock struct {
    NowUTCUnix() UnixTime
}

// ChainEpoch represents a round of a blockchain protocol.
type ChainEpoch UVarint

// ChainEpochClock is a clock that represents epochs of the protocol.
type ChainEpochClock struct {
    // GenesisTime is the time of the first block. EpochClock counts
    // up from there.
    GenesisTime              UnixTime

    EpochAtTime(t UnixTime)  ChainEpoch
}
package clock

import "time"

// UTCSyncPeriod notes how often to sync the UTC clock with an authoritative
// source, such as NTP, or a very precise hardware clock.
var UTCSyncPeriod = time.Hour

// EpochDuration is a constant that represents the duration in seconds
// of a blockchain epoch.
var EpochDuration = UnixTime(15)

func (_ *UTCClock_I) NowUTCUnix() UnixTime {
	return UnixTime(time.Now().Unix())
}

// EpochAtTime returns the ChainEpoch corresponding to time `t`.
// It first subtracts GenesisTime, then divides by EpochDuration
// and returns the resulting number of epochs.
func (c *ChainEpochClock_I) EpochAtTime(t UnixTime) ChainEpoch {
	difference := t - c.GenesisTime()
	epochs := difference / EpochDuration
	return ChainEpoch(epochs)
}

Filecoin assumes weak clock synchrony amongst participants in the system. That is, the system relies on participants having access to a globally synchronized clock (tolerating some bounded drift).

Filecoin relies on this system clock in order to secure consensus. Specifically, the clock is necessary to support validation rules that prevent block producers from mining blocks with a future timestamp, and from running leader elections more frequently than the protocol allows.

Clock uses

The Filecoin system clock is used:

  • by syncing nodes to validate that incoming blocks were mined in the appropriate epoch given their timestamp (see Block Validation). This is possible because the system clock maps all times to a unique epoch number totally determined by the start time in the genesis block.
  • by syncing nodes to drop blocks coming from a future epoch
  • by mining nodes to maintain protocol liveness by allowing participants to try leader election in the next round if no one has produced a block in the current round (see Storage Power Consensus).

In order to allow miners to do the above, the system clock must:

  1. Have low enough clock drift relative to other nodes so that blocks are not mined in epochs considered future epochs from the perspective of other nodes.
  2. Set epoch number on node initialization equal to epoch = Floor[(current_time - genesis_time) / epoch_time]

It is expected that other subsystems will subscribe to a NewRound() event from the clock subsystem.

Clock Requirements

Computer-grade clock crystals can be expected to have drift rates on the order of 1ppm (i.e. 1 microsecond every second, or about 0.6 seconds a week); therefore, in order to respect the first above requirement,

  • clients SHOULD query an NTP server (pool.ntp.org is recommended) on an hourly basis to adjust clock skew.
  • clients MAY consider using cesium clocks instead for accurate synchrony within larger mining operations

Mining operations have a strong incentive to prevent their clock from drifting ahead more than one epoch to keep their block submissions from being rejected. Likewise they have an incentive to prevent their clocks from drifting behind more than one epoch to avoid partitioning themselves off from the synchronized nodes in the network.

Future work

If either of the above metrics shows significant network skew over time, future versions of Filecoin may include potential timestamp/epoch correction periods at regular intervals.

When recovering from exceptional chain-halting outages (for example, all implementations panic on a given block), the network can potentially opt for per-outage “dead zone” rules banning the authoring of blocks during the outage epochs to prevent attack vectors related to unmined epochs during chain restart.

Future versions of the Filecoin protocol may use Verifiable Delay Functions (VDFs) to strongly enforce block time and fulfill this leader election requirement; we choose to explicitly assume clock synchrony until hardware VDF security has been proven more extensively.

Key Store

The Key Store is a fundamental abstraction in any full Filecoin node. It is used to store the keypairs associated with a given miner’s address and distinct workers (should the miner choose to run multiple workers).

Node security depends in large part on keeping these keys secure. To that end, we recommend keeping keys separate from any given subsystem, using a separate key store to sign requests on behalf of subsystems, and keeping those keys not used as part of mining in cold storage.

import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import address "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type KeyStore struct {
    MinerAddress  address.Address
    OwnerKey      filcrypto.VRFKeyPair
    WorkerKey     filcrypto.VRFKeyPair
}
package key_store

Filecoin storage miners rely on three main components:

  • The miner address is uniquely assigned to a given storage miner actor upon calling registerMiner() in the Storage Power Consensus Subsystem. It is a unique identifier for a given storage miner to which its power and other keys will be associated.
  • The owner keypair is provided by the miner ahead of registration, with its public key associated with the miner address. Block rewards and other payments are made to the ownerAddress.
  • The worker keypair can be chosen and changed by the miner, with its public key associated with the miner address. It is used to sign transactions, signatures, etc.

While miner addresses are unique, multiple storage miner actors can share an owner public key or likewise a worker public key.

TODO:

  • potential recommendations or clear disclaimers with regards to the consequences of failed key security
  • protocol for changing worker keys in filecoin

Files & Data

Filecoin’s primary aim is to store clients’ Files and Data. This section details data structures and tooling related to working with files, chunking, encoding, graph representations, Pieces, storage abstractions, and more.

File

// Path is an opaque locator for a file (e.g. in a unix-style filesystem).
type Path string

// File is a variable length data container.
// The File interface is modeled after a unix-style file, but abstracts the
// underlying storage system.
type File interface {
    Path()   Path
    Size()   int
    Close()  error

    // Read reads from File into buf, starting at offset, and for size bytes.
    Read(offset int, size int, buf Bytes) struct {size int, e error}

    // Write writes from buf into File, starting at offset, and for size bytes.
    Write(offset int, size int, buf Bytes) struct {size int, e error}
}

FileStore - Local Storage for Files

The FileStore is an abstraction used to refer to any underlying system or device that Filecoin will store its data to. It is based on Unix filesystem semantics, and includes the notion of Paths. This abstraction is here in order to make sure Filecoin implementations make it easy for end-users to replace the underlying storage system with whatever suits their needs. The simplest version of FileStore is just the host operating system’s file system.

// FileStore is an object that can store and retrieve files by path.
type FileStore struct {
    Open(p Path)           union {f File, e error}
    Create(p Path)         union {f File, e error}
    Store(p Path, f File)  error
    Delete(p Path)         error

    // maybe add:
    // Copy(SrcPath, DstPath)
}
Varying user needs

Filecoin user needs vary significantly, and many users – especially miners – will implement complex storage architectures underneath and around Filecoin. The FileStore abstraction is here to make these varying needs easy to satisfy. All file and sector local data storage in the Filecoin Protocol is defined in terms of this FileStore interface, which makes implementations easy to swap out, and makes it easy for end-users to replace the underlying storage with their system of choice.

Implementation examples

The FileStore interface may be implemented by many kinds of backing data storage systems. For example:

  • The host Operating System file system
  • Any Unix/Posix file system
  • RAID-backed file systems
  • Networked or distributed file systems (NFS, HDFS, etc)
  • IPFS
  • Databases
  • NAS systems
  • Raw serial or block devices
  • Raw hard drives (hdd sectors, etc)

Implementations SHOULD implement support for the host OS file system. Implementations MAY implement support for other storage systems.

Piece - a part of a file

A Piece is an object that represents a whole or part of a File, and is used by Clients and Miners in Deals. Clients hire Miners to store Pieces.

The piece data structure is designed for proving storage of arbitrary IPLD graphs and client data. This diagram shows the detailed composition of a piece and its proving tree, including both full and bandwidth-optimized piece data structures.

Pieces, Proving Trees, and Piece Data Structures
import ipld "github.com/filecoin-project/specs/libraries/ipld"

// PieceCID is the main reference to pieces in Filecoin. It is the CID
// of the Piece.
type PieceCID ipld.CID

type NumBytes UVarint  // TODO: move into util

// PieceSize is the size of a piece, in bytes
type PieceSize struct {
    PayloadSize   NumBytes
    OverheadSize  NumBytes

    Total()       NumBytes
}

// PieceInfo is an object that describes details about a piece, and allows
// decoupling storage of this information from the piece itself.
type PieceInfo struct {
    ID    PieceID
    Size  PieceSize
    // TODO: store which algorithms were used to construct this piece.
}

// Piece represents the basic unit of tradeable data in Filecoin. Clients
// break files and data up into Pieces, maybe apply some transformations,
// and then hire Miners to store the Pieces.
//
// The kinds of transformations that may occur include erasure coding,
// encryption, and more.
//
// Note: pieces are well formed.
type Piece struct {
    Info       PieceInfo

    // tree is the internal representation of Piece. It is a tree
    // formed according to a sequence of algorithms, which make the
    // piece able to be verified.
    tree       PieceTree

    // Payload is the user's data.
    Payload()  Bytes

    // Data returns the serialized representation of the Piece.
    // It includes the payload data, and intermediate tree objects,
    // formed according to relevant storage algorithms.
    Data()     Bytes
}

// // LocalPieceRef is an object used to refer to pieces in local storage.
// // This is used by subsystems to store and locate pieces.
// type LocalPieceRef struct {
//   ID   PieceID
//   Path file.Path
// }

// PieceTree is a data structure used to form pieces. The algorithms involved
// in the storage proofs determine the shape of PieceTree and how it must be
// constructed.
//
// Usually, a node in PieceTree will include either Children or Data, but not
// both.
//
// TODO: move this into filproofs -- use a tree from there, as that's where
// the algorithms are defined. Or keep this as an interface, met by others.
type PieceTree struct {
    Children  [PieceTree]
    Data      Bytes
}

PieceStore - storing and indexing pieces

A PieceStore is an object that can store and retrieve pieces from some local storage. The PieceStore additionally keeps an index of pieces.

import ipld "github.com/filecoin-project/specs/libraries/ipld"

type PieceID UVarint

// PieceStore is an object that stores pieces into some local storage.
// it is internally backed by an IpldStore.
type PieceStore struct {
    Store              ipld.Store
    Index              {PieceID: Piece}

    Get(i PieceID)     struct {p Piece, e error}
    Put(p Piece)       error
    Delete(i PieceID)  error
}

Data Transfer in Filecoin

Data Transfer is a system for transferring all or part of a Piece across the network when a deal is made.

Modules

This diagram shows how Data Transfer and its modules fit into the picture with the Storage and Retrieval Markets. In particular, note how the Data Transfer Request Validators from the markets are plugged into the Data Transfer module, but their code belongs in the Markets system.

Data Transfer - Push Flow

Terminology

  • Push Request: A request to send data to the other party
  • Pull Request: A request to have the other party send data
  • Requestor: The party that initiates the data transfer request (whether Push or Pull)
  • Responder: The party that receives the data transfer request
  • Data Transfer Voucher: A wrapper around storage or retrieval data that can identify and validate the transfer request to the other party
  • Request Validator: The data transfer module only initiates a transfer when the responder can validate that the request is tied directly to either an existing storage deal or retrieval deal. Validation is not performed by the data transfer module itself. Instead, a request validator inspects the data transfer voucher to determine whether to respond to the request.
  • Scheduler: Once a request is negotiated and validated, actual transfer is managed by a scheduler on both sides. The scheduler is part of the data transfer module but is isolated from the negotiation process. It has access to an underlying verifiable transport protocol and uses it to send data and track progress.
  • Subscriber: An external component that monitors progress of a data transfer by subscribing to data transfer events, such as progress or completion.
  • GraphSync: The default underlying transfer protocol used by the Scheduler. The full graphsync specification can be found at https://github.com/ipld/specs/blob/master/block-layer/graphsync/graphsync.md

Request Phases

There are two basic phases to any data transfer:

  1. Negotiation - the requestor and responder agree to the transfer by validating the data transfer voucher
  2. Transfer - once both parties have negotiated and agreed upon the transfer, the data is actually transferred. The default protocol used to do the transfer is Graphsync

Note that the Negotiation and Transfer stages can occur in separate round trips, or potentially in the same round trip, where the requesting party implicitly agrees by sending the request, and the responding party can agree and immediately send or receive data.

Example Flows

Push Flow
Data Transfer - Push Flow
  1. A requestor initiates a Push transfer when it wants to send data to another party.
  2. The requestor’s data transfer module will send a push request to the responder along with the data transfer voucher. It also puts the data transfer in the scheduler queue, meaning it expects the responder to initiate a transfer once the request is verified
  3. The responder’s data transfer module validates the data transfer request via the Validator provided as a dependency by the responder
  4. The responder’s data transfer module schedules the transfer
  5. The responder makes a GraphSync request for the data
  6. The requestor receives the graphsync request, verifies it’s in the scheduler and begins sending data
  7. The responder receives data and can produce an indication of progress
  8. The responder completes receiving data, and notifies any listeners

The push flow is ideal for storage deals, where the client initiates the push once it verifies that the deal is signed and on chain.

Pull Flow
Data Transfer - Pull Flow
  1. A requestor initiates a Pull transfer when it wants to receive data from another party.
  2. The requestor’s data transfer module will send a pull request to the responder along with the data transfer voucher.
  3. The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder
  4. The responder’s data transfer module schedules the transfer (meaning it is expecting the requestor to initiate the actual transfer)
  5. The responder’s data transfer module sends a response to the requestor saying it has accepted the transfer and is waiting for the requestor to initiate the transfer
  6. The requestor schedules the data transfer
  7. The requestor makes a GraphSync request for the data
  8. The responder receives the graphsync request, verifies it’s in the scheduler and begins sending data
  9. The requestor receives data and can produce an indication of progress
  10. The requestor completes receiving data, and notifies any listeners

The pull flow is ideal for retrieval deals, where the client initiates the pull when the deal is agreed upon.

Alternate Pull Flow - Single Round Trip

Data Transfer - Single Round Trip Pull Flow
  1. A requestor initiates a Pull transfer when it wants to receive data from another party.
  2. The requestor’s DTM schedules the data transfer
  3. The requestor makes a Graphsync request to the responder with a data transfer request
  4. The responder receives the graphsync request, and forwards the data transfer request to the data transfer module
  5. The requestor’s data transfer module will send a pull request to the responder along with the data transfer voucher.
  6. The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder
  7. The responder’s data transfer module schedules the transfer
  8. The responder sends a graphsync response along with a data transfer accepted response piggybacked
  9. The requestor receives data and can produce an indication of progress
  10. The requestor completes receiving data, and notifies any listeners

Protocol

A data transfer CAN be negotiated over the network via the Data Transfer Protocol, a libp2p protocol type.

A Pull request expects a response. The requestor does not initiate the transfer until they know the request is accepted.

The responder should send a response to a push request as well, so the requestor can release any resources (if the request is not accepted). However, if the responder accepts the request, they can immediately initiate the transfer.

Using the Data Transfer Protocol as an independent libp2p communication mechanism is not a hard requirement – as long as both parties have an implementation of the Data Transfer Subsystem that can talk to the other, any transport mechanism (including offline mechanisms) is acceptable.

Data Structures

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"

import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"

type StorageDeal struct {}
type RetrievalDeal struct {}

// A DataTransferVoucher is used to validate
// a data transfer request against the underlying storage or retrieval deal
// that precipitated it
type DataTransferVoucher union {
    StorageDealVoucher
    RetrievalDealVoucher
}

type StorageDealVoucher struct {
    deal StorageDeal
}

type RetrievalDealVoucher struct {
    deal RetrievalDeal
}

type Ongoing struct {}
type Paused struct {}
type Completed struct {}
type Failed struct {}
type ChannelNotFoundError struct {}

type DataTransferStatus union {
    Ongoing
    Paused
    Completed
    Failed
    ChannelNotFoundError
}

type TransferID UInt

type ChannelID struct {
    to libp2p.PeerID
    id TransferID
}

// All immutable data for a channel
type DataTransferChannel struct {
    // an identifier for this channel shared by requestor and responder, set by requestor through protocol
    transferID  TransferID
    // base CID for the piece being transferred
    PieceRef    ipld.CID
    // portion of Piece to return, specified by an IPLD selector
    Selector    ipld.Selector
    // used to verify this channel
    voucher     DataTransferVoucher
    // the party that is sending the data (not who initiated the request)
    sender      libp2p.PeerID
    // the party that is receiving the data (not who initiated the request)
    recipient   libp2p.PeerID
    // expected amount of data to be transferred
    totalSize   UVarint
}

// DataTransferState is immutable channel data plus mutable state
type DataTransferState struct @(mutable) {
    DataTransferChannel
    // total bytes sent from this node (0 if receiver)
    sent                 UVarint
    // total bytes received by this node (0 if sender)
    received             UVarint
}

type Open struct {
    Initiator libp2p.PeerID
}

type SendData struct {
    BytesToSend UInt
}

type Progress struct {
    BytesSent UInt
}

type Pause struct {
    Initiator libp2p.PeerID
}

type Error struct {
    ErrorMsg string
}

type Complete struct {}

type DataTransferEvent union {
    Open
    SendData
    Progress
    Pause
    Error
    Complete
}

type DataTransferSubscriber struct {
    OnEvent(event DataTransferEvent, channelState DataTransferState)
}

// RequestValidator is an interface implemented by the client of the data transfer module to validate requests
type RequestValidator struct {
    ValidatePush(
        sender    libp2p.PeerID
        voucher   DataTransferVoucher
        PieceRef  ipld.CID
        Selector  ipld.Selector
    )
    ValidatePull(
        receiver  libp2p.PeerID
        voucher   DataTransferVoucher
        PieceRef  ipld.CID
        Selector  ipld.Selector
    )
    ValidateIntermediate(
        otherPeer  libp2p.PeerID
        voucher    DataTransferVoucher
        PieceRef   ipld.CID
        Selector   ipld.Selector
    )
}

type DataTransferSubsystem struct @(mutable) {
    host              libp2p.Node
    dataTransfers     {ChannelID: DataTransferState}
    requestValidator  RequestValidator
    pieceStore        piece.PieceStore

    // open a data transfer that will send data to the recipient peer and
    // transfer parts of the piece that match the selector
    OpenPushDataChannel(
        to        libp2p.PeerID
        voucher   DataTransferVoucher
        PieceRef  ipld.CID
        Selector  ipld.Selector
    ) ChannelID

    // open a data transfer that will request data from the sending peer and
    // transfer parts of the piece that match the selector
    OpenPullDataChannel(
        to        libp2p.PeerID
        voucher   DataTransferVoucher
        PieceRef  ipld.CID
        Selector  ipld.Selector
    ) ChannelID

    // close an open channel (effectively a cancel)
    CloseDataTransferChannel(x ChannelID)

    // get status of a transfer
    TransferChannelStatus(x ChannelID) DataTransferStatus

    // pause an ongoing channel
    PauseChannel(x ChannelID)

    // resume an ongoing channel
    ResumeChannel(x ChannelID)

    // send an additional voucher for an in progress request
    SendIntermediateVoucher(x ChannelID, voucher DataTransferVoucher)

    // get notified when certain types of events happen
    SubscribeToEvents(subscriber DataTransferSubscriber)

    // get all in progress transfers
    InProgressChannels() {ChannelID: DataTransferState}
}

VM - Virtual Machine

import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"

// VM is the object that controls execution.
// It is a stateless, pure function. It uses no local storage.
//
// TODO: make it just a function: VMExec(...) ?
type VM struct {
    // Execute computes and returns outTree, a new StateTree which is the
    // application of msgs to inTree.
    //
    // *Important:* Execute is intended to be a pure function, with no side-effects.
    // however, storage of the new parts of the computed outTree may exist in
    // local storage.
    //
    // *TODO:* define whether this should take 0, 1, or 2 IpldStores:
    // - (): storage of IPLD datastructures is assumed implicit
    // - (store): get and put to same IpldStore
    // - (inStore, outStore): get from inStore, put new structures into outStore
    //
    // This decision impacts callers, and potentially impacts how we reason about
    // local storage, and intermediate storage. It is definitely the case that
    // implementations may want to operate on this differently, depending on
    // how their IpldStores work.
    Execute(inTree st.StateTree, msgs [msg.UnsignedMessage]) union {outTree st.StateTree, err error}
}

VM Actor Interface

// This contains actor things that are _outside_ of VM exection.
// The VM uses this to execute actors.

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import ipld "github.com/filecoin-project/specs/libraries/ipld"

// TokenAmount is an amount of Filecoin tokens. This type is used within
// the VM in message execution, to account movement of tokens, payment
// of VM gas, and more.
type TokenAmount Int  // TODO: should be BigInt (attoFIL)

// MethodNum is an integer that represents a particular method
// in an actor's function table. These numbers are used to compress
// invocation of actor code, and to decouple human language concerns
// about method names from the ability to uniquely refer to a particular
// method.
//
// Consider MethodNum numbers to be similar in concerns as for
// offsets in function tables (in programming languages), and for
// tags in ProtocolBuffer fields. Tags in ProtocolBuffers recommend
// assigning a unique tag to a field and never reusing that tag.
// If a field is no longer used, the field name may change but should
// still remain defined in the code to ensure the tag number is not
// reused accidentally. The same should apply to the MethodNum
// associated with methods in Filecoin VM Actors.
type MethodNum Int

// MethodParams is an array of objects to pass into a method. This
// is the list of arguments/parameters.
type MethodParams [util.Serialization]

// CallSeqNum is an invocation (Call) sequence (Seq) number (Num).
// This is a value used for securing against replay attacks:
// each AccountActor (user) invocation must have a unique CallSeqNum
// value. The sequentiality of the numbers is used to make it
// easy to verify, and to order messages.
//
// Q&A
// - > Does it have to be sequential?
//   No, a random nonce could work against replay attacks, but
//   making it sequential makes it much easier to verify.
// - > Can it be used to order events?
//   Yes, a user may submit N separate messages with increasing
//   sequence number, causing them to execute in order.
//
type CallSeqNum UVarint

// Code is a serialized object that contains the code for an Actor.
// Until we accept external user-provided contracts, this is the
// serialized code for the actor in the Filecoin Specification.
type Code Bytes

// CodeID identifies an actor's code (either one of the builtin actors,
// or, in the future, potentially a CID of VM code for a custom actor.)
type CodeID union {
    Builtin        BuiltinActorID
    UserDefined    ipld.CID

    IsBuiltin()    bool
    IsSingleton()  bool
}

type BuiltinActorID enum {
    Init
    Cron
    Account
    PaymentChannel
    StoragePower
    StorageMiner
    StorageMarket
}

// Actor is a base computation object in the Filecoin VM. Similar
// to Actors in the Actor Model (programming), or Objects in Object-
// Oriented Programming, or Ethereum Contracts in the EVM.
//
// ActorState represents the on-chain storage all actors keep.
type ActorState struct {
    // Identifies the code this actor executes.
    CodeID
    // CID of the root of optional actor-specific sub-state.
    State       ActorSubstateCID
    // Balance of tokens held by this actor.
    Balance     TokenAmount
    // Expected sequence number of the next message sent by this actor.
    // Initially zero, incremented when an account actor originates a top-level message.
    // Always zero for other actors.
    CallSeqNum
}

type ActorSystemStateCID ipld.CID
type ActorSubstateCID ipld.CID

// ActorState represents the on-chain storage actors keep. This type is a
// union of concrete types, for each of the Actors:
// - InitActor
// - CronActor
// - AccountActor
// - PaymentChannelActor
// - StoragePowerActor
// - StorageMinerActor
// - StorageMarketActor
//
// TODO: move this into a directory inside the VM that patches in all
// the actors from across the system. this will be where we declare/mount
// all actors in the VM.
// type ActorState union {
//     Init struct {
//         AddressMap  {addr.Address: ActorID}
//         NextID      ActorID
//     }
// }

type BalanceTableHAMT {addr.Address: TokenAmount}
package actor

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import util "github.com/filecoin-project/specs/util"

var IMPL_FINISH = util.IMPL_FINISH
var TODO = util.TODO

type Serialization = util.Serialization

const (
	MethodSend        = MethodNum(0)
	MethodConstructor = MethodNum(1)

	// TODO: remove this once canonical method numbers are finalized
	MethodPlaceholder = MethodNum(-(1 << 30))
)

func (st *ActorState_I) CID() ipld.CID {
	panic("TODO")
}

func (id *CodeID_I) IsBuiltin() bool {
	switch id.Which() {
	case CodeID_Case_Builtin:
		return true
	default:
		panic("Actor code ID case not supported")
	}
}

func (id *CodeID_I) IsSingleton() bool {
	if !id.IsBuiltin() {
		return false
	}

	for _, a := range []BuiltinActorID{
		BuiltinActorID_Init,
		BuiltinActorID_Cron,
		BuiltinActorID_StoragePower,
		BuiltinActorID_StorageMarket,
	} {
		if id.As_Builtin() == a {
			return true
		}
	}

	for _, a := range []BuiltinActorID{
		BuiltinActorID_Account,
		BuiltinActorID_PaymentChannel,
		BuiltinActorID_StorageMiner,
	} {
		if id.As_Builtin() == a {
			return false
		}
	}

	panic("Actor code ID case not supported")
}

func (x ActorSubstateCID) Ref() *ActorSubstateCID {
	return &x
}

func TokenAmount_Placeholder() TokenAmount {
	TODO()
	panic("")
}

// Interface for runtime/VMContext functionality (to avoid circular dependency in Go imports)
type Has_AbortArg interface {
	AbortArg()
}

func CheckArgs(params *MethodParams, rt Has_AbortArg, cond bool) {
	if !cond {
		rt.AbortArg()
	}
}

func ArgPop(params *MethodParams, rt Has_AbortArg) Serialization {
	CheckArgs(params, rt, len(*params) > 0)
	ret := (*params)[0]
	*params = (*params)[1:]
	return ret
}

func ArgEnd(params *MethodParams, rt Has_AbortArg) {
	CheckArgs(params, rt, len(*params) == 0)
}

// Create a new entry in the balance table, with the specified initial balance.
// May fail if the specified address already exists in the table.
func BalanceTable_WithNewAddressEntry(table BalanceTableHAMT, address addr.Address, initBalance TokenAmount) (
	ret BalanceTableHAMT, ok bool) {

	IMPL_FINISH()
	panic("")
}

// Delete the specified entry in the balance table.
// May fail if the specified address does not exist in the table.
func BalanceTable_WithDeletedAddressEntry(table BalanceTableHAMT, address addr.Address) (
	ret BalanceTableHAMT, ok bool) {

	IMPL_FINISH()
	panic("")
}

// Add the given amount to the given address's balance table entry.
func BalanceTable_WithAdd(table BalanceTableHAMT, address addr.Address, amount TokenAmount) (
	ret BalanceTableHAMT, ok bool) {

	IMPL_FINISH()
	panic("")
}

// Subtract the given amount (or as much as possible, without making the resulting amount negative)
// from the given address's balance table entry, independent of any minimum balance maintenance
// requirement.
// Note: ok should be set to true here, even if the operation caused the entry to hit zero.
// The only failure case is when the address does not exist in the table.
func BalanceTable_WithSubtractPreservingNonnegative(
	table BalanceTableHAMT, address addr.Address, amount TokenAmount) (
	ret BalanceTableHAMT, amountSubtracted TokenAmount, ok bool) {

	return BalanceTable_WithExtractPartial(table, address, amount, TokenAmount(0))
}

// Extract the given amount from the given address's balance table entry, subject to the requirement
// of a minimum balance `minBalanceMaintain`. If not possible to withdraw the entire amount
// requested, then the balance will remain unchanged.
func BalanceTable_WithExtract(
	table BalanceTableHAMT, address addr.Address, amount TokenAmount, minBalanceMaintain TokenAmount) (
	ret BalanceTableHAMT, ok bool) {

	IMPL_FINISH()
	panic("")
}

// Extract as much as possible (may be zero) up to the specified amount from the given address's
// balance table entry, subject to the requirement of a minimum balance `minBalanceMaintain`.
func BalanceTable_WithExtractPartial(
	table BalanceTableHAMT, address addr.Address, amount TokenAmount, minBalanceMaintain TokenAmount) (
	ret BalanceTableHAMT, amountExtracted TokenAmount, ok bool) {

	IMPL_FINISH()
	panic("")
}

// Determine whether the given address's entry in the balance table meets the required minimum
// `minBalanceMaintain`.
func BalanceTable_IsEntrySufficient(
	table BalanceTableHAMT, address addr.Address, minBalanceMaintain TokenAmount) (ret bool, ok bool) {

	IMPL_FINISH()
	panic("")
}

// Retrieve the balance table entry corresponding to the given address.
func BalanceTable_GetEntry(
	table BalanceTableHAMT, address addr.Address) (
	ret TokenAmount, ok bool) {

	IMPL_FINISH()
	panic("")
}

Address

// Address is defined here because this is where addresses start to make sense.
// Addresses refer to actors defined in the StateTree, so Addresses are defined
// on top of the StateTree.
//
// TODO: potentially move into a library, or its own directory.
type Address struct {
    NetworkID enum {
        Testnet
        Mainnet
    }

    Data union {
        ID                   ActorID
        PublicKey_Secp256k1  KeyHash  // TODO: reorder
        ActorExec            ActorExecHash
        PublicKey_BLS        KeyHash
    }

    VerifySyntax()   bool
    String()         AddressString
    IsIDType()       bool  // Whether the address is an ID-address
    IsKeyType()      bool  // Whether the address is a public key address (SECP or BLS)
    Equals(Address)  bool
    Ref()            Address_Ptr
}

// ActorID is a sequential number assigned to actors in a Filecoin Chain.
// ActorIDs are assigned by the InitActor, when an Actor is introduced into
// the Runtime.
type ActorID Int

type KeyHash Bytes
type ActorExecHash Bytes

type AddressString string

State Tree

The State Tree is the output of applying the operations of the Filecoin blockchain to the state.

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import ipld "github.com/filecoin-project/specs/libraries/ipld"

// The on-chain state data structure is a map (HAMT) of addresses to actor states.
// Only ID addresses are expected as keys.
type StateTree struct {
    ActorStates  {addr.Address: actor.ActorState}  // HAMT

    // Returns the CID of the root node of the HAMT.
    RootCID()    ipld.CID

    // Looks up an actor state by address.
    GetActor(a addr.Address) (state actor.ActorState, ok bool)
}
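
As a rough sketch, the address-to-actor-state mapping and the GetActor lookup can be modeled with a plain Go map standing in for the HAMT (the minimal ActorState struct here is hypothetical, not the spec's full type):

```go
package main

import "fmt"

// Minimal stand-ins for the spec's types.
type Address string
type ActorState struct {
	CodeID  string
	Balance int64
}

// StateTree maps ID addresses to actor states (a plain map standing in
// for the on-chain HAMT).
type StateTree struct {
	ActorStates map[Address]ActorState
}

// GetActor looks up an actor state by address.
func (st *StateTree) GetActor(a Address) (ActorState, bool) {
	s, ok := st.ActorStates[a]
	return s, ok
}

func main() {
	st := &StateTree{ActorStates: map[Address]ActorState{
		"t0100": {CodeID: "account", Balance: 42},
	}}
	s, ok := st.GetActor("t0100")
	fmt.Println(s.Balance, ok)
}
```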

TODO

  • Add ConvenienceAPI state to provide more user-friendly views.

VM Message - Actor Method Invocation

A message is the unit of communication between two actors, and thus the primitive cause of changes in state. A message combines:

  • a token amount to be transferred from the sender to the receiver, and
  • a method with parameters to be invoked on the receiver (optional).

Actor code may send additional messages to other actors while processing a received message. Messages are processed synchronously: an actor waits for a sent message to complete before resuming control.

The processing of a message consumes units of computation and storage denominated in gas. A message’s gas limit provides an upper bound on its computation. The sender of a message pays for the gas units consumed by a message’s execution (including all nested messages) at a gas price they determine. A block producer chooses which messages to include in a block and is rewarded according to each message’s gas price and consumption, forming a market.
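
As a concrete illustration of the fee calculation implied above, a sender pays gasPrice times the gas actually consumed, with consumption bounded by the message's GasLimit. The sketch below uses plain int64 stand-ins for the spec's TokenAmount and GasAmount types:

```go
package main

import (
	"errors"
	"fmt"
)

// messageFee computes the FIL a sender pays for a message, assuming
// fee = gasPrice * gasUsed and gasUsed <= gasLimit (illustrative only).
func messageFee(gasPrice, gasUsed, gasLimit int64) (int64, error) {
	if gasUsed > gasLimit {
		return 0, errors.New("gas used exceeds the message's gas limit")
	}
	return gasPrice * gasUsed, nil
}

func main() {
	fee, err := messageFee(2, 1000, 5000)
	fmt.Println(fee, err) // 2000 <nil>
}
```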

Message syntax validation

A syntactically invalid message must not be transmitted, retained in a message pool, or included in a block.

A syntactically valid UnsignedMessage:

  • has a well-formed, non-empty To address,
  • has a well-formed, non-empty From address,
  • has a non-negative CallSeqNum,
  • has Value no less than zero and no greater than the total token supply (2e9 * 1e18),
  • has a non-negative MethodNum,
  • has non-empty Params only if MethodNum is non-zero,
  • has a non-negative GasPrice,
  • has a GasLimit that is at least equal to the gas consumption associated with the message’s serialized bytes, and
  • has a GasLimit that is no greater than the block gas limit network parameter.

When transmitted individually (before inclusion in a block), a message is packaged as SignedMessage, regardless of signature scheme used. A valid signed message:

  • has a total serialized size no greater than message.MessageMaxSize.
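
The syntactic rules can be collected into a single predicate. The sketch below uses a simplified stand-in for UnsignedMessage with Params permitted only for non-zero method numbers; the field types and the minGasForBytes and blockGasLimit parameters are illustrative, not spec definitions:

```go
package main

import "fmt"

// Simplified stand-in for UnsignedMessage.
type Msg struct {
	To, From   string
	CallSeqNum int64
	Value      float64 // stand-in; the spec uses a big-integer TokenAmount
	MethodNum  int64
	Params     []byte
	GasPrice   int64
	GasLimit   int64
}

const totalSupply = 2e9 * 1e18 // total FIL supply, in attoFIL

// syntaxValid checks the syntactic validity rules for an unsigned message.
func syntaxValid(m Msg, minGasForBytes, blockGasLimit int64) bool {
	switch {
	case m.To == "" || m.From == "":
		return false
	case m.CallSeqNum < 0:
		return false
	case m.Value < 0 || m.Value > totalSupply:
		return false
	case m.MethodNum < 0:
		return false
	case m.MethodNum == 0 && len(m.Params) > 0:
		return false // params are only meaningful for a method invocation
	case m.GasPrice < 0:
		return false
	case m.GasLimit < minGasForBytes || m.GasLimit > blockGasLimit:
		return false
	}
	return true
}

func main() {
	m := Msg{To: "t01", From: "t02", Value: 1, GasLimit: 100}
	fmt.Println(syntaxValid(m, 10, 1000)) // true
}
```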

Message semantic validation

Semantic validation refers to validation requiring information outside of the message itself.

A semantically valid SignedMessage must carry a signature that verifies the payload as having been signed with the public key of the account actor identified by the From address. Note that when the From address is an ID-address, the public key must be looked up in the state of the sending account actor in the parent state identified by the block.

Note: the sending actor must exist in the parent state identified by the block that includes the message. This means that it is not valid for a single block to include a message that creates a new account actor and a message from that same actor. The first message from that actor must wait until a subsequent epoch. Message pools may exclude messages from an actor that is not yet present in the chain state.

There is no further semantic validation of a message that can cause a block including the message to be invalid. Every syntactically valid and correctly signed message can be included in a block and will produce a receipt from execution. However, a message may fail to execute to completion, in which case it will not effect the desired state change.

The reason for this “no message semantic validation” policy is that the state that a message will be applied to cannot be known before the message is executed as part of a tipset. A block producer does not know whether another block will precede it in the tipset, thus altering the state to which the block’s messages will apply from the declared parent state.

import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"

// GasAmount is a quantity of gas.
type GasAmount struct {
    value                BigInt

    Add(GasAmount)       GasAmount
    Subtract(GasAmount)  GasAmount
    SubtractIfNonnegative(GasAmount) (ret GasAmount, ok bool)
    LessThan(GasAmount) bool
    Equals(GasAmount) bool
    Scale(int) GasAmount
}

// GasPrice is a Gas-to-FIL cost
type GasPrice actor.TokenAmount

type UnsignedMessage struct {
    // Address of the receiving actor.
    To          addr.Address
    // Address of the sending actor.
    From        addr.Address
    // Expected CallSeqNum of the sending actor (only for top-level messages).
    CallSeqNum  actor.CallSeqNum

    // Amount of value to transfer from sender's to receiver's balance.
    Value       actor.TokenAmount

    // Optional method to invoke on receiver, zero for a plain value send.
    Method      actor.MethodNum
    // Serialized parameters to the method (if method is non-zero).
    Params      actor.MethodParams

    GasPrice
    GasLimit    GasAmount
}  // representation tuple

type SignedMessage struct {
    Message    UnsignedMessage
    Signature  filcrypto.Signature
}  // representation tuple
package message

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import util "github.com/filecoin-project/specs/util"

var IMPL_FINISH = util.IMPL_FINISH

type Serialization = util.Serialization

// The maximum serialized size of a SignedMessage.
const MessageMaxSize = 32 * 1024

func UnsignedMessage_Make(
	from addr.Address,
	to addr.Address,
	method actor.MethodNum,
	params actor.MethodParams,
	callSeqNum actor.CallSeqNum,
	value actor.TokenAmount,
	gasPrice GasPrice,
	gasLimit GasAmount,
) UnsignedMessage {
	return &UnsignedMessage_I{
		From_:       from,
		To_:         to,
		Method_:     method,
		Params_:     params,
		CallSeqNum_: callSeqNum,
		Value_:      value,
		GasPrice_:   gasPrice,
		GasLimit_:   gasLimit,
	}
}

func SignedMessage_Make(message UnsignedMessage, signature filcrypto.Signature) SignedMessage {
	return &SignedMessage_I{
		Message_:   message,
		Signature_: signature,
	}
}

func Sign(message UnsignedMessage, keyPair filcrypto.SigKeyPair) (SignedMessage, error) {
	sig, err := filcrypto.Sign(keyPair, util.Bytes(Serialize_UnsignedMessage(message)))
	if err != nil {
		return nil, err
	}
	return SignedMessage_Make(message, sig), nil
}

func SignatureVerificationError() error {
	IMPL_FINISH()
	panic("")
}

func Verify(message SignedMessage, publicKey filcrypto.PublicKey) (UnsignedMessage, error) {
	m := util.Bytes(Serialize_UnsignedMessage(message.Message()))
	sigValid, err := filcrypto.Verify(publicKey, message.Signature(), m)
	if err != nil {
		return nil, err
	}
	if !sigValid {
		return nil, SignatureVerificationError()
	}
	return message.Message(), nil
}

func (x *GasAmount_I) Add(y GasAmount) GasAmount {
	IMPL_FINISH()
	panic("")
}

func (x *GasAmount_I) Subtract(y GasAmount) GasAmount {
	IMPL_FINISH()
	panic("")
}

func (x *GasAmount_I) SubtractIfNonnegative(y GasAmount) (ret GasAmount, ok bool) {
	ret = x.Subtract(y)
	ok = true
	if ret.LessThan(GasAmount_Zero()) {
		ret = x
		ok = false
	}
	return
}

func (x *GasAmount_I) LessThan(y GasAmount) bool {
	IMPL_FINISH()
	panic("")
}

func (x *GasAmount_I) Equals(y GasAmount) bool {
	IMPL_FINISH()
	panic("")
}

func (x *GasAmount_I) Scale(count int) GasAmount {
	IMPL_FINISH()
	panic("")
}

func GasAmount_Affine(b GasAmount, x int, m GasAmount) GasAmount {
	return b.Add(m.Scale(x))
}

func GasAmount_Zero() GasAmount {
	return GasAmount_FromInt(0)
}

func GasAmount_FromInt(x int) GasAmount {
	IMPL_FINISH()
	panic("")
}

func GasAmount_SentinelUnlimited() GasAmount {
	// Amount of gas larger than any feasible execution; meant to indicate unlimited gas
	// (e.g., for builtin system method invocations).
	return GasAmount_FromInt(1).Scale(1e9).Scale(1e9) // 10^18
}
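
The GasAmount operations above are left as IMPL_FINISH stubs. One plausible (illustrative, non-normative) realization over math/big, matching the SubtractIfNonnegative semantics shown earlier, is:

```go
package main

import (
	"fmt"
	"math/big"
)

// gasAmount is an illustrative big-integer gas quantity.
type gasAmount struct{ value *big.Int }

func gasFromInt(x int64) gasAmount { return gasAmount{big.NewInt(x)} }

func (x gasAmount) Add(y gasAmount) gasAmount {
	return gasAmount{new(big.Int).Add(x.value, y.value)}
}

func (x gasAmount) Subtract(y gasAmount) gasAmount {
	return gasAmount{new(big.Int).Sub(x.value, y.value)}
}

func (x gasAmount) LessThan(y gasAmount) bool { return x.value.Cmp(y.value) < 0 }

func (x gasAmount) Scale(count int64) gasAmount {
	return gasAmount{new(big.Int).Mul(x.value, big.NewInt(count))}
}

// SubtractIfNonnegative mirrors the spec: on underflow the original value is
// returned unchanged and ok is false.
func (x gasAmount) SubtractIfNonnegative(y gasAmount) (gasAmount, bool) {
	ret := x.Subtract(y)
	if ret.LessThan(gasFromInt(0)) {
		return x, false
	}
	return ret, true
}

func main() {
	g, ok := gasFromInt(10).SubtractIfNonnegative(gasFromInt(25))
	fmt.Println(g.value, ok) // original value preserved, ok is false
}
```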

VM Runtime Environment (Inside the VM)

Receipts

A MessageReceipt contains the result of a top-level message execution.

A syntactically valid receipt has:

  • a non-negative ExitCode,
  • a non-empty ReturnValue only if the exit code is zero, and
  • a non-negative GasUsed.
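
These rules amount to a small predicate; the sketch below uses a simplified stand-in for MessageReceipt:

```go
package main

import "fmt"

// Simplified stand-in for MessageReceipt.
type Receipt struct {
	ExitCode    int64
	ReturnValue []byte
	GasUsed     int64
}

// receiptSyntaxValid checks the syntactic receipt rules.
func receiptSyntaxValid(r Receipt) bool {
	if r.ExitCode < 0 || r.GasUsed < 0 {
		return false
	}
	// A non-empty return value is only permitted on success (exit code zero).
	if r.ExitCode != 0 && len(r.ReturnValue) > 0 {
		return false
	}
	return true
}

func main() {
	fmt.Println(receiptSyntaxValid(Receipt{ExitCode: 0, ReturnValue: []byte("ok"), GasUsed: 5}))
}
```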

vm/runtime interface

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

// Runtime is the VM's internal runtime object.
// This is everything that is accessible to actors, beyond parameters.
type Runtime interface {
    CurrEpoch() block.ChainEpoch

    // Randomness returns a (pseudo)random stream (indexed by offset) for the current epoch.
    Randomness(e block.ChainEpoch, offset UInt) util.Randomness

    // Note: This is the _immediate_ caller.
    // Not necessarily the actor in the From field of the initial on-chain Message.
    ImmediateCaller() addr.Address
    ValidateImmediateCallerIs(caller addr.Address)
    ValidateImmediateCallerAcceptAnyOfType(type_ actor.BuiltinActorID)
    ValidateImmediateCallerAcceptAny()
    ValidateImmediateCallerMatches(CallerPattern)

    // The address of the actor receiving the message.
    CurrReceiver()         addr.Address

    // The actor who mined the block in which the initial on-chain message appears.
    ToplevelBlockWinner()  addr.Address

    AcquireState()         ActorStateHandle

    SuccessReturn()        InvocOutput
    ValueReturn(Bytes)     InvocOutput

    // Throw an error indicating a failure condition has occurred, from which the given actor
    // code is unable to recover.
    Abort(errExitCode exitcode.ExitCode, msg string)

    // Calls Abort with InvalidArguments_User.
    AbortArgMsg(msg string)
    AbortArg()

    // Calls Abort with InconsistentState_User.
    AbortStateMsg(msg string)
    AbortState()

    // Calls Abort with InsufficientFunds_User.
    AbortFundsMsg(msg string)
    AbortFunds()

    // Calls Abort with RuntimeAPIError.
    // For internal use only (not in actor code).
    AbortAPI(msg string)

    // Check that the given condition is true (and call Abort if not).
    Assert(bool)

    CurrentBalance()  actor.TokenAmount
    ValueReceived()   actor.TokenAmount

    // Run a (pure function) computation, consuming the gas cost associated with that function.
    // This mechanism is intended to capture the notion of an ABI between the VM and native
    // functions, and should be used for any function whose computation is expensive.
    Compute(ComputeFunctionID, args [Any]) Any

    // Sends a message to another actor.
    // If the invoked method does not return successfully, this caller will be aborted too.
    SendPropagatingErrors(input InvocInput) InvocOutput
    // Sends a message to another actor, trapping an unsuccessful execution.
    // This may only be invoked by the singleton Cron actor.
    SendCatchingErrors(input InvocInput) (output InvocOutput, exitCode exitcode.ExitCode)

    // Computes an address for a new actor. The returned address is intended to uniquely refer to
    // the actor even in the event of a chain re-org (whereas an ID-address might refer to a
    // different actor after messages are re-ordered).
    NewActorAddress() addr.Address

    // Creates an actor in the state tree, with empty state. May only be called by InitActor.
    CreateActor(
        // The new actor's code identifier.
        codeId   actor.CodeID
        // Address under which the new actor's state will be stored.
        address  addr.Address
    )

    IpldGet(c ipld.CID) union {Bytes, error}
    IpldPut(x ipld.Object) ipld.CID
}

type InvocInput struct {
    To      addr.Address
    Method  actor.MethodNum
    Params  actor.MethodParams
    Value   actor.TokenAmount
}

type InvocOutput struct {
    ExitCode     exitcode.ExitCode
    ReturnValue  Bytes
}

type MessageReceipt struct {
    ExitCode     exitcode.ExitCode
    ReturnValue  Bytes
    GasUsed      msg.GasAmount
}  // representation tuple

type ActorExecAddressSeed struct {
    creator             addr.Address
    toplevelCallSeqNum  actor.CallSeqNum
    internalCallSeqNum  actor.CallSeqNum
}

vm/runtime implementation

package runtime

import (
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
	exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
	gascost "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/gascost"
	st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	util "github.com/filecoin-project/specs/util"
)

type ActorSubstateCID = actor.ActorSubstateCID
type ExitCode = exitcode.ExitCode
type RuntimeError = exitcode.RuntimeError

var EnsureErrorCode = exitcode.EnsureErrorCode
var SystemError = exitcode.SystemError

var Assert = util.Assert
var IMPL_FINISH = util.IMPL_FINISH
var TODO = util.TODO

// Name should be set per unique Filecoin network
var Name = "mainnet"

func ActorSubstateCID_Equals(x, y ActorSubstateCID) bool {
	IMPL_FINISH()
	panic("")
}

func NetworkName() string {
	return Name
}

// ActorCode is the interface that all actor code types should satisfy.
// It is merely a method dispatch interface.
type ActorCode interface {
	InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput
}

type ActorStateHandle struct {
	_initValue *ActorSubstateCID
	_rt        *VMContext
}

func (h *ActorStateHandle) UpdateRelease(newStateCID ActorSubstateCID) {
	h._rt._updateReleaseActorSubstate(newStateCID)
}

func (h *ActorStateHandle) Release(checkStateCID ActorSubstateCID) {
	h._rt._releaseActorSubstate(checkStateCID)
}

func (h *ActorStateHandle) Take() ActorSubstateCID {
	if h._initValue == nil {
		h._rt._apiError("Must call Take() only once on actor substate object")
	}
	ret := *h._initValue
	h._initValue = nil
	return ret
}

// Concrete instantiation of the Runtime interface. This should be instantiated by the
// interpreter once per actor method invocation, and responds to that method's Runtime
// API calls.
type VMContext struct {
	_globalStateInit      st.StateTree
	_globalStatePending   st.StateTree
	_running              bool
	_actorAddress         addr.Address
	_actorStateAcquired   bool
	_actorSubstateUpdated bool

	_immediateCaller addr.Address
	// Note: This is the actor in the From field of the initial on-chain message.
	// Not necessarily the immediate caller.
	_toplevelSender      addr.Address
	_toplevelBlockWinner addr.Address
	// Top-level call sequence number of the "From" actor in the initial on-chain message.
	_toplevelSenderCallSeqNum actor.CallSeqNum
	// Sequence number representing the total number of calls (to any actor, any method)
	// during the current top-level message execution.
	// Note: resets with every top-level message, and therefore not necessarily monotonic.
	_internalCallSeqNum actor.CallSeqNum
	_valueReceived      actor.TokenAmount
	_gasRemaining       msg.GasAmount
	_numValidateCalls   int
	_output             InvocOutput
}

func VMContext_Make(
	toplevelSender addr.Address,
	toplevelBlockWinner addr.Address,
	toplevelSenderCallSeqNum actor.CallSeqNum,
	internalCallSeqNum actor.CallSeqNum,
	globalState st.StateTree,
	actorAddress addr.Address,
	valueReceived actor.TokenAmount,
	gasRemaining msg.GasAmount) *VMContext {

	return &VMContext{
		_globalStateInit:      globalState,
		_globalStatePending:   globalState,
		_running:              false,
		_actorAddress:         actorAddress,
		_actorStateAcquired:   false,
		_actorSubstateUpdated: false,

		_toplevelSender:           toplevelSender,
		_toplevelBlockWinner:      toplevelBlockWinner,
		_toplevelSenderCallSeqNum: toplevelSenderCallSeqNum,
		_internalCallSeqNum:       internalCallSeqNum,
		_valueReceived:            valueReceived,
		_gasRemaining:             gasRemaining,
		_numValidateCalls:         0,
		_output:                   nil,
	}
}

func (rt *VMContext) AbortArgMsg(msg string) {
	rt.Abort(exitcode.UserDefinedError(exitcode.InvalidArguments_User), msg)
}

func (rt *VMContext) AbortArg() {
	rt.AbortArgMsg("Invalid arguments")
}

func (rt *VMContext) AbortStateMsg(msg string) {
	rt.Abort(exitcode.UserDefinedError(exitcode.InconsistentState_User), msg)
}

func (rt *VMContext) AbortState() {
	rt.AbortStateMsg("Inconsistent state")
}

func (rt *VMContext) AbortFundsMsg(msg string) {
	rt.Abort(exitcode.UserDefinedError(exitcode.InsufficientFunds_User), msg)
}

func (rt *VMContext) AbortFunds() {
	rt.AbortFundsMsg("Insufficient funds")
}

func (rt *VMContext) AbortAPI(msg string) {
	rt.Abort(exitcode.SystemError(exitcode.RuntimeAPIError), msg)
}

func (rt *VMContext) _rtAllocGasCreateActor() {
	if !rt._actorAddress.Equals(addr.InitActorAddr) {
		rt.AbortAPI("Only InitActor may call rt.CreateActor_DeductGas")
	}

	rt._rtAllocGas(gascost.ExecNewActor)
}

func (rt *VMContext) CreateActor(codeID actor.CodeID, address addr.Address) {
	if !rt._actorAddress.Equals(addr.InitActorAddr) {
		rt.AbortAPI("Only InitActor may call rt.CreateActor")
	}

	// Create empty actor state.
	actorState := &actor.ActorState_I{
		CodeID_:     codeID,
		State_:      actor.ActorSubstateCID(ipld.EmptyCID()),
		Balance_:    actor.TokenAmount(0),
		CallSeqNum_: 0,
	}

	// Put it in the state tree.
	actorStateCID := actor.ActorSystemStateCID(rt.IpldPut(actorState))
	rt._updateActorSystemStateInternal(address, actorStateCID)

	rt._rtAllocGasCreateActor()
}

func (rt *VMContext) _updateActorSystemStateInternal(actorAddress addr.Address, newStateCID actor.ActorSystemStateCID) {
	newGlobalStatePending, err := rt._globalStatePending.Impl().WithActorSystemState(actorAddress, newStateCID)
	if err != nil {
		panic("Error in runtime implementation: failed to update actor system state")
	}
	rt._globalStatePending = newGlobalStatePending
}

func (rt *VMContext) _updateActorSubstateInternal(actorAddress addr.Address, newStateCID actor.ActorSubstateCID) {
	newGlobalStatePending, err := rt._globalStatePending.Impl().WithActorSubstate(actorAddress, newStateCID)
	if err != nil {
		panic("Error in runtime implementation: failed to update actor substate")
	}
	rt._globalStatePending = newGlobalStatePending
}

func (rt *VMContext) _updateReleaseActorSubstate(newStateCID ActorSubstateCID) {
	rt._checkRunning()
	rt._checkActorStateAcquired()
	rt._updateActorSubstateInternal(rt._actorAddress, newStateCID)
	rt._actorSubstateUpdated = true
	rt._actorStateAcquired = false
}

func (rt *VMContext) _releaseActorSubstate(checkStateCID ActorSubstateCID) {
	rt._checkRunning()
	rt._checkActorStateAcquired()

	prevState, ok := rt._globalStatePending.GetActor(rt._actorAddress)
	util.Assert(ok)
	prevStateCID := prevState.State()
	if !ActorSubstateCID_Equals(prevStateCID, checkStateCID) {
		rt.AbortAPI("State CID differs upon release call")
	}

	rt._actorStateAcquired = false
}

func (rt *VMContext) Assert(cond bool) {
	if !cond {
		rt.Abort(exitcode.SystemError(exitcode.RuntimeAssertFailure), "Runtime assertion failed")
	}
}

func (rt *VMContext) _checkActorStateAcquiredFlag(expected bool) {
	rt._checkRunning()
	if rt._actorStateAcquired != expected {
		rt._apiError("State updates and message sends must be disjoint")
	}
}

func (rt *VMContext) _checkActorStateAcquired() {
	rt._checkActorStateAcquiredFlag(true)
}

func (rt *VMContext) _checkActorStateNotAcquired() {
	rt._checkActorStateAcquiredFlag(false)
}

func (rt *VMContext) Abort(errExitCode exitcode.ExitCode, errMsg string) {
	errExitCode = exitcode.EnsureErrorCode(errExitCode)
	rt._throwErrorFull(errExitCode, errMsg)
}

func (rt *VMContext) ImmediateCaller() addr.Address {
	return rt._immediateCaller
}

func (rt *VMContext) CurrReceiver() addr.Address {
	return rt._actorAddress
}

func (rt *VMContext) ToplevelBlockWinner() addr.Address {
	return rt._toplevelBlockWinner
}

func (rt *VMContext) ValidateImmediateCallerMatches(
	callerExpectedPattern CallerPattern) {

	rt._checkRunning()
	rt._checkNumValidateCalls(0)
	caller := rt.ImmediateCaller()
	if !callerExpectedPattern.Matches(caller) {
		rt.AbortAPI("Method invoked by incorrect caller")
	}
	rt._numValidateCalls += 1
}

type CallerPattern struct {
	Matches func(addr.Address) bool
}

func CallerPattern_MakeSingleton(x addr.Address) CallerPattern {
	return CallerPattern{
		Matches: func(y addr.Address) bool { return x.Equals(y) },
	}
}

func CallerPattern_MakeAcceptAnyOfType(rt *VMContext, type_ actor.BuiltinActorID) CallerPattern {
	return CallerPattern{
		Matches: func(y addr.Address) bool {
			codeID, ok := rt._getActorCodeID(y)
			if !ok {
				panic("Internal runtime error: actor not found")
			}
			Assert(codeID != nil)
			return (codeID.IsBuiltin() && (codeID.As_Builtin() == type_))
		},
	}
}

func CallerPattern_MakeAcceptAny() CallerPattern {
	return CallerPattern{
		Matches: func(addr.Address) bool { return true },
	}
}

func (rt *VMContext) ValidateImmediateCallerIs(callerExpected addr.Address) {
	rt.ValidateImmediateCallerMatches(CallerPattern_MakeSingleton(callerExpected))
}

func (rt *VMContext) ValidateImmediateCallerAcceptAnyOfType(type_ actor.BuiltinActorID) {
	rt.ValidateImmediateCallerMatches(CallerPattern_MakeAcceptAnyOfType(rt, type_))
}

func (rt *VMContext) ValidateImmediateCallerAcceptAny() {
	rt.ValidateImmediateCallerMatches(CallerPattern_MakeAcceptAny())
}

func (rt *VMContext) _checkNumValidateCalls(x int) {
	if rt._numValidateCalls != x {
		rt.AbortAPI("Method must validate caller identity exactly once")
	}
}

func (rt *VMContext) _checkRunning() {
	if !rt._running {
		panic("Internal runtime error: actor API called with no actor code running")
	}
}

func (rt *VMContext) SuccessReturn() InvocOutput {
	return InvocOutput_Make(nil)
}

func (rt *VMContext) ValueReturn(value util.Bytes) InvocOutput {
	return InvocOutput_Make(value)
}

func (rt *VMContext) _throwError(exitCode ExitCode) {
	rt._throwErrorFull(exitCode, "")
}

func (rt *VMContext) _throwErrorFull(exitCode ExitCode, errMsg string) {
	panic(exitcode.RuntimeError_Make(exitCode, errMsg))
}

func (rt *VMContext) _apiError(errMsg string) {
	rt._throwErrorFull(exitcode.SystemError(exitcode.RuntimeAPIError), errMsg)
}

func _gasAmountAssertValid(x msg.GasAmount) {
	if x.LessThan(msg.GasAmount_Zero()) {
		panic("Interpreter error: negative gas amount")
	}
}

// Deduct an amount of gas corresponding to cost about to be incurred, but not necessarily
// incurred yet.
func (rt *VMContext) _rtAllocGas(x msg.GasAmount) {
	_gasAmountAssertValid(x)
	var ok bool
	rt._gasRemaining, ok = rt._gasRemaining.SubtractIfNonnegative(x)
	if !ok {
		rt._throwError(exitcode.SystemError(exitcode.OutOfGas))
	}
}

func (rt *VMContext) _transferFunds(from addr.Address, to addr.Address, amount actor.TokenAmount) error {
	rt._checkRunning()
	rt._checkActorStateNotAcquired()

	newGlobalStatePending, err := rt._globalStatePending.Impl().WithFundsTransfer(from, to, amount)
	if err != nil {
		return err
	}

	rt._globalStatePending = newGlobalStatePending
	return nil
}

func (rt *VMContext) _getActorCodeID(actorAddr addr.Address) (ret actor.CodeID, ok bool) {
	IMPL_FINISH()
	panic("")
}

type ErrorHandlingSpec int

const (
	PropagateErrors ErrorHandlingSpec = 1 + iota
	CatchErrors
)

// TODO: This function should be private (not intended to be exposed to actors).
// (merging runtime and interpreter packages should solve this)
func (rt *VMContext) SendToplevelFromInterpreter(input InvocInput) (
	MessageReceipt, st.StateTree) {

	rt._running = true
	ret := rt._sendInternal(input, CatchErrors)
	rt._running = false
	return ret, rt._globalStatePending
}

func _catchRuntimeErrors(f func() InvocOutput) (output InvocOutput, exitCode exitcode.ExitCode) {
	defer func() {
		if r := recover(); r != nil {
			switch err := r.(type) {
			case *RuntimeError:
				output = InvocOutput_Make(nil)
				exitCode = err.ExitCode
			default:
				panic(r)
			}
		}
	}()

	output = f()
	exitCode = exitcode.OK()
	return
}

func _invokeMethodInternal(
	rt *VMContext,
	actorCode ActorCode,
	method actor.MethodNum,
	params actor.MethodParams) (
	ret InvocOutput, exitCode exitcode.ExitCode, internalCallSeqNumFinal actor.CallSeqNum) {

	if method == actor.MethodSend {
		ret = InvocOutput_Make(nil)
		exitCode = exitcode.OK()
		return
	}

	rt._running = true
	ret, exitCode = _catchRuntimeErrors(func() InvocOutput {
		methodOutput := actorCode.InvokeMethod(rt, method, params)
		if rt._actorSubstateUpdated {
			rt._rtAllocGas(gascost.UpdateActorSubstate)
		}
		rt._checkActorStateNotAcquired()
		rt._checkNumValidateCalls(1)
		return methodOutput
	})
	rt._running = false

	internalCallSeqNumFinal = rt._internalCallSeqNum

	return
}

func (rtOuter *VMContext) _sendInternal(input InvocInput, errSpec ErrorHandlingSpec) MessageReceipt {
	rtOuter._checkRunning()
	rtOuter._checkActorStateNotAcquired()

	initGasRemaining := rtOuter._gasRemaining

	rtOuter._rtAllocGas(gascost.InvokeMethod(input.Value()))

	toActor, ok := rtOuter._globalStatePending.GetActor(input.To())
	if !ok {
		rtOuter._throwError(exitcode.SystemError(exitcode.ActorCodeNotFound))
	}

	toActorCode, err := loadActorCode(toActor.CodeID())
	if err != nil {
		rtOuter._throwError(exitcode.SystemError(exitcode.ActorCodeNotFound))
	}

	err = rtOuter._transferFunds(rtOuter._actorAddress, input.To(), input.Value())
	if err != nil {
		rtOuter._throwError(exitcode.SystemError(exitcode.InsufficientFunds_System))
	}

	rtInner := VMContext_Make(
		rtOuter._toplevelSender,
		rtOuter._toplevelBlockWinner,
		rtOuter._toplevelSenderCallSeqNum,
		rtOuter._internalCallSeqNum+1,
		rtOuter._globalStatePending,
		input.To(),
		input.Value(),
		rtOuter._gasRemaining,
	)

	invocOutput, exitCode, internalCallSeqNumFinal := _invokeMethodInternal(
		rtInner,
		toActorCode,
		input.Method(),
		input.Params(),
	)

	_gasAmountAssertValid(rtOuter._gasRemaining.Subtract(rtInner._gasRemaining))
	rtOuter._gasRemaining = rtInner._gasRemaining
	gasUsed := initGasRemaining.Subtract(rtOuter._gasRemaining)
	_gasAmountAssertValid(gasUsed)

	rtOuter._internalCallSeqNum = internalCallSeqNumFinal

	if exitCode.Equals(exitcode.SystemError(exitcode.OutOfGas)) {
		// OutOfGas error cannot be caught
		rtOuter._throwError(exitCode)
	}

	if errSpec == PropagateErrors && exitCode.IsError() {
		rtOuter._throwError(exitcode.SystemError(exitcode.MethodSubcallError))
	}

	if exitCode.AllowsStateUpdate() {
		rtOuter._globalStatePending = rtInner._globalStatePending
	}

	return MessageReceipt_Make(invocOutput, exitCode, gasUsed)
}

func (rtOuter *VMContext) _sendInternalOutputs(input InvocInput, errSpec ErrorHandlingSpec) (InvocOutput, exitcode.ExitCode) {
	ret := rtOuter._sendInternal(input, errSpec)
	return InvocOutput_Make(ret.ReturnValue()), ret.ExitCode()
}

func (rt *VMContext) SendPropagatingErrors(input InvocInput) InvocOutput {
	ret, _ := rt._sendInternalOutputs(input, PropagateErrors)
	return ret
}

func (rt *VMContext) SendCatchingErrors(input InvocInput) (InvocOutput, exitcode.ExitCode) {
	rt.ValidateImmediateCallerIs(addr.CronActorAddr)
	return rt._sendInternalOutputs(input, CatchErrors)
}

func (rt *VMContext) CurrentBalance() actor.TokenAmount {
	IMPL_FINISH()
	panic("")
}

func (rt *VMContext) ValueReceived() actor.TokenAmount {
	return rt._valueReceived
}

func (rt *VMContext) Randomness(e block.ChainEpoch, offset uint64) util.Randomness {
	// TODO: validate CurrEpoch() - K <= e <= CurrEpoch()?
	// TODO: finish
	TODO()
	panic("")
}

func (rt *VMContext) NewActorAddress() addr.Address {
	seed := &ActorExecAddressSeed_I{
		creator_:            rt._immediateCaller,
		toplevelCallSeqNum_: rt._toplevelSenderCallSeqNum,
		internalCallSeqNum_: rt._internalCallSeqNum,
	}
	hash := addr.ActorExecHash(Serialize_ActorExecAddressSeed(seed))

	return addr.Address_Make_ActorExec(addr.Address_NetworkID_Testnet, hash)
}

func (rt *VMContext) IpldPut(x ipld.Object) ipld.CID {
	var serializedSize int
	IMPL_FINISH()
	panic("") // compute serializedSize

	rt._rtAllocGas(gascost.IpldPut(serializedSize))

	IMPL_FINISH()
	panic("") // write to IPLD store
}

func (rt *VMContext) IpldGet(c ipld.CID) Runtime_IpldGet_FunRet {
	IMPL_FINISH()
	panic("") // get from IPLD store

	var serializedSize int
	IMPL_FINISH()
	panic("") // retrieve serializedSize

	rt._rtAllocGas(gascost.IpldGet(serializedSize))

	IMPL_FINISH()
	panic("") // return item
}

func (rt *VMContext) CurrEpoch() block.ChainEpoch {
	IMPL_FINISH()
	panic("")
}

func (rt *VMContext) AcquireState() ActorStateHandle {
	rt._checkRunning()
	rt._checkActorStateNotAcquired()
	rt._actorStateAcquired = true

	state, ok := rt._globalStatePending.GetActor(rt._actorAddress)
	util.Assert(ok)
	return ActorStateHandle{
		_initValue: state.State().Ref(),
		_rt:        rt,
	}
}

func (rt *VMContext) Compute(f ComputeFunctionID, args []Any) Any {
	def, found := _computeFunctionDefs[f]
	if !found {
		rt.AbortAPI("Function definition in rt.Compute() not found")
	}
	gasCost := def.GasCostFn(args)
	rt._rtAllocGas(gasCost)
	return def.Body(args)
}

func MessageReceipt_Make(output InvocOutput, exitCode exitcode.ExitCode, gasUsed msg.GasAmount) MessageReceipt {
	return &MessageReceipt_I{
		ExitCode_:    exitCode,
		ReturnValue_: output.ReturnValue(),
		GasUsed_:     gasUsed,
	}
}

func MessageReceipt_MakeSystemError(errCode exitcode.SystemErrorCode, gasUsed msg.GasAmount) MessageReceipt {
	return MessageReceipt_Make(
		InvocOutput_Make(nil),
		exitcode.SystemError(errCode),
		gasUsed,
	)
}

func InvocInput_Make(to addr.Address, method actor.MethodNum, params actor.MethodParams, value actor.TokenAmount) InvocInput {
	return &InvocInput_I{
		To_:     to,
		Method_: method,
		Params_: params,
		Value_:  value,
	}
}

func InvocOutput_Make(returnValue util.Bytes) InvocOutput {
	return &InvocOutput_I{
		ReturnValue_: returnValue,
	}
}

Code Loading

package runtime

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"

func loadActorCode(codeID actor.CodeID) (ActorCode, error) {

	panic("TODO")
	// TODO: resolve circular dependency

	// // load the code from StateTree.
	// // TODO: this is going to be enabled in the future.
	// // code, err := loadCodeFromStateTree(input.InTree, codeCID)
	// return staticActorCodeRegistry.LoadActor(codeCID)
}

VM Exit Code Constants

type ExitCode union {
    IsSuccess()          bool
    IsError()            bool
    AllowsStateUpdate()  bool
    Equals(ExitCode)     bool

    Success              struct {}
    SystemError          SystemErrorCode
    UserDefinedError     UVarint
}
package exitcode

import (
	"fmt"
)

import util "github.com/filecoin-project/specs/util"

type SystemErrorCode int
type UserDefinedErrorCode int

const (
	// TODO: remove once canonical error codes are finalized
	SystemErrorCode_Placeholder      = SystemErrorCode(-(1 << 30))
	UserDefinedErrorCode_Placeholder = UserDefinedErrorCode(-(1 << 30))
)

var IMPL_FINISH = util.IMPL_FINISH
var TODO = util.TODO

// TODO: assign all of these.
const (
	// ActorNotFound represents a failure to find an actor.
	ActorNotFound = SystemErrorCode_Placeholder + iota

	// ActorCodeNotFound represents a failure to find the code for a
	// particular actor in the VM registry.
	ActorCodeNotFound

	// InvalidMethod represents a failure to find a method in
	// an actor
	InvalidMethod

	// InvalidArgumentsSystem indicates that a method was called with the incorrect
	// number of arguments, or that its arguments did not satisfy its
	// preconditions
	InvalidArguments_System

	// InsufficientFunds represents a failure to apply a message, as
	// it did not carry sufficient funds for its application.
	InsufficientFunds_System

	// InvalidCallSeqNum represents a message invocation out of sequence.
	// This happens when message.CallSeqNum is not exactly actor.CallSeqNum + 1
	InvalidCallSeqNum

	// OutOfGas is returned when the execution of an actor method
	// (including its subcalls) uses more gas than initially allocated.
	OutOfGas

	// RuntimeAPIError is returned when an actor method invocation makes a call
	// to the runtime that does not satisfy its preconditions.
	RuntimeAPIError

	// RuntimeAssertFailure is returned when an actor method invocation calls
	// rt.Assert with a false condition.
	RuntimeAssertFailure

	// MethodSubcallError is returned when an actor method's Send call has
	// returned with a failure error code (and the Send call did not specify
	// to ignore errors).
	MethodSubcallError
)

const (
	InsufficientFunds_User = UserDefinedErrorCode_Placeholder + iota
	InvalidArguments_User
	InconsistentState_User

	InvalidSectorPacking
	SealVerificationFailed
	DeadlineExceeded
	InsufficientPledgeCollateral
)

func OK() ExitCode {
	return ExitCode_Make_Success(&ExitCode_Success_I{})
}

func SystemError(x SystemErrorCode) ExitCode {
	return ExitCode_Make_SystemError(ExitCode_SystemError(x))
}

func (x *ExitCode_I) IsSuccess() bool {
	return x.Which() == ExitCode_Case_Success
}

func (x *ExitCode_I) IsError() bool {
	return !x.IsSuccess()
}

func (x *ExitCode_I) AllowsStateUpdate() bool {
	return x.IsSuccess()
}

func (x *ExitCode_I) Equals(ExitCode) bool {
	IMPL_FINISH()
	panic("")
}

func EnsureErrorCode(x ExitCode) ExitCode {
	if !x.IsError() {
		// Throwing an error with a non-error exit code is itself an error
		x = SystemError(RuntimeAPIError)
	}
	return x
}

type RuntimeError struct {
	ExitCode ExitCode
	ErrMsg   string
}

func (x *RuntimeError) String() string {
	ret := fmt.Sprintf("Runtime error: %v", x.ExitCode)
	if x.ErrMsg != "" {
		ret += fmt.Sprintf(" (\"%v\")", x.ErrMsg)
	}
	return ret
}

func RuntimeError_Make(exitCode ExitCode, errMsg string) *RuntimeError {
	exitCode = EnsureErrorCode(exitCode)
	return &RuntimeError{
		ExitCode: exitCode,
		ErrMsg:   errMsg,
	}
}

func UserDefinedError(e UserDefinedErrorCode) ExitCode {
	return ExitCode_Make_UserDefinedError(ExitCode_UserDefinedError(e))
}

VM Gas Cost Constants

package runtime

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import util "github.com/filecoin-project/specs/util"

type Bytes = util.Bytes

var TODO = util.TODO

var (
	// TODO: assign all of these.
	GasAmountPlaceholder                 = msg.GasAmount_FromInt(1)
	GasAmountPlaceholder_UpdateStateTree = GasAmountPlaceholder
)

var (
	///////////////////////////////////////////////////////////////////////////
	// System operations
	///////////////////////////////////////////////////////////////////////////

	// Gas cost charged to the originator of an on-chain message (regardless of
	// whether it succeeds or fails in application) is given by:
	//   OnChainMessageBase + len(serialized message)*OnChainMessagePerByte
	OnChainMessageBase    = GasAmountPlaceholder
	OnChainMessagePerByte = GasAmountPlaceholder

	// Gas cost charged to the originator of a non-nil return value produced
	// by an on-chain message is given by:
	//   len(return value)*OnChainReturnValuePerByte
	OnChainReturnValuePerByte = GasAmountPlaceholder

	// Gas cost for any method invocation (including the original one initiated
	// by an on-chain message).
	InvokeMethodBase = GasAmountPlaceholder

	// Gas cost charged, in addition to InvokeMethodBase, if a method invocation
	// is accompanied by any nonzero currency amount.
	InvokeMethodTransferFunds = GasAmountPlaceholder_UpdateStateTree

	// Gas cost (Base + len*PerByte) for any Get operation to the IPLD store
	// in the runtime VM context.
	IpldGetBase    = GasAmountPlaceholder
	IpldGetPerByte = GasAmountPlaceholder

	// Gas cost (Base + len*PerByte) for any Put operation to the IPLD store
	// in the runtime VM context.
	//
	// Note: these costs should be significantly higher than the costs for Get
	// operations, since they reflect not only serialization/deserialization
	// but also persistent storage of chain data.
	IpldPutBase    = GasAmountPlaceholder
	IpldPutPerByte = GasAmountPlaceholder

	// Gas cost for updating an actor's substate (i.e., UpdateRelease).
	UpdateActorSubstate = GasAmountPlaceholder_UpdateStateTree

	// Gas cost for creating a new actor (via InitActor's Exec method).
	ExecNewActor = GasAmountPlaceholder

	///////////////////////////////////////////////////////////////////////////
	// Pure functions (VM ABI)
	///////////////////////////////////////////////////////////////////////////

	// Gas cost charged per public-key cryptography operation (e.g., signature
	// verification).
	PublicKeyCryptoOp = GasAmountPlaceholder
)

func OnChainMessage(onChainMessageLen int) msg.GasAmount {
	return msg.GasAmount_Affine(OnChainMessageBase, onChainMessageLen, OnChainMessagePerByte)
}

func OnChainReturnValue(returnValue Bytes) msg.GasAmount {
	retLen := 0
	if returnValue != nil {
		retLen = len(returnValue)
	}

	return msg.GasAmount_Affine(msg.GasAmount_Zero(), retLen, OnChainReturnValuePerByte)
}

func IpldGet(dataSize int) msg.GasAmount {
	return msg.GasAmount_Affine(IpldGetBase, dataSize, IpldGetPerByte)
}

func IpldPut(dataSize int) msg.GasAmount {
	return msg.GasAmount_Affine(IpldPutBase, dataSize, IpldPutPerByte)
}

func InvokeMethod(valueSent actor.TokenAmount) msg.GasAmount {
	ret := InvokeMethodBase

	TODO() // TODO: BigInt
	if valueSent > actor.TokenAmount(0) {
		ret = ret.Add(InvokeMethodTransferFunds)
	}
	return ret
}

System Actors

  • There are two system actors required for VM processing:
    • CronActor - runs critical functions at every epoch
    • InitActor - initializes new actors, records the network name
  • There are two more VM-level actors:
    • AccountActor - represents user accounts and has no methods beyond its constructor
    • RewardActor - holds unminted and unvested Filecoin tokens and pays them out

InitActor


import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

type InitActorState struct {
    // responsible for creating new actors
    AddressMap       {addr.Address: addr.ActorID}
    NextID           addr.ActorID
    NetworkName      string

    _assignNextID()  addr.ActorID
}

type InitActorCode struct {
    Constructor(r vmr.Runtime, networkName string)
    Exec(r vmr.Runtime, code actor.CodeID, params actor.MethodParams) addr.Address
    GetActorIDForAddress(r vmr.Runtime, address addr.Address) addr.ActorID
}
package sysactors

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import util "github.com/filecoin-project/specs/util"
import ipld "github.com/filecoin-project/specs/libraries/ipld"

const (
	Method_InitActor_Exec = actor.MethodPlaceholder + iota
	Method_InitActor_GetActorIDForAddress
)

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////
type InvocOutput = vmr.InvocOutput
type Runtime = vmr.Runtime
type Bytes = util.Bytes
type Serialization = util.Serialization

var CheckArgs = actor.CheckArgs
var ArgPop = actor.ArgPop
var ArgEnd = actor.ArgEnd

func _loadState(rt Runtime) (vmr.ActorStateHandle, InitActorState) {
	h := rt.AcquireState()
	stateCID := ipld.CID(h.Take())
	if ipld.CID_Equals(stateCID, ipld.EmptyCID()) {
		rt.AbortAPI("Actor state not initialized")
	}
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.AbortAPI("IPLD lookup error")
	}
	state, err := Deserialize_InitActorState(Serialization(stateBytes.As_Bytes()))
	if err != nil {
		rt.AbortAPI("State deserialization error")
	}
	return h, state
}
func Release(rt Runtime, h vmr.ActorStateHandle, st InitActorState) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st InitActorState) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *InitActorState_I) CID() ipld.CID {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

func (a *InitActorCode_I) Constructor(rt Runtime) InvocOutput {
	h := rt.AcquireState()
	st := &InitActorState_I{
		AddressMap_:  map[addr.Address]addr.ActorID{}, // TODO: HAMT
		NextID_:      addr.ActorID(addr.FirstNonSingletonActorId),
		NetworkName_: vmr.NetworkName(),
	}
	UpdateRelease(rt, h, st)
	return rt.ValueReturn(nil)
}

func (a *InitActorCode_I) Exec(rt Runtime, codeID actor.CodeID, constructorParams actor.MethodParams) InvocOutput {
	if !_codeIDSupportsExec(codeID) {
		rt.AbortArgMsg("cannot exec an actor of this type")
	}

	// Allocate an ID for this actor.
	h, st := _loadState(rt)
	actorID := st._assignNextID()
	idAddr := addr.Address_Make_ID(addr.Address_NetworkID_Testnet, actorID)

	// Store the mapping of a re-org-stable address to actor ID.
	// This address exists for use by messages coming from outside the system, in order to
	// stably address the newly created actor even if a chain re-org causes it to end up with
	// a different ID.
	newAddr := rt.NewActorAddress()
	st.AddressMap()[newAddr] = actorID
	UpdateRelease(rt, h, st)

	// Create the empty actor.
	// Its (empty) state must be stored under the ID-address before the constructor can be
	// invoked to initialize it.
	rt.CreateActor(codeID, idAddr)

	// Invoke its constructor. If construction fails, the error should propagate and cause
	// Exec to fail too.
	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     idAddr,
		Method_: actor.MethodConstructor,
		Params_: constructorParams,
		Value_:  rt.ValueReceived(),
	})

	return rt.ValueReturn(
		Bytes(addr.Serialize_Address_Compact(idAddr)))
}

func (s *InitActorState_I) _assignNextID() addr.ActorID {
	actorID := s.NextID_
	s.NextID_++
	return actorID
}

func (a *InitActorCode_I) GetActorIDForAddress(rt Runtime, address addr.Address) InvocOutput {
	h, st := _loadState(rt)
	actorID := st.AddressMap()[address]
	Release(rt, h, st)
	return rt.ValueReturn(Bytes(addr.Serialize_ActorID(actorID)))
}

func _codeIDSupportsExec(codeID actor.CodeID) bool {
	if !codeID.IsBuiltin() || codeID.IsSingleton() {
		return false
	}

	which := codeID.As_Builtin()

	if which == actor.BuiltinActorID_Account {
		// Special case: account actors must be created implicitly by sending value;
		// cannot be created via exec.
		return false
	}

	util.Assert(
		which == actor.BuiltinActorID_PaymentChannel ||
			which == actor.BuiltinActorID_StorageMiner)

	return true
}

func (a *InitActorCode_I) InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	switch method {
	case actor.MethodConstructor:
		ArgEnd(&params, rt)
		return a.Constructor(rt)

	case Method_InitActor_Exec:
		codeId, err := actor.Deserialize_CodeID(ArgPop(&params, rt))
		CheckArgs(&params, rt, err == nil)
		// Note: do not call ArgEnd (params is forwarded to Exec)
		return a.Exec(rt, codeId, params)

	case Method_InitActor_GetActorIDForAddress:
		address, err := addr.Deserialize_Address(ArgPop(&params, rt))
		CheckArgs(&params, rt, err == nil)
		ArgEnd(&params, rt)
		return a.GetActorIDForAddress(rt, address)

	default:
		rt.Abort(exitcode.SystemError(exitcode.InvalidMethod), "Invalid method")
		panic("")
	}
}

CronActor

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

type CronActorState struct {
    // Cron has no internal state
}

type CronTableEntry struct {
    ToAddr     addr.Address
    MethodNum  actor.MethodNum
}

type CronActorCode struct {
    // Entries is a set of actors (and corresponding methods) to call during EpochTick.
    // This can be done a bunch of ways. We do it this way here to make it easy to add
    // a handler to Cron elsewhere in the spec code. How to do this is implementation
    // specific.
    Entries [CronTableEntry]

    // EpochTick executes built-in periodic actions, run at every Epoch.
    // EpochTick(r) is called after all other messages in the epoch have been applied.
    // This can be seen as an implicit last message.
    EpochTick(r vmr.Runtime)
}
package sysactors

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import util "github.com/filecoin-project/specs/util"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

const (
	Method_CronActor_EpochTick = actor.MethodPlaceholder + iota
)

func (a *CronActorCode_I) Constructor(rt vmr.Runtime) InvocOutput {
	// Nothing; intentionally left blank.
	return rt.SuccessReturn()
}

func (a *CronActorCode_I) EpochTick(rt vmr.Runtime) InvocOutput {
	rt.ValidateImmediateCallerIs(addr.SystemActorAddr)

	// a.Entries is basically a static registry for now, loaded
	// in the interpreter static registry.
	for _, entry := range a.Entries() {
		rt.SendCatchingErrors(&vmr.InvocInput_I{
			To_:     entry.ToAddr(),
			Method_: entry.MethodNum(),
			Params_: []util.Serialization{},
			Value_:  actor.TokenAmount(0),
		})
	}

	return rt.SuccessReturn()
}

func (a *CronActorCode_I) InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	switch method {
	case actor.MethodConstructor:
		rt.Assert(len(params) == 0)
		return a.Constructor(rt)

	case Method_CronActor_EpochTick:
		rt.Assert(len(params) == 0)
		return a.EpochTick(rt)

	default:
		rt.Abort(exitcode.SystemError(exitcode.InvalidMethod), "Invalid method")
		panic("")
	}
}

AccountActor


import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type AccountActorCode struct {}

type AccountActorState struct {
    Address addr.Address
}
package sysactors

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

func (a *AccountActorCode_I) Constructor(rt vmr.Runtime) InvocOutput {
	// Nothing; intentionally left blank.
	return rt.SuccessReturn()
}

func (a *AccountActorCode_I) InvokeMethod(rt vmr.Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	switch method {
	case actor.MethodConstructor:
		rt.Assert(len(params) == 0)
		return a.Constructor(rt)

	default:
		// AccountActor has no methods.
		rt.Abort(exitcode.SystemError(exitcode.InvalidMethod), "Invalid method")
		panic("")
	}
}

RewardActor

RewardActor is where unminted and unvested Filecoin tokens are kept. At genesis, RewardActor is initialized with investor accounts, tokens, and vesting schedules in a RewardMap, which maps owner addresses to collections of Reward structs. A Reward struct contains a StartEpoch recording when the Reward was created, a Value representing the total number of tokens rewarded, and a ReleaseRate, which is the linear rate of release in units of FIL per epoch. AmountWithdrawn records how many tokens have been withdrawn from a Reward struct so far. An owner address can call WithdrawReward, which withdraws all of that address's vested tokens from the RewardMap. When AmountWithdrawn equals Value in a Reward struct, the struct is removed from the RewardMap.

RewardMap is also used in block reward minting to preserve the flexibility of introducing block reward vesting to the protocol. MintReward creates a new Reward struct and adds it to the RewardMap.

import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type Reward struct {
    StartEpoch       block.ChainEpoch
    Value            actor.TokenAmount
    ReleaseRate      actor.TokenAmount  // linear slope, unit: FIL per Epoch
    AmountWithdrawn  actor.TokenAmount
}

// ownerAddr to a collection of Reward
type RewardBalanceAMT {addr.Address: [Reward]}

type RewardActorState struct {
    RewardMap RewardBalanceAMT

    _withdrawReward(rt vmr.Runtime, ownerAddr addr.Address) actor.TokenAmount
}

type RewardActorCode struct {
    Constructor(rt vmr.Runtime)

    // Allocates a block reward to the owner of a miner, less
    // - penalty, which is burnt
    // - any amount of pledge collateral shortfall, which is transferred to storage power actor
    AwardBlockReward(rt vmr.Runtime, miner addr.Address, penalty actor.TokenAmount)

    // withdraw available funds from RewardMap
    WithdrawReward(rt vmr.Runtime)
}
package sysactors

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
import ipld "github.com/filecoin-project/specs/libraries/ipld"

const (
	Method_RewardActor_AwardBlockReward = actor.MethodPlaceholder + iota
)

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////

func (a *RewardActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, RewardActorState) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.AbortAPI("IPLD lookup error")
	}
	state := DeserializeState(stateBytes.As_Bytes())
	return h, state
}
func UpdateReleaseRewardActorState(rt Runtime, h vmr.ActorStateHandle, st RewardActorState) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *RewardActorState_I) CID() ipld.CID {
	panic("TODO")
}
func DeserializeState(x Bytes) RewardActorState {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

func (st *RewardActorState_I) _withdrawReward(rt vmr.Runtime, ownerAddr addr.Address) actor.TokenAmount {

	rewards, found := st.RewardMap()[ownerAddr]
	if !found {
		rt.AbortStateMsg("ra._withdrawReward: ownerAddr not found in RewardMap.")
	}

	rewardToWithdrawTotal := actor.TokenAmount(0)
	indicesToRemove := make([]int, 0, len(rewards))

	for i, r := range rewards {
		elapsedEpoch := rt.CurrEpoch() - r.StartEpoch()
		unlockedReward := actor.TokenAmount(uint64(r.ReleaseRate()) * uint64(elapsedEpoch))
		withdrawableReward := unlockedReward - r.AmountWithdrawn()

		if withdrawableReward < 0 {
			rt.AbortStateMsg("ra._withdrawReward: negative withdrawableReward.")
		}

		r.Impl().AmountWithdrawn_ = unlockedReward // modify rewards in place
		rewardToWithdrawTotal += withdrawableReward

		if r.AmountWithdrawn() == r.Value() {
			indicesToRemove = append(indicesToRemove, i)
		}
	}

	updatedRewards := removeIndices(rewards, indicesToRemove)
	st.RewardMap()[ownerAddr] = updatedRewards

	return rewardToWithdrawTotal
}

func (a *RewardActorCode_I) Constructor(rt vmr.Runtime) InvocOutput {
	// initialize Reward Map with investor accounts
	panic("TODO")
}

func (a *RewardActorCode_I) AwardBlockReward(rt vmr.Runtime, miner addr.Address, penalty actor.TokenAmount) {
	rt.ValidateImmediateCallerIs(addr.SystemActorAddr)
	// block reward function should live here
	// handle penalty greater than reward
	// put Reward into RewardMap
	panic("TODO")
}

// called by ownerAddress
func (a *RewardActorCode_I) WithdrawReward(rt vmr.Runtime) {
	// withdraw available funds from RewardMap
	h, st := a.State(rt)

	ownerAddr := rt.ImmediateCaller()
	withdrawableReward := st._withdrawReward(rt, ownerAddr)
	UpdateReleaseRewardActorState(rt, h, st)

	// send funds to owner
	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:    ownerAddr,
		Value_: withdrawableReward,
	})
}

func removeIndices(rewards []Reward, indices []int) []Reward {
	// remove fully paid out Rewards by indices
	panic("TODO")
}

VM Interpreter - Message Invocation (Outside VM)

The VM interpreter orchestrates the execution of messages from a tipset on that tipset’s parent state, producing a new state and a sequence of message receipts. The CIDs of this new state and of the receipt collection are included in blocks from the subsequent epoch, which must agree about those CIDs in order to form a new tipset.

Every state change is driven by the execution of a message. The messages from all the blocks in a tipset must be executed in order to produce a next state. All messages from the first block are executed before those of the second and subsequent blocks in the tipset. For each block, BLS-aggregated messages are executed first, then SECP signed messages.

Implicit messages

In addition to the messages explicitly included in each block, a few state changes at each epoch are made by implicit messages. Implicit messages are not transmitted between nodes, but constructed by the interpreter at evaluation time.

For each block in a tipset, an implicit message:

  • invokes the block producer’s miner actor to process the (already-validated) election PoSt submission, as the first message in the block;
  • invokes the reward actor to pay the block reward to the miner’s owner account, as the final message in the block;

For each tipset, an implicit message:

  • invokes the cron actor to process automated checks and payments, as the final message in the tipset.

All implicit messages are constructed with a From address being the distinguished system account actor. They specify a gas price of zero, but must be included in the computation. They must succeed (have an exit code of zero) in order for the new state to be computed. Receipts for implicit messages are not included in the receipt list; only explicit messages have an explicit receipt.

Gas payments

In most cases, the sender of a message pays the miner which produced the block including that message a gas fee for its execution.

The gas payments for each message execution are paid to the miner owner account immediately after that message is executed. There are no encumbrances to either the block reward or gas fees earned: both may be spent immediately.

Duplicate messages

Since different miners produce blocks in the same epoch, multiple blocks in a single tipset may include the same message (identified by the same CID). When this happens, the message is processed only the first time it is encountered in the tipset’s canonical order. Subsequent instances of the message are ignored and do not result in any state mutation, produce a receipt, or pay gas to the block producer.

The sequence of executions for a tipset is thus summarised:

  • process election post for first block
  • messages for first block (BLS before SECP)
  • pay reward for first block
  • process election post for second block
  • messages for second block (BLS before SECP, skipping any already encountered)
  • pay reward for second block
  • [… subsequent blocks …]
  • cron tick

Message validity and failure

Every message in a valid block can be processed and produce a receipt (note that block validity implies all messages are syntactically valid – see Message Syntax – and correctly signed). However, execution may or may not succeed, depending on the state to which the message is applied. If the execution of a message fails, the corresponding receipt will carry a non-zero exit code.

If a message fails for a reason attributable to the miner (such as including a message that could never have succeeded in the parent state, or one whose sender lacks funds to cover the maximum message cost), then the miner pays a penalty by burning the gas fee, rather than the sender paying fees to the block miner.

The only state changes resulting from a message failure are either:

  • incrementing of the sending actor’s CallSeqNum, and payment of gas fees from the sender to the owner of the miner of the block including the message; or
  • a penalty equivalent to the gas fee for the failed message, burnt by the miner (sender’s CallSeqNum unchanged).

A message execution will fail if, in the immediately preceding state:

  • the From actor does not exist in the state (miner penalized),
  • the From actor is not an account actor (miner penalized),
  • the CallSeqNum of the message does not match the CallSeqNum of the From actor (miner penalized),
  • the To actor does not exist in state and the To address is not a pubkey-style address (miner penalized),
  • the To actor does not exist in state and the message has a non-zero MethodNum (miner penalized),
  • the To actor exists but does not have a method corresponding to the non-zero MethodNum,
  • deserialized Params is not an array of length matching the arity of the To actor’s MethodNum method,
  • deserialized Params are not valid for the types specified by the To actor’s MethodNum method,
  • the From actor does not have sufficient balance to cover the sum of the message Value plus the maximum gas cost, GasLimit * GasPrice (miner penalized),
  • the invoked method consumes more gas than the GasLimit allows, or
  • the invoked method exits with a non-zero code (via Runtime.Abort()).

Note that if the To actor does not exist in state and the address is a valid H(pubkey) address, it will be created as an account actor, but only if the message has a MethodNum of zero.


vm/interpreter interface

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

type UInt64 UInt

// The messages from one block in a tipset.
type BlockMessages struct {
    BLSMessages   [msg.UnsignedMessage]
    SECPMessages  [msg.SignedMessage]
    Miner         addr.Address  // The block miner's actor address
    PoStProof     Bytes  // The miner's Election PoSt proof output
}

// The messages from a tipset, grouped by block.
type TipSetMessages struct {
    Blocks  [BlockMessages]
    Epoch   UInt64  // The chain epoch of the blocks
}

type VMInterpreter struct {
    ApplyTipSetMessages(inTree st.StateTree, msgs TipSetMessages) struct {outTree st.StateTree, ret [vmr.MessageReceipt]}
    ApplyMessage(
        inTree              st.StateTree
        msg                 msg.UnsignedMessage
        onChainMessageSize  int
        minerAddr           addr.Address
    ) struct {outTree st.StateTree, ret vmr.MessageReceipt, minerPenalty actor.TokenAmount}
}

vm/interpreter implementation

package interpreter

import (
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	storage_mining "github.com/filecoin-project/specs/systems/filecoin_mining/storage_mining"
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
	vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
	exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
	gascost "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/gascost"
	st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	sysactors "github.com/filecoin-project/specs/systems/filecoin_vm/sysactors"
	util "github.com/filecoin-project/specs/util"
)

type Bytes = util.Bytes

var Assert = util.Assert
var TODO = util.TODO
var IMPL_FINISH = util.IMPL_FINISH

type SenderResolveSpec int

const (
	SenderResolveSpec_OK SenderResolveSpec = 1 + iota
	SenderResolveSpec_Invalid
)

// Applies all the messages in a tipset, along with implicit block- and tipset-specific state
// transitions.
func (vmi *VMInterpreter_I) ApplyTipSetMessages(inTree st.StateTree, msgs TipSetMessages) (outTree st.StateTree, receipts []vmr.MessageReceipt) {
	outTree = inTree
	seenMsgs := make(map[ipld.CID]struct{}) // CIDs of messages already seen once.
	var receipt vmr.MessageReceipt

	for _, blk := range msgs.Blocks() {
		// Process block miner's Election PoSt.
		epostMessage := _makeElectionPoStMessage(outTree, blk.Miner())
		outTree = _applyMessageBuiltinAssert(outTree, epostMessage, blk.Miner())

		minerPenaltyTotal := actor.TokenAmount(0)
		var minerPenaltyCurr actor.TokenAmount

		// Process BLS messages from the block.
		for _, m := range blk.BLSMessages() {
			_, found := seenMsgs[_msgCID(m)]
			if found {
				continue
			}
			onChainMessageLen := len(msg.Serialize_UnsignedMessage(m))
			outTree, receipt, minerPenaltyCurr = vmi.ApplyMessage(outTree, m, onChainMessageLen, blk.Miner())
			minerPenaltyTotal += minerPenaltyCurr
			receipts = append(receipts, receipt)
			seenMsgs[_msgCID(m)] = struct{}{}
		}

		// Process SECP messages from the block.
		for _, sm := range blk.SECPMessages() {
			m := sm.Message()
			_, found := seenMsgs[_msgCID(m)]
			if found {
				continue
			}
			onChainMessageLen := len(msg.Serialize_SignedMessage(sm))
			outTree, receipt, minerPenaltyCurr = vmi.ApplyMessage(outTree, m, onChainMessageLen, blk.Miner())
			minerPenaltyTotal += minerPenaltyCurr
			receipts = append(receipts, receipt)
			seenMsgs[_msgCID(m)] = struct{}{}
		}

		// Pay block reward.
		rewardMessage := _makeBlockRewardMessage(outTree, blk.Miner(), minerPenaltyTotal)
		outTree = _applyMessageBuiltinAssert(outTree, rewardMessage, blk.Miner())
	}

	// Invoke cron tick, attributing it to the miner of the first block.

	TODO()
	// TODO: miners shouldn't be able to trigger cron by sending messages;
	// use ControlActor instead (https://github.com/filecoin-project/specs/issues/665)

	firstMiner := msgs.Blocks()[0].Miner()
	cronMessage := _makeCronTickMessage(outTree, firstMiner)
	outTree = _applyMessageBuiltinAssert(outTree, cronMessage, firstMiner)

	return
}

func (vmi *VMInterpreter_I) ApplyMessage(
	inTree st.StateTree, message msg.UnsignedMessage, onChainMessageSize int, minerAddr addr.Address) (
	retTree st.StateTree, retReceipt vmr.MessageReceipt, retMinerPenalty actor.TokenAmount) {

	minerOwner := storage_mining.GetMinerOwnerAddress_Assert(inTree, minerAddr)

	vmiGasRemaining := message.GasLimit()
	vmiGasUsed := msg.GasAmount_Zero()

	_applyReturn := func(
		tree st.StateTree, invocOutput vmr.InvocOutput, exitCode exitcode.ExitCode,
		senderResolveSpec SenderResolveSpec) {

		vmiGasRemainingFIL := _gasToFIL(vmiGasRemaining, message.GasPrice())
		vmiGasUsedFIL := _gasToFIL(vmiGasUsed, message.GasPrice())

		switch senderResolveSpec {
		case SenderResolveSpec_OK:
			// In this case, the sender is valid and has already transferred funds to the burnt funds actor
			// sufficient for the gas limit. Thus, we may refund the unused gas funds to the sender here.
			Assert(!message.GasLimit().LessThan(vmiGasUsed))
			Assert(message.GasLimit().Equals(vmiGasUsed.Add(vmiGasRemaining)))
			tree = _withTransferFundsAssert(tree, addr.BurntFundsActorAddr, message.From(), vmiGasRemainingFIL)
			tree = _withTransferFundsAssert(tree, addr.BurntFundsActorAddr, minerOwner, vmiGasUsedFIL)
			retMinerPenalty = actor.TokenAmount(0)

		case SenderResolveSpec_Invalid:
			retMinerPenalty = vmiGasUsedFIL

		default:
			Assert(false)
		}

		retTree = tree
		retReceipt = vmr.MessageReceipt_Make(invocOutput, exitCode, vmiGasUsed)
	}

	_applyError := func(tree st.StateTree, errExitCode exitcode.SystemErrorCode, senderResolveSpec SenderResolveSpec) {
		_applyReturn(tree, vmr.InvocOutput_Make(nil), exitcode.SystemError(errExitCode), senderResolveSpec)
	}

	// Deduct an amount of gas corresponding to cost about to be incurred, but not necessarily
	// incurred yet.
	_vmiAllocGas := func(amount msg.GasAmount) (vmiAllocGasOK bool) {
		vmiGasRemaining, vmiAllocGasOK = vmiGasRemaining.SubtractIfNonnegative(amount)
		vmiGasUsed = message.GasLimit().Subtract(vmiGasRemaining)
		Assert(!vmiGasRemaining.LessThan(msg.GasAmount_Zero()))
		Assert(!vmiGasUsed.LessThan(msg.GasAmount_Zero()))
		return
	}

	// Deduct an amount of gas corresponding to costs already incurred, and for which the
	// gas cost must be paid even if it would cause the gas used to exceed the limit.
	_vmiBurnGas := func(amount msg.GasAmount) (vmiBurnGasOK bool) {
		vmiGasUsedPre := vmiGasUsed
		vmiBurnGasOK = _vmiAllocGas(amount)
		if !vmiBurnGasOK {
			vmiGasRemaining = msg.GasAmount_Zero()
			vmiGasUsed = vmiGasUsedPre.Add(amount)
		}
		return
	}

	ok := _vmiBurnGas(gascost.OnChainMessage(onChainMessageSize))
	if !ok {
		// Invalid message; insufficient gas limit to pay for the on-chain message size.
		_applyError(inTree, exitcode.OutOfGas, SenderResolveSpec_Invalid)
		return
	}

	fromActor, ok := inTree.GetActor(message.From())
	if !ok {
		// Execution error; sender does not exist at time of message execution.
		_applyError(inTree, exitcode.ActorNotFound, SenderResolveSpec_Invalid)
		return
	}

	// make sure this is the right message order for fromActor
	if message.CallSeqNum() != fromActor.CallSeqNum() {
		_applyError(inTree, exitcode.InvalidCallSeqNum, SenderResolveSpec_Invalid)
		return
	}

	// Check sender balance.
	gasLimitCost := _gasToFIL(message.GasLimit(), message.GasPrice())
	totalCost := message.Value() + actor.TokenAmount(gasLimitCost)
	if fromActor.Balance() < totalCost {
		// Execution error; sender does not have sufficient funds to pay for the gas limit.
		_applyError(inTree, exitcode.InsufficientFunds_System, SenderResolveSpec_Invalid)
		return
	}

	// Check receiving actor and method exists, possibly creating an account actor.
	// If this succeeds, compTreePreSend will become a state snapshot which includes
	// implicit receiver creation, the sender (rather than miner) paying gas, and the sender's
	// CallSeqNum being incremented; at least that much state change will be persisted even if the
	// method invocation subsequently fails.
	compTreePreSend, ok := _ensureReceiver(inTree, message)
	if !ok {
		// Execution error; receiver actor does not exist (and could not be implicitly created)
		// at time of message execution.
		_applyError(inTree, exitcode.ActorNotFound, SenderResolveSpec_Invalid)
		return
	}

	// Deduct gas limit funds from sender.
	// (This should always succeed, due to the sender balance check above.)
	compTreePreSend = _withTransferFundsAssert(
		compTreePreSend, message.From(), addr.BurntFundsActorAddr, gasLimitCost)

	// Increment sender CallSeqNum.
	compTreePreSend = compTreePreSend.Impl().WithIncrementedCallSeqNum_Assert(message.From())

	sendRet, compTreePostSend := _applyMessageInternal(compTreePreSend, message, vmiGasRemaining, minerAddr)

	ok = _vmiBurnGas(sendRet.GasUsed())
	if !ok {
		panic("Interpreter error: runtime execution used more gas than provided")
	}

	ok = _vmiAllocGas(gascost.OnChainReturnValue(sendRet.ReturnValue()))
	if !ok {
		// Insufficient gas remaining to cover the on-chain return value; proceed as in the case
		// of method execution failure.
		_applyError(compTreePreSend, exitcode.OutOfGas, SenderResolveSpec_OK)
		return
	}

	compTreeRet := compTreePreSend
	if sendRet.ExitCode().AllowsStateUpdate() {
		compTreeRet = compTreePostSend
	}

	_applyReturn(
		compTreeRet, vmr.InvocOutput_Make(sendRet.ReturnValue()), sendRet.ExitCode(), SenderResolveSpec_OK)
	return
}

// Ensures a message's receiving actor exists in the state tree.
// If it doesn't, and the message is a simple value send to a pubkey-style address,
// creates the receiver as an account actor in the returned state.
func _ensureReceiver(tree st.StateTree, message msg.UnsignedMessage) (st.StateTree, bool) {
	_, found := tree.GetActor(message.To())

	if found {
		return tree, true
	}
	if !message.To().IsKeyType() {
		// Don't implicitly create an account actor for an address without an associated key.
		return tree, false
	}
	if message.Method() != actor.MethodSend {
		// Don't implicitly create account actor if message was expecting to invoke a method.
		return tree, false
	}
	// TODO Create a new account actor via the InitActor, which will receive a new ID address
	// and be placed in the state tree under that address.
	// The init actor will maintain a map from pubkey address to ID address.
	return tree, true
}

func _applyMessageBuiltinAssert(tree st.StateTree, message msg.UnsignedMessage, topLevelBlockWinner addr.Address) st.StateTree {
	Assert(message.From().Equals(addr.SystemActorAddr))
	tree = tree.Impl().WithIncrementedCallSeqNum_Assert(message.From())

	retReceipt, retTree := _applyMessageInternal(tree, message, message.GasLimit(), topLevelBlockWinner)
	if retReceipt.ExitCode() != exitcode.OK() {
		panic("internal message application failed")
	}

	return retTree
}

func _applyMessageInternal(
	tree st.StateTree, message msg.UnsignedMessage, gasRemainingInit msg.GasAmount, topLevelBlockWinner addr.Address) (
	vmr.MessageReceipt, st.StateTree) {

	fromActor, ok := tree.GetActor(message.From())
	Assert(ok)

	rt := vmr.VMContext_Make(
		message.From(),
		topLevelBlockWinner,
		fromActor.CallSeqNum(),
		actor.CallSeqNum(0),
		tree,
		message.From(),
		actor.TokenAmount(0),
		gasRemainingInit,
	)

	return rt.SendToplevelFromInterpreter(
		vmr.InvocInput_Make(
			message.To(),
			message.Method(),
			message.Params(),
			message.Value(),
		),
	)
}

func _withTransferFundsAssert(tree st.StateTree, from addr.Address, to addr.Address, amount actor.TokenAmount) st.StateTree {
	// TODO: assert amount nonnegative
	retTree, err := tree.Impl().WithFundsTransfer(from, to, amount)
	if err != nil {
		panic("Interpreter error: insufficient funds (or transfer error) despite checks")
	}
	return retTree
}

func _gasToFIL(gas msg.GasAmount, price msg.GasPrice) actor.TokenAmount {
	IMPL_FINISH()
	panic("") // BigInt arithmetic
	// return actor.TokenAmount(util.UVarint(gas) * util.UVarint(price))
}

// Builds a message for paying block reward to a miner's owner.
func _makeBlockRewardMessage(state st.StateTree, minerAddr addr.Address, penalty actor.TokenAmount) msg.UnsignedMessage {
	params := make([]util.Serialization, 2)
	params[0] = addr.Serialize_Address(minerAddr)
	params[1] = actor.Serialize_TokenAmount(penalty)

	sysActor, ok := state.GetActor(addr.SystemActorAddr)
	Assert(ok)
	return &msg.UnsignedMessage_I{
		From_:       addr.SystemActorAddr,
		To_:         addr.RewardActorAddr,
		Method_:     sysactors.Method_RewardActor_AwardBlockReward,
		Params_:     params,
		CallSeqNum_: sysActor.CallSeqNum(),
		Value_:      0,
		GasPrice_:   0,
		GasLimit_:   msg.GasAmount_SentinelUnlimited(),
	}
}

// Builds a message for submitting ElectionPost on behalf of a miner actor.
func _makeElectionPoStMessage(state st.StateTree, minerActorAddr addr.Address) msg.UnsignedMessage {
	// TODO: determine parameters necessary for this message.
	params := make([]util.Serialization, 0)

	sysActor, ok := state.GetActor(addr.SystemActorAddr)
	Assert(ok)
	return &msg.UnsignedMessage_I{
		From_:       addr.SystemActorAddr,
		To_:         minerActorAddr,
		Method_:     storage_mining.Method_StorageMinerActor_ProcessVerifiedElectionPoSt,
		Params_:     params,
		CallSeqNum_: sysActor.CallSeqNum(),
		Value_:      0,
		GasPrice_:   0,
		GasLimit_:   msg.GasAmount_SentinelUnlimited(),
	}
}

// Builds a message for invoking the cron actor tick.
func _makeCronTickMessage(state st.StateTree, minerActorAddr addr.Address) msg.UnsignedMessage {
	sysActor, ok := state.GetActor(addr.SystemActorAddr)
	Assert(ok)
	return &msg.UnsignedMessage_I{
		From_:       addr.SystemActorAddr,
		To_:         addr.CronActorAddr,
		Method_:     sysactors.Method_CronActor_EpochTick,
		Params_:     nil,
		CallSeqNum_: sysActor.CallSeqNum(),
		Value_:      0,
		GasPrice_:   0,
		GasLimit_:   msg.GasAmount_SentinelUnlimited(),
	}
}

func _msgCID(msg msg.UnsignedMessage) ipld.CID {
	panic("TODO")
}

vm/interpreter/registry

package interpreter

import "errors"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import market "github.com/filecoin-project/specs/systems/filecoin_markets/storage_market"
import spc "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"
import storage_market "github.com/filecoin-project/specs/systems/filecoin_markets/storage_market"
import sysactors "github.com/filecoin-project/specs/systems/filecoin_vm/sysactors"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

var (
	ErrActorNotFound = errors.New("Actor Not Found")
)

// CodeIDs for system actors
var (
	InitActorCodeID           = actor.CodeID_Make_Builtin(actor.BuiltinActorID_Init)
	CronActorCodeID           = actor.CodeID_Make_Builtin(actor.BuiltinActorID_Cron)
	AccountActorCodeID        = actor.CodeID_Make_Builtin(actor.BuiltinActorID_Account)
	StoragePowerActorCodeID   = actor.CodeID_Make_Builtin(actor.BuiltinActorID_StoragePower)
	StorageMinerActorCodeID   = actor.CodeID_Make_Builtin(actor.BuiltinActorID_StorageMiner)
	StorageMarketActorCodeID  = actor.CodeID_Make_Builtin(actor.BuiltinActorID_StorageMarket)
	PaymentChannelActorCodeID = actor.CodeID_Make_Builtin(actor.BuiltinActorID_PaymentChannel)
)

var staticActorCodeRegistry = &actorCodeRegistry{
	// The map must be initialized here; registering into a nil map would panic.
	code: make(map[actor.CodeID]vmr.ActorCode),
}

type actorCodeRegistry struct {
	code map[actor.CodeID]vmr.ActorCode
}

func (r *actorCodeRegistry) _registerActor(id actor.CodeID, actor vmr.ActorCode) {
	r.code[id] = actor
}

func (r *actorCodeRegistry) _loadActor(id actor.CodeID) (vmr.ActorCode, error) {
	a, ok := r.code[id]
	if !ok {
		return nil, ErrActorNotFound
	}
	return a, nil
}

func RegisterActor(id actor.CodeID, actor vmr.ActorCode) {
	staticActorCodeRegistry._registerActor(id, actor)
}

func LoadActor(id actor.CodeID) (vmr.ActorCode, error) {
	return staticActorCodeRegistry._loadActor(id)
}

// init is called by Go during program initialization; using it here is an
// idiomatic way to populate the registry. Implementations may approach this
// however they wish. The point is to initialize a static registry with
// built-in pure types that hold the code for each actor. Once there is
// a way to load code from the StateTree, use that instead.
func init() {
	_registerBuiltinActors()
}

func _registerBuiltinActors() {
	// TODO

	cron := &sysactors.CronActorCode_I{}

	RegisterActor(InitActorCodeID, &sysactors.InitActorCode_I{})
	RegisterActor(CronActorCodeID, cron)
	RegisterActor(AccountActorCodeID, &sysactors.AccountActorCode_I{})
	RegisterActor(StoragePowerActorCodeID, &spc.StoragePowerActorCode_I{})
	RegisterActor(StorageMarketActorCodeID, &market.StorageMarketActorCode_I{})

	// wire in CRON actions.
	// TODO: there's probably a better place to put this, but for now, do it here.
	cron.Entries_ = append(cron.Entries_, &sysactors.CronTableEntry_I{
		ToAddr_:    addr.StoragePowerActorAddr,
		MethodNum_: spc.Method_StoragePowerActor_EpochTick,
	})

	cron.Entries_ = append(cron.Entries_, &sysactors.CronTableEntry_I{
		ToAddr_:    addr.StorageMarketActorAddr,
		MethodNum_: storage_market.Method_StorageMarketActor_EpochTick,
	})
}

Blockchain

The Filecoin Blockchain is a distributed virtual machine that achieves consensus, processes messages, accounts for storage, and maintains security in the Filecoin Protocol. It is the main interface linking various actors in the Filecoin system.

It includes:

  • A Message Pool subsystem that nodes use to track and propagate messages related to the storage market throughout a gossip network.
  • A VM - Virtual Machine subsystem used to interpret and execute messages in order to update system state.
  • A subsystem which manages the creation and maintenance of state trees (the system state) deterministically generated by the VM from a given subchain.
  • A subsystem that tracks and propagates validated message blocks, maintaining sets of candidate chains on which the miner may mine and running syntactic validation on incoming blocks.
  • A Storage Power Consensus subsystem which tracks storage state for a given chain and helps the blockchain system choose subchains to extend and blocks to include in them.

And also:

  • A Chain Manager – which maintains a given chain’s state, providing facilities to other blockchain subsystems which will query state about the latest chain in order to run, and ensuring incoming blocks are semantically validated before inclusion into the chain.
  • A Block Producer – which is called in the event of a successful leader election in order to produce a new block that will extend the current heaviest chain before forwarding it to the syncer for propagation.

At a high level, the Filecoin blockchain grows through successive rounds of leader election in which a number of miners are elected to generate a block, whose inclusion in the chain earns them block rewards. Filecoin’s blockchain runs on storage power: its consensus algorithm, by which miners agree on which subchain to mine, is predicated on the amount of storage backing that subchain. The Storage Power Consensus subsystem maintains a Power Table that tracks the amount of storage that storage miner actors have contributed to the network through Sector commitments and Proofs of Spacetime.

Most of the functions of the Filecoin blockchain system are detailed in the code below.

Blocks

Block


The Block is a unit of the Filecoin blockchain.

A block header contains information relevant to a particular point in time over which the network may achieve consensus.

Note: A block is functionally the same as a block header in the Filecoin protocol. While a block header contains Merkle links to the full system state, messages, and message receipts, a block can be thought of as the full set of this information (not just the Merkle roots, but rather the full data of the state tree, message tree, receipts tree, etc.). Because a full block is quite large, our chain consists of block headers rather than full blocks. We often use the terms block and block header interchangeably.
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"

import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

type ChainWeight UVarint
type ChainEpoch UVarint
type MessageReceipt util.Bytes

// On-chain representation of a block header.
type BlockHeader struct {
    // Chain linking
    Parents                [&BlockHeader]
    ParentWeight           ChainWeight
    // State
    ParentState            &st.StateTree
    ParentMessageReceipts  &[&MessageReceipt]  // array-mapped trie ref

    // Consensus things
    Epoch                  ChainEpoch
    Timestamp              clock.UnixTime
    Ticket

    Miner                  addr.Address
    ElectionPoStOutput     ElectionPoStVerifyInfo

    // Proposed update
    Messages               &TxMeta
    BLSAggregate           filcrypto.Signature

    // Signatures
    Signature              filcrypto.Signature

    //	SerializeSigned()            []byte
    //	ComputeUnsignedFingerprint() []
}

type TxMeta struct {
    BLSMessages   &[&msg.UnsignedMessage]  // array-mapped trie
    SECPMessages  &[&msg.SignedMessage]  // array-mapped trie
}

// Internal representation of a full block, with all messages.
type Block struct {
    Header        BlockHeader
    BLSMessages   [msg.UnsignedMessage]
    SECPMessages  [msg.SignedMessage]
}

// HACK: Duplicated from posting.id (actual source)
type ElectionPoStVerifyInfo struct {
    Candidates  [PoStCandidate]
    Randomness  PoStRandomness
    Proof       PoStProof
}

type ChallengeTicketsCommitment struct {}  // see sector
type PoStCandidate struct {}  // see sector
type PoStRandomness struct {}  // see sector
type PoStProof struct {}
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type Ticket struct {
    VRFResult  filcrypto.VRFResult

    Output     Bytes                @(cached)
    DrawRandomness(round ChainEpoch) Bytes
    ValidateSyntax() bool
    Verify(
        input           Bytes
        pk              filcrypto.VRFPublicKey
        minerActorAddr  addr.Address
    ) bool
}

type BytesAmount UVarint
type StoragePower BytesAmount

type TicketProductionSeedInput struct {
    PastTicket  util.Randomness
    MinerAddr   addr.Address
}

type TicketDrawingSeedInput struct {
    PastTicket  util.Randomness
    Epoch       ChainEpoch
}
Block syntax validation

Syntax validation refers to validation that may be performed on a block and its messages without reference to outside information such as the parent state tree.

An invalid block must not be transmitted or referenced as a parent.

A syntactically valid block header must decode into fields matching the type definition below.

A syntactically valid header must have:

  • between 1 and 5*ec.ExpectedLeaders Parents CIDs if Epoch is greater than zero (else empty Parents),
  • a non-negative ParentWeight,
  • a Miner address which is an ID-address,
  • a non-negative Epoch,
  • a positive Timestamp,
  • a Ticket with non-empty VRFResult,
  • ElectionPoStOutput containing:
    • a Candidates array with between 1 and EC.ExpectedLeaders values (inclusive),
    • a non-empty PoStRandomness field,
    • a non-empty Proof field.

A syntactically valid full block must have:

  • all referenced messages syntactically valid,
  • all referenced parent receipts syntactically valid,
  • the sum of the serialized sizes of the block header and included messages is no greater than block.BlockMaxSize,
  • the sum of the gas limit of all explicit messages is no greater than block.BlockGasLimit.

Note that validation of the block signature requires access to the miner worker address and public key from the parent tipset state, so signature validation forms part of semantic validation. Similarly, message signature validation requires lookup of the public key associated with each message’s From account actor in the block’s parent state.
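As a concrete sketch, the header rules above can be expressed as a validation function. The types and field names below are simplified stand-ins for the spec’s actual definitions, and `expectedLeaders` is an illustrative constant, not a normative value:

```go
package main

import "fmt"

// Simplified stand-in for the on-chain block header; fields are illustrative.
type BlockHeaderSyn struct {
	Parents        []string // parent CIDs
	ParentWeight   int64
	Epoch          int64
	Timestamp      int64
	TicketVRF      []byte
	PoStCandidates int
	PoStRandomness []byte
	PoStProof      []byte
}

const expectedLeaders = 5 // stand-in for EC.ExpectedLeaders

// checkHeaderSyntax applies the syntactic rules listed above; it needs no
// state outside the header itself.
func checkHeaderSyntax(h BlockHeaderSyn) error {
	if h.Epoch > 0 {
		if len(h.Parents) < 1 || len(h.Parents) > 5*expectedLeaders {
			return fmt.Errorf("parent count %d out of range", len(h.Parents))
		}
	} else if len(h.Parents) != 0 {
		return fmt.Errorf("genesis block must have empty parents")
	}
	if h.ParentWeight < 0 {
		return fmt.Errorf("negative parent weight")
	}
	if h.Epoch < 0 {
		return fmt.Errorf("negative epoch")
	}
	if h.Timestamp <= 0 {
		return fmt.Errorf("non-positive timestamp")
	}
	if len(h.TicketVRF) == 0 {
		return fmt.Errorf("empty ticket VRF result")
	}
	if h.PoStCandidates < 1 || h.PoStCandidates > expectedLeaders {
		return fmt.Errorf("candidate count %d out of range", h.PoStCandidates)
	}
	if len(h.PoStRandomness) == 0 || len(h.PoStProof) == 0 {
		return fmt.Errorf("empty PoSt randomness or proof")
	}
	return nil
}

func main() {
	h := BlockHeaderSyn{
		Parents: []string{"bafy..."}, ParentWeight: 100, Epoch: 1,
		Timestamp: 1576000000, TicketVRF: []byte{1},
		PoStCandidates: 1, PoStRandomness: []byte{2}, PoStProof: []byte{3},
	}
	fmt.Println(checkHeaderSyntax(h) == nil) // prints true
}
```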

Block semantic validation

Semantic validation refers to validation that requires reference to information outside the block header and messages themselves, in particular the parent tipset and state on which the block is built.

A semantically valid block must have:

  • Parents listed in lexicographic order of their header’s Ticket,
  • Parents all reference valid blocks and form a valid tipset,
  • ParentState matching the state tree produced by executing the parent tipset’s messages (as defined by the VM interpreter) against that tipset’s parent state,
  • ParentMessageReceipts identifying the receipt list produced by parent tipset execution, with one receipt for each unique message from the parent tipset,
  • ParentWeight matching the weight of the chain up to and including the parent tipset,
  • Epoch greater than that of its parents, and not in the future according to the node’s local clock reading of the current epoch,
  • Miner that is active in the storage power table in the parent tipset state,
  • a Ticket derived from the minimum ticket from the parent tipset’s block headers,
    • Ticket.VRFResult validly signed by the Miner actor’s worker account public key,
  • ElectionPoStOutput yielding winning partial tickets that were generated validly,
    • ElectionPoSt.Randomness is well formed and appropriately drawn from a past tipset according to the PoStLookback,
    • ElectionPoSt.Proof is a valid proof verifying the generation of the ElectionPoSt.Candidates from the Miner’s eligible sectors,
    • ElectionPoSt.Candidates contains well formed PoStCandidates each of which has a PartialTicket yielding a winning ChallengeTicket in Expected Consensus.
  • a Timestamp in seconds lying within the quantized epoch window implied by the genesis block’s timestamp and the block’s Epoch,
  • all SECP messages correctly signed by their sending actor’s worker account key,
  • a BLSAggregate signature that signs the array of CIDs of the BLS messages referenced by the block with their sending actor’s key.
  • a valid Signature over the block header’s fields from the block’s Miner actor’s worker account public key.

There is no semantic validation of the messages included in a block beyond validation of their signatures. If all messages included in a block are syntactically valid then they may be executed and produce a receipt.

A chain sync system may perform syntactic and semantic validation in stages, running the cheaper syntactic checks first, in order to minimize unnecessary resource expenditure.
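A minimal sketch of such staged validation, with stand-in types (the actual checks are those enumerated above); the point is only that the cheap, self-contained syntactic stage gates the state-dependent semantic stage:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in for a block header with precomputed check outcomes; illustrative only.
type header struct{ syntaxOK, semanticsOK bool }

var errSyntax = errors.New("syntactically invalid")
var errSemantics = errors.New("semantically invalid")

// validateStaged runs the syntactic checks first and only performs semantic
// validation (which requires loading the parent tipset's state) if they pass,
// avoiding wasted state computation on malformed blocks.
func validateStaged(h header, loadParentState func() error) error {
	if !h.syntaxOK {
		return errSyntax // reject before touching the parent state
	}
	if err := loadParentState(); err != nil {
		return err
	}
	if !h.semanticsOK {
		return errSemantics
	}
	return nil
}

func main() {
	loads := 0
	load := func() error { loads++; return nil }
	_ = validateStaged(header{syntaxOK: false}, load)
	// Parent state is never loaded for a syntactically invalid block.
	fmt.Println(loads) // prints 0
}
```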

Tipset

Expected Consensus probabilistically elects multiple leaders in each epoch, meaning a Filecoin chain may contain zero or multiple blocks at each epoch (one per elected miner). Blocks from the same epoch are assembled into tipsets. The VM interpreter modifies the Filecoin state tree by executing all messages in a tipset (after de-duplication of identical messages included in more than one block).

Each block references a parent tipset and validates that tipset’s state, while proposing messages to be included for the current epoch. The state to which a new block’s messages apply cannot be known until that block is incorporated into a tipset. It is thus not meaningful to execute the messages from a single block in isolation: a new state tree is only known once all messages in that block’s tipset are executed.

A valid tipset contains a non-empty collection of blocks that have distinct miners and all specify identical:

  • Epoch
  • Parents
  • ParentWeight
  • StateRoot
  • ReceiptsRoot

The blocks in a tipset are canonically ordered by the lexicographic ordering of the bytes in each block’s ticket, breaking ties with the bytes of the CID of the block itself.
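The canonical ordering above can be sketched as a comparison function. The `blk` type is a simplified stand-in; a real implementation compares the serialized ticket bytes and block CID bytes:

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// Minimal stand-in for a block header; fields are illustrative.
type blk struct {
	Ticket []byte
	CID    []byte
}

// canonicalOrder sorts blocks by the lexicographic order of their ticket
// bytes, breaking ties with the bytes of the block's CID.
func canonicalOrder(blocks []blk) {
	sort.Slice(blocks, func(i, j int) bool {
		if c := bytes.Compare(blocks[i].Ticket, blocks[j].Ticket); c != 0 {
			return c < 0
		}
		return bytes.Compare(blocks[i].CID, blocks[j].CID) < 0
	})
}

func main() {
	bs := []blk{
		{Ticket: []byte{2}, CID: []byte{9}},
		{Ticket: []byte{1}, CID: []byte{7}},
		{Ticket: []byte{1}, CID: []byte{3}}, // ties with the block above; CID decides
	}
	canonicalOrder(bs)
	for _, b := range bs {
		fmt.Println(b.Ticket[0], b.CID[0])
	}
	// prints:
	// 1 3
	// 1 7
	// 2 9
}
```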

Due to network propagation delay, it is possible for a miner in epoch N+1 to omit valid blocks mined at epoch N from their parent tipset. This does not make the newly generated block invalid; it does, however, reduce its weight and its chances of being part of the canonical chain in the protocol as defined by EC’s Chain Selection function.

Block producers are expected to coordinate how they select messages for inclusion in blocks in order to avoid duplicates and thus maximize their expected earnings from transaction fees (see Message Pool).

import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"

type Tipset struct {
    BlockCIDs           [&block.BlockHeader]
    Blocks              [block.BlockHeader]

    Has(b block.Block)  bool                  @(cached)
    Parents             Tipset                @(cached)
    StateTree           st.StateTree          @(cached)
    Weight              block.ChainWeight     @(cached)
    Epoch               block.ChainEpoch      @(cached)

    // Returns the largest timestamp from the tipset's blocks.
    LatestTimestamp()   clock.UnixTime        @(cached)
    // Returns the lexicographically smallest ticket from the tipset's blocks.
    MinTicket()         block.Ticket          @(cached)
}

Chain

A Chain is a sequence of tipsets, linked together. It is a single history of execution in the Filecoin blockchain.

import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type Chain struct {
    HeadTipset Tipset

    TipsetAtEpoch(epoch block.ChainEpoch) Tipset
    RandomnessAtEpoch(epoch block.ChainEpoch) Bytes

    LatestCheckpoint() block.ChainEpoch
}

// Checkpoint represents a particular block to use as a trust anchor
// in Consensus and ChainSync
//
// Note: a Block uniquely identifies a tipset (via its parents).
// From here, we may consider many tipsets that _include_ Block,
// but we must indeed include Block and not consider tipsets that
// fork from Block.Parents but do not include Block.
type Checkpoint &block.BlockHeader

// SoftCheckpoint is a checkpoint that Filecoin nodes may use as they
// gain confidence in the blockchain. It is a unilateral checkpoint,
// and derived algorithmically from notions of probabilistic consensus
// and finality.
type SoftCheckpoint Checkpoint

// TrustedCheckpoint is a Checkpoint that is trusted by the broader
// Filecoin Network. These TrustedCheckpoints are arrived at through
// the higher level economic consensus that surrounds Filecoin.
// TrustedCheckpoints:
// - MUST be at least 200,000 blocks old (>1mo)
// - MUST be at least
// - MUST be widely known and accepted
// - MAY ship with Filecoin software implementations
// - MAY be propagated through other side-channel systems
// For more, see the Checkpoints section.
// TODO: consider renaming as EconomicCheckpoint
type TrustedCheckpoint Checkpoint
package chain

import (
	"github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	"github.com/filecoin-project/specs/util"
)

// Returns the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) TipsetAtEpoch(epoch block.ChainEpoch) Tipset {
	current := chain.HeadTipset()
	for current.Epoch() > epoch {
		current = current.Parents()
	}

	return current
}

// Draws randomness from the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) RandomnessAtEpoch(epoch block.ChainEpoch) util.Bytes {
	ts := chain.TipsetAtEpoch(epoch)
	return ts.MinTicket().DrawRandomness(epoch)
}

Chain Manager

The Chain Manager is a central component in the blockchain system. It tracks and updates competing subchains received by a given node in order to select the appropriate blockchain head: the latest block of the heaviest subchain it is aware of in the system.

In so doing, the chain manager is the central subsystem that handles bookkeeping for numerous other systems in a Filecoin node and exposes convenience methods for use by those systems, enabling systems to sample randomness from the chain for instance, or to see which block has been finalized most recently.

The chain manager interfaces and functions are included here, but we expand on important details below for clarity.

Chain Expansion
Incoming block reception

Once a block has been received and passes syntactic and semantic validation, it must be added to the local datastore, regardless of whether it is understood as the best tip at this point. Future blocks from other miners may be mined on top of it, and in that case we will want to have it around to avoid refetching.

To make certain validation checks simpler, blocks should be indexed by height and by parent set. That way, sets of blocks with a given height and common parents may be quickly queried. It may also be useful to compute and cache the resultant aggregate state of blocks in these sets; this saves extra state computation when checking which state root to start a block at when it has multiple parents.
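One possible shape for such an index, using an illustrative string key over the epoch and the sorted parent CIDs (the encoding is a local choice, not anything mandated by the spec):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// indexKey combines an epoch with a canonical encoding of the parent set,
// so blocks sharing a height and common parents map to the same bucket.
// Sorting the parent CIDs makes the key independent of their arrival order.
func indexKey(epoch int64, parentCIDs []string) string {
	sorted := append([]string(nil), parentCIDs...)
	sort.Strings(sorted)
	return fmt.Sprintf("%d|%s", epoch, strings.Join(sorted, ","))
}

// blockIndex groups block CIDs by (epoch, parent set).
type blockIndex map[string][]string

func (ix blockIndex) add(epoch int64, parents []string, blockCID string) {
	k := indexKey(epoch, parents)
	ix[k] = append(ix[k], blockCID)
}

func main() {
	ix := blockIndex{}
	ix.add(10, []string{"p1", "p2"}, "b1")
	ix.add(10, []string{"p2", "p1"}, "b2") // same parent set, different order
	fmt.Println(len(ix[indexKey(10, []string{"p1", "p2"})])) // prints 2
}
```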

Chain selection is a crucial component of how the Filecoin blockchain works. Every chain has an associated weight accounting for the number of blocks mined on it and hence the power (storage) it tracks. It is always preferable to mine atop a heavier tipset rather than a lighter one. While a miner switching to the heavier chain may forgo block rewards earned on the lighter one, the lighter chain is likely to be abandoned by other miners as they converge on a final chain, forfeiting any block rewards earned on it. For more on this, see chain selection in the Expected Consensus spec.

However, ahead of finality, a given subchain may be abandoned in favor of another, heavier one mined in a given round. In order to rapidly adapt to this, the chain manager must maintain and update all subchains being considered up to finality.

That is, for every incoming block, even if the incoming block is not added to the current heaviest tipset, the chain manager should add it to the appropriate subchain it is tracking, or keep track of it independently until either:

  • it is able to do so, through the reception of another block in that subchain, or
  • it is able to discard it, because that block was mined before finality.

We give an example of how this could work in the block reception algorithm.

ChainTipsManager

The Chain Tips Manager is a subcomponent of Filecoin consensus that is technically up to the implementer, but since the pseudocode in previous sections references it, it is documented here for clarity.

The Chain Tips Manager is responsible for tracking all live tips of the Filecoin blockchain, and tracking what the current ‘best’ tipset is.

// Returns the ticket that is at round 'r' in the chain behind 'head'
func TicketFromRound(head Tipset, r Round) {}

// Returns the tipset that contains round r (Note: multiple rounds' worth of tickets may exist within a single block due to losing tickets being added to the eventually successfully generated block)
func TipsetFromRound(head Tipset, r Round) {}

// GetBestTipset returns the best known tipset. If the 'best' tipset hasn't changed, then this
// will return the previous best tipset.
func GetBestTipset()

// Adds the losing ticket to the chaintips manager so that blocks can be mined on top of it
func AddLosingTicket(parent Tipset, t Ticket)

Block Producer

Mining Blocks

A miner registered with the storage power actor may begin generating and checking election tickets if it has proven storage meeting the minimum miner size threshold requirement.

In order to do so, the miner must be running chain validation, and be keeping track of the most recent blocks received. A miner’s new block will be based on parents from a previous epoch.

For additional details around how consensus works in Filecoin, see Expected Consensus. For the purposes of this section, there is a consensus protocol (Expected Consensus) that guarantees a fair process for determining what blocks have been generated in a round, whether a miner is eligible to mine a block itself, and other rules pertaining to the production of some artifacts required of valid blocks (e.g. Tickets, ElectionPoSt).

Mining Cycle

At any height H, there are three possible situations:

  • The miner is eligible to mine a block: they produce their block and propagate it. They then resume mining at the next height H+1.
  • The miner is not eligible to mine a block but has received blocks: they form a Tipset with them and resume mining at the next height H+1.
  • The miner is not eligible to mine a block and has received no blocks: prompted by their clock they run leader election again, incrementing the epoch number.

This process is repeated until either a winning ticket is found (and block published) or a new valid Tipset comes in from the network.

Let’s illustrate this with an example.

Miner M is mining at epoch H. Heaviest tipset at H-1 is {B0}

  • New Epoch:
    • M produces a ticket at H, from B0’s ticket (the min ticket at H-1)
    • M draws the ticket from height H-K to generate a set of ElectionPoSt partial Tickets and uses them to run leader election
    • If M has no winning tickets
    • M has not heard about other blocks on the network.
  • New Epoch:
    • Height is incremented to H + 1.
    • M generates a new ElectionProof with this new epoch number.
    • If M has winning tickets
    • M generates a block B1 using the new ElectionProof and the ticket drawn last epoch.
    • M has received blocks B2, B3 from the network with the same parents and same height.
    • M forms a tipset {B1, B2, B3}

Anytime a miner receives new valid blocks, it should evaluate what is the heaviest Tipset it knows about and mine atop it.
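The three situations above amount to a small per-epoch decision procedure. The sketch below is illustrative only; `StepEpoch` and its outcome labels are hypothetical names, not part of the protocol:

```go
package main

import "fmt"

// Outcome is a hypothetical label for what the miner does in an epoch.
type Outcome string

const (
	ProducedBlock Outcome = "produce and propagate block"
	FormedTipset  Outcome = "form tipset from received blocks"
	RanElection   Outcome = "re-run leader election"
)

// StepEpoch maps the three situations at height H to the action taken.
func StepEpoch(eligible bool, receivedBlocks int) Outcome {
	switch {
	case eligible:
		// Case 1: produce the block, then resume mining at H+1.
		return ProducedBlock
	case receivedBlocks > 0:
		// Case 2: form a Tipset with the received blocks, resume at H+1.
		return FormedTipset
	default:
		// Case 3: no block won and none received; prompted by the clock,
		// increment the epoch number and run leader election again.
		return RanElection
	}
}

func main() {
	fmt.Println(StepEpoch(false, 0)) // prints: re-run leader election
}
```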

Timing
Mining Cycle Timing (open in new tab)

The mining cycle relies on receiving and producing blocks concurrently. The sequence of these events in time is given by the timing diagram above. The upper row represents the conceptual consumption channel consisting of successive receiving periods Rx during which nodes validate and select blocks as chain heads. The lower row is the conceptual production channel made up of a period of mining M followed by a period of transmission Tx. The lengths of the periods are not to scale.

Blocks are received and validated during Rx up to the end of the epoch. At the beginning of the next epoch, the heaviest tipset is computed from the blocks received during Rx, used as the head to build on during M. If mining is successful a block is transmitted during Tx. The epoch boundaries are as shown.

In a fully synchronized network most of period Rx does not see any network traffic, only the period lined up with Tx. In practice we expect blocks from previous epochs to propagate during the remainder of Rx. We also expect differences in operator mining time to cause additional variance.

This sequence of events applies only when the node is in the CHAIN_FOLLOW syncing mode. Nodes in other syncing modes do not mine blocks.

Block Creation

Producing a block for epoch H requires computing a tipset for epoch H-1 (or possibly a prior epoch, if no blocks were received for that epoch). Using the state produced by this tipset, a miner can scratch winning ElectionPoSt ticket(s). Armed with the requisite ElectionPoStOutput, as well as a new randomness ticket generated in this epoch, a miner can produce a new block.

See VM Interpreter - Message Invocation (Outside VM) for details of parent tipset evaluation, and Block for constraints on valid block header values.

To create a block, the eligible miner must compute a few fields:

  • Parents - the CIDs of the parent tipset’s blocks.
  • ParentWeight - the parent chain’s weight (see Chain Selection).
  • ParentState - the CID of the state root from the parent tipset state evaluation (see the VM Interpreter - Message Invocation (Outside VM)).
  • ParentMessageReceipts - the CID of the root of an AMT containing receipts produced while computing ParentState.
  • Epoch - the block’s epoch, derived from the Parents epoch and the number of epochs it took to generate this block.
  • Timestamp - a Unix timestamp, in seconds, generated at block creation.
  • Ticket - a new ticket generated from that in the prior epoch (see Ticket Generation).
  • Miner - the block producer’s miner actor address.
  • ElectionPoStVerifyInfo - The byproduct of running an ElectionPoSt yielding requisite on-chain information (see Election PoSt), namely:
    • An array of PoStCandidate objects, all of which include a winning partial ticket used to run leader election.
    • PoStRandomness used to challenge the miner’s sectors and generate the partial tickets.
    • A PoStProof snark output to prove that the partial tickets were correctly generated.
  • Messages - The CID of a TxMeta object containing messages proposed for inclusion in the new block:
    • Select a set of messages from the mempool to include in the block, satisfying block size and gas limits
    • Separate the messages into BLS signed messages and secpk signed messages
    • TxMeta.BLSMessages: The CID of the root of an AMT comprising the bare UnsignedMessages
    • TxMeta.SECPMessages: the CID of the root of an AMT comprising the SignedMessages
  • BLSAggregate - The aggregated signature of all messages in the block that used BLS signing.
  • Signature - A signature with the miner’s worker account private key (must also match the ticket signature) over the block header’s serialized representation (with empty signature).

Note that the messages to be included in a block need not be evaluated in order to produce a valid block. A miner may wish to speculatively evaluate the messages anyway in order to optimize for including messages which will succeed in execution and pay the most gas.

The block reward is not evaluated when producing a block. It is paid when the block is included in a tipset in the following epoch.

The block’s signature ensures integrity of the block after propagation, since unlike many PoW blockchains, a winning ticket is found independently of block generation.
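To illustrate the signing rule above (the signature covers the serialized header with an empty signature field), here is a much-reduced sketch. The real BlockHeader has many more fields, is CBOR-serialized, and is signed with the worker's BLS or secp key; the JSON serialization and hash-based stand-in below are assumptions for illustration only:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// Hypothetical, much-reduced header for illustration.
type BlockHeader struct {
	Miner     string
	Epoch     int
	Parents   []string
	Signature []byte `json:"Signature,omitempty"`
}

// SigningBytes serializes the header with the signature field emptied,
// matching the rule that the signature covers the header sans signature.
func SigningBytes(h BlockHeader) ([]byte, error) {
	h.Signature = nil // h is a copy; the caller's header is untouched
	return json.Marshal(h)
}

// sign is a stand-in for the worker key's real signature scheme.
func sign(msg []byte) []byte {
	d := sha256.Sum256(msg)
	return d[:]
}

func main() {
	h := BlockHeader{Miner: "t01234", Epoch: 42, Parents: []string{"bafyA"}}
	b, _ := SigningBytes(h)
	h.Signature = sign(b)
	fmt.Println(len(h.Signature)) // prints 32
}
```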

Block Broadcast

An eligible miner broadcasts the completed block to the network and, assuming everything was done correctly, the network will accept it and other miners will mine on top of it, earning the miner a block reward!

Miners should output their valid block as soon as it is produced; otherwise they risk other miners receiving the block after the EPOCH_CUTOFF and not including it.

Block Rewards

TODO: Rework this.

Over the entire lifetime of the protocol, 1,400,000,000 FIL (TotalIssuance) will be given out to miners. The rate at which the funds are given out is set to halve every six years, smoothly (not in a fixed jump as in Bitcoin). These funds are initially held by the reward actor, and are transferred to miners in blocks that they mine. Over time, the reward will approach zero as the fractional amount given out at each step shrinks the network account’s balance toward 0.

The equation for the current block reward is of the form:

Reward = (IV * RemainingInNetworkActor) / TotalIssuance

IV is the initial value, and is set to:

IV = 153856861913558700202 attoFIL // 153.85 FIL

IV was derived from:

// Given one block every 30 seconds, this is how many blocks are in six years
HalvingPeriodBlocks = 6 * 365 * 24 * 60 * 2 = 6,307,200 blocks
λ = ln(2) / HalvingPeriodBlocks
IV = TotalIssuance * (1 - e^(-λ)) // Converted to attoFIL (1e18)
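The derivation can be sanity-checked numerically. The sketch below works in whole FIL rather than attoFIL for readability; the function names are illustrative, not spec names:

```go
package main

import (
	"fmt"
	"math"
)

const (
	TotalIssuanceFIL    = 1.4e9                 // 1,400,000,000 FIL
	HalvingPeriodBlocks = 6 * 365 * 24 * 60 * 2 // = 6,307,200 blocks
)

// InitialValueFIL recomputes IV from the derivation above, in whole FIL.
func InitialValueFIL() float64 {
	lambda := math.Log(2) / HalvingPeriodBlocks
	return TotalIssuanceFIL * (1 - math.Exp(-lambda))
}

// Reward computes the current block reward from the reward actor's
// remaining balance, per Reward = (IV * Remaining) / TotalIssuance.
func Reward(iv, remainingFIL float64) float64 {
	return iv * remainingFIL / TotalIssuanceFIL
}

func main() {
	iv := InitialValueFIL()
	fmt.Printf("IV = %.4f FIL\n", iv) // IV = 153.8569 FIL, matching ~153.85 FIL above
	fmt.Printf("after half is paid out: %.2f FIL\n", Reward(iv, TotalIssuanceFIL/2))
}
```

The reward halves smoothly: once half the balance has been paid out, each block pays half the initial reward, with no discrete step.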

Each of the miners who produced a block in a tipset will receive a block reward.

Note: Due to jitter in EC, and the Gregorian calendar, there may be some error in the issuance schedule over time. This is expected to be small enough that it’s not worth correcting for. Additionally, since the payout mechanism is transferring from the network account to the miner, there is no risk of minting too much FIL.

TODO: Ensure that if a miner earns a block reward while undercollateralized, then min(blockReward, requiredCollateral-availableBalance) is garnished (transferred to the miner actor instead of the owner).

Message Pool

The Message Pool is a subsystem in the Filecoin blockchain system. The message pool acts as the interface between Filecoin nodes and the peer-to-peer network used for off-chain message transmission. It is used by nodes to maintain a set of messages to transmit to the Filecoin VM (for “on-chain” execution).

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

type MessagePoolSubsystem struct {
    // needs access to:
    // - BlockchainSubsystem
    //   - needs access to StateTree
    //   - needs access to Messages mined into blocks (probably past finality)
    //     to remove from the MessagePool
    // - NetworkSubsystem
    //   - needs access to MessagePubsub
    //
    // Important remaining questions:
    // - how does BlockchainSubsystem.BlockReceiver handle asking for messages?
    // - how do we note messages are now part of the blockchain
    //   - how are they cleared from the mempool
    // - do we need to have some sort of purge?

    // AddNewMessage is called to add messages created at this node,
    // or to be propagated by this node. All messages enter the network
    // through one of these calls, in at least one filecoin node. They
    // are then propagated to other filecoin nodes via the MessagePool
    // subsystem. Other nodes receive and propagate Messages via their
    // own MessagePools.
    AddNewMessage(m msg.SignedMessage)

    // Stats returns information about the MessagePool contents.
    Stats() MessagePoolStats

    // FindMessage receives a descriptor query q, and returns a set of
    // messages currently in the mempool that match the Query constraints.
    // q may have all, any, or no constraints specified.
    // FindMessage(q MessageQuery) union {
    //  [base.Message],
    //  Error
    // }

    // MostProfitableMessages returns messages that are most profitable
    // to mine for this miner.
    //
    // Note: This is where algorithms about choosing best messages given
    //       many leaders should go.
    GetMostProfitableMessages(miner addr.Address) [msg.SignedMessage]
}

type MessagePoolStats struct {
    // Size is the number of messages in the MessagePool
    Size UInt
}

// MessageQuery is a descriptor used to find messages matching one or more
// of the constraints specified.
type MessageQuery struct {
    /*
  From   base.Address
  To     base.Address
  Method ActorMethodId
  Params ActorMethodParams

  ValueMin    TokenAmount
  ValueMax    TokenAmount
  GasPriceMin TokenAmount
  GasPriceMax TokenAmount
  GasLimitMin TokenAmount
  GasLimitMax TokenAmount
  */
}
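As a sketch of what GetMostProfitableMessages might do, one simple policy is to greedily take the highest gas-price messages that fit the block gas limit. This is an assumed policy, not the spec's algorithm, and real selection must also respect per-sender nonce ordering, which this illustration omits:

```go
package main

import (
	"fmt"
	"sort"
)

// Hypothetical reduced message; the real SignedMessage carries sender,
// nonce, payload, and signature.
type SignedMessage struct {
	GasPrice uint64
	GasLimit uint64
}

// MostProfitable greedily picks the highest gas-price messages whose gas
// limits fit within the block gas limit.
func MostProfitable(pool []SignedMessage, blockGasLimit uint64) []SignedMessage {
	sorted := append([]SignedMessage(nil), pool...)
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].GasPrice > sorted[j].GasPrice
	})
	var out []SignedMessage
	var used uint64
	for _, m := range sorted {
		if used+m.GasLimit <= blockGasLimit {
			out = append(out, m)
			used += m.GasLimit
		}
	}
	return out
}

func main() {
	pool := []SignedMessage{
		{GasPrice: 1, GasLimit: 50},
		{GasPrice: 5, GasLimit: 60},
		{GasPrice: 3, GasLimit: 50},
	}
	picked := MostProfitable(pool, 120)
	fmt.Println(len(picked)) // prints 2: the price-5 and price-3 messages fit
}
```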

Clients that use a message pool include:

  • storage market provider and client nodes - for transmission of deals on chain
  • storage miner nodes - for transmission of PoSts, sector commitments, deals, and other operations tracked on chain
  • verifier nodes - for transmission of potential faults on chain
  • relayer nodes - for forwarding and discarding messages appropriately.

The message pool subsystem is made of two components: the Message Syncer and Message Storage, described below.

TODOs:

  • discuss how messages are meant to propagate slowly/async
  • explain algorithms for choosing profitable txns

Message Syncer

TODO:

  • explain how the message syncer works
  • include the message syncer code
Message Propagation

Messages are propagated over the libp2p pubsub channel /fil/messages. On this channel, every serialized SignedMessage is announced (see Message).

Upon receiving the message, its validity must be checked: the signature must be valid, and the account in question must have enough funds to cover the actions specified. If the message is not valid it should be dropped and must not be forwarded.

TODO: discuss checking signatures and account balances; some tricky bits need consideration. Does the fund check cause improper dropping? E.g., I send funds from an account, then use the newly constructed account to send funds: as long as the first message hasn’t been executed, the second will be considered “invalid”, even though it won’t be at the time of execution.
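The two checks described above (signature validity, then funds to cover the actions specified) could be sketched as follows. The message fields and balance table are hypothetical stand-ins for the real signature verification and StateTree lookups:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical reduced message; real checks verify a BLS/secp signature
// and consult the StateTree for balance and nonce.
type SignedMessage struct {
	From     string
	Value    uint64
	GasLimit uint64
	GasPrice uint64
	SigOK    bool // stands in for real signature verification
}

// balances stands in for a StateTree balance lookup.
var balances = map[string]uint64{"t0100": 500}

// ValidateForPropagation applies both checks; a message failing either
// is dropped and MUST NOT be forwarded.
func ValidateForPropagation(m SignedMessage) error {
	if !m.SigOK {
		return errors.New("invalid signature")
	}
	required := m.Value + m.GasLimit*m.GasPrice
	if balances[m.From] < required {
		return errors.New("insufficient funds to cover value plus max gas")
	}
	return nil
}

func main() {
	ok := SignedMessage{From: "t0100", Value: 100, GasLimit: 10, GasPrice: 1, SigOK: true}
	broke := SignedMessage{From: "t0100", Value: 1000, SigOK: true}
	fmt.Println(ValidateForPropagation(ok), ValidateForPropagation(broke))
}
```

Note that, as the TODO above observes, checking funds against current state can drop messages that would be valid by execution time.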

Message Storage

TODO:

  • give sample algorithm for miner message selection in block production (to avoid dups)
  • give sample algorithm for message storage caching/purging policies.

ChainSync - synchronizing the Blockchain

What is blockchain synchronization?

Blockchain synchronization (“sync”) is a key part of a blockchain system. It handles retrieval and propagation of blocks and transactions (messages), and is thus in charge of distributed state replication. This process is security critical – problems here can be catastrophic to the operation of a blockchain.

What is ChainSync?

ChainSync is the protocol Filecoin uses to synchronize its blockchain. It is specific to Filecoin’s choices in state representation and consensus rules, but is general enough that it can serve other blockchains. ChainSync is a group of smaller protocols, which handle different parts of the sync process.

Terms and Concepts

  • LastCheckpoint the last hard social-consensus oriented checkpoint that ChainSync is aware of. This consensus checkpoint defines the minimum finality, and a minimum of history to build on. ChainSync takes LastCheckpoint on faith, and builds on it, never switching away from its history.
  • TargetHeads a list of BlockCIDs that represent blocks at the fringe of block production. These are the newest and best blocks ChainSync knows about. They are “target” heads because ChainSync will try to sync to them. This list is sorted by “likelihood of being the best chain” (eg for now, simply ChainWeight)
  • BestTargetHead the single best chain head BlockCID to try to sync to. This is the first element of TargetHeads

ChainSync Summary

At a high level, ChainSync does the following:

  • Part 1: Verify internal state (INIT state below)
    • SHOULD verify data structures and validate local chain
    • Resource expensive verification MAY be skipped at nodes’ own risk
  • Part 2: Bootstrap to the network (BOOTSTRAP)
    • Step 1. Bootstrap to the network, and acquire a “secure enough” set of peers (more details below)
    • Step 2. Bootstrap to the BlockPubsub channels
    • Step 3. Listen and serve on Graphsync
  • Part 3: Synchronize trusted checkpoint state (SYNC_CHECKPOINT)
    • Step 1. Start with a TrustedCheckpoint (defaults to GenesisCheckpoint).
    • Step 2. Get the block it points to, and that block’s parents
    • Step 3. Graphsync the StateTree
  • Part 4: Catch up to the chain (CHAIN_CATCHUP)
    • Step 1. Maintain a set of TargetHeads (BlockCIDs), and select the BestTargetHead from it
    • Step 2. Synchronize to the latest heads observed, validating blocks towards them (requesting intermediate points)
    • Step 3. As validation progresses, TargetHeads and BestTargetHead will likely change, as new blocks at the production fringe will arrive, and some target heads or paths to them may fail to validate.
    • Step 4. Finish when node has “caught up” with BestTargetHead (retrieved all the state, linked to local chain, validated all the blocks, etc).
  • Part 5: Stay in sync, and participate in block propagation (CHAIN_FOLLOW)
    • Step 1. If security conditions change, go back to Part 4 (CHAIN_CATCHUP)
    • Step 2. Receive, validate, and propagate received Blocks
    • Step 3. Now with greater certainty of having the best chain, finalize Tipsets, and advance chain state.

libp2p Network Protocols

As a networking-heavy protocol, ChainSync makes heavy use of libp2p. In particular, we use three sets of protocols:

  • libp2p.PubSub a family of publish/subscribe protocols to propagate recent Blocks. The concrete protocol choice impacts ChainSync’s effectiveness, efficiency, and security dramatically. For Filecoin v1.0 we will use libp2p.Gossipsub, a recent libp2p protocol that combines features and learnings from many excellent PubSub systems. In the future, Filecoin may use other PubSub protocols. Important note: it is entirely possible for Filecoin nodes to run multiple versions simultaneously. That said, this specification requires that Filecoin nodes MUST connect and participate in the main channel, using libp2p.Gossipsub.
  • libp2p.PeerDiscovery a family of discovery protocols, to learn about peers in the network. This is especially important for security because network “Bootstrap” is a difficult problem in peer-to-peer networks. The set of peers we initially connect to may completely dominate our awareness of other peers, and therefore all state. We use a union of PeerDiscovery protocols as each by itself is not secure or appropriate for users’ threat models. The union of these provides a pragmatic and effective solution. Discovery protocols marked as required MUST be included in implementations and will be provided by implementation teams. Protocols marked as optional MAY be provided by implementation teams but can be built independently by third parties to augment bootstrap security.
  • libp2p.DataTransfer a family of protocols for transferring data. Filecoin nodes must run libp2p.Graphsync.

More concretely, we use these protocols:

  • libp2p.PeerDiscovery
    • (required) libp2p.BootstrapList a protocol that uses a persistent and user-configurable list of semi-trusted bootstrap peers. The default list includes a set of peers semi-trusted by the Filecoin Community.
    • (optional) libp2p.KademliaDHT a dht protocol that enables random queries across the entire network
    • (required) libp2p.Gossipsub a pub/sub protocol that includes “prune peer exchange” by default, disseminating peer info as part of operation
    • (optional) libp2p.PersistentPeerstore a connectivity component that keeps persistent information about peers observed in the network throughout the lifetime of the node. This is useful because we resume and continually improve Bootstrap security.
    • (optional) libp2p.DNSDiscovery to learn about peers via DNS lookups to semi-trusted peer aggregators
    • (optional) libp2p.HTTPDiscovery to learn about peers via HTTP lookups to semi-trusted peer aggregators
    • (optional) libp2p.PEX a general use peer exchange protocol distinct from pubsub peer exchange for 1:1 adhoc peer exchange
  • libp2p.PubSub
    • (required) libp2p.Gossipsub the concrete libp2p.PubSub protocol ChainSync uses
  • libp2p.DataTransfer
    • (required) libp2p.Graphsync the data transfer protocol nodes must support for providing blockchain and user data
    • (optional) BlockSync a blockchain data transfer protocol that can be used by some nodes

Subcomponents

Aside from libp2p, ChainSync uses or relies on the following components:

  • Libraries:
    • ipld data structures, selectors, and protocols
    • ipld.Store local persistent storage for chain datastructures
    • ipld.Selector a way to express requests for chain data structures
    • ipfs.GraphSync a general-purpose ipld datastructure syncing protocol
  • Data Structures:
    • Data structures in the chain package: Block, Tipset, Chain, Checkpoint ...
    • chainsync.BlockCache a temporary cache of blocks, to constrain resource expended
    • chainsync.AncestryGraph a datastructure to efficiently link Blocks, Tipsets, and PartialChains
    • chainsync.ValidationGraph a datastructure for efficient and secure validation of Blocks and Tipsets
Graphsync in ChainSync

ChainSync is written in terms of Graphsync. ChainSync adds blockchain and filecoin-specific synchronization functionality that is critical for Filecoin security.

Rate Limiting Graphsync responses (SHOULD)

When running Graphsync, Filecoin nodes must respond to graphsync queries. Filecoin requires nodes to provide critical data structures to others, otherwise the network will not function. During ChainSync, it is in operators’ interests to provide data structures critical to validating, following, and participating in the blockchain they are on. However, this has limitations, and some level of rate limiting is critical for maintaining security in the presence of attackers who might issue large Graphsync requests to cause DOS.

We recommend the following:

  • Set and enforce batch size rate limits. Force selectors to be shaped like: LimitedBlockIpldSelector(blockCID, BatchSize) for a single constant BatchSize = 1000. Nodes may push for this equilibrium by only providing BatchSize objects in responses, even for pulls much larger than BatchSize. This forces subsequent pulls to be run, re-rooted appropriately, and hints at other parties that they should be requesting with that BatchSize.
  • Force all Graphsync queries for blocks to be aligned along cacheable boundaries. In conjunction with a BatchSize, implementations should aim to cache the results of Graphsync queries, so that they may propagate them to others very efficiently. Aligning on certain boundaries (eg specific ChainEpoch limits) increases the likelihood many parties in the network will request the same batches of content. Another good cacheable boundary is the entire contents of a Block (BlockHeader, Messages, Signatures, etc).
  • Maintain per-peer rate-limits. Use bandwidth usage to decide whether to respond and how much on a per-peer basis. Libp2p already tracks bandwidth usage in each connection. This information can be used to impose rate limits in Graphsync and other Filecoin protocols.
  • Detect and react to DOS: restrict operation. The safest implementations will likely detect and react to DOS attacks. Reactions could include:
    • Smaller Graphsync.BatchSize limits
    • Fewer connections to other peers
    • Rate limit total Graphsync bandwidth
    • Assign Graphsync bandwidth based on a peer priority queue
    • Disconnect from and do not accept connections from unknown peers
    • Introspect Graphsync requests and filter/deny/rate limit suspicious ones
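The BatchSize recommendation above can be sketched as a responder that caps every reply and a requester that re-roots each subsequent pull at the point the previous batch ended. The function names and flat-slice chain model are hypothetical simplifications of real Graphsync selectors:

```go
package main

import "fmt"

// BatchSize caps how many objects a single response may carry.
const BatchSize = 1000

// ServeBlocks returns at most BatchSize objects starting at root,
// regardless of how large the original request was.
func ServeBlocks(chain []string, root int) []string {
	end := root + BatchSize
	if end > len(chain) {
		end = len(chain)
	}
	return chain[root:end]
}

// PullAll drives repeated batched pulls, re-rooting each subsequent pull
// after the last object received, until the requested range is complete.
func PullAll(chain []string) int {
	got, root := 0, 0
	for root < len(chain) {
		batch := ServeBlocks(chain, root)
		got += len(batch)
		root += len(batch) // re-root the next pull after this batch
	}
	return got
}

func main() {
	chain := make([]string, 2500)
	fmt.Println(PullAll(chain)) // prints 2500, fetched in batches of <= 1000
}
```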
Previous BlockSync protocol

Prior versions of this spec recommended a BlockSync protocol. This protocol definition is available here. Filecoin nodes are libp2p nodes, and therefore may run a variety of other protocols, including this BlockSync protocol. As with anything else in Filecoin, nodes MAY opt to use additional protocols to achieve these results. That said, nodes MUST implement the version of ChainSync as described in this spec in order to be considered implementations of Filecoin. Test suites will assume this protocol.

ChainSync State Machine

ChainSync uses the following conceptual state machine. Since this is a conceptual state machine, implementations MAY deviate from implementing precisely these states, or dividing them strictly. Implementations MAY blur the lines between the states. If so, implementations MUST ensure security of the altered protocol.

State Machine:

ChainSync State Machine (open in new tab)
ChainSync FSM: INIT
  • beginning state. no network connections, not synchronizing.
  • local state is loaded: internal data structures (eg chain, cache) are loaded
  • LastTrustedCheckpoint is set to the latest network-wide accepted TrustedCheckpoint
  • FinalityTipset is set to finality achieved in a prior protocol run.
    • Default: If no later FinalityTipset has been achieved, set FinalityTipset to LastTrustedCheckpoint
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is whatever was loaded from prior executions (worst case is LastTrustedCheckpoint)
  • security conditions to transition out:
    • local state and data structures SHOULD be verified to be correct
    • this means validating any parts of the chain or StateTree the node has, from LastTrustedCheckpoint on.
    • LastTrustedCheckpoint is well-known across the Filecoin Network to be a true TrustedCheckpoint
    • this SHOULD NOT be verified in software, it SHOULD be verified by operators
    • Note: we ALWAYS have at least one TrustedCheckpoint, the GenesisCheckpoint.
  • transitions out:
    • once done verifying things: move to BOOTSTRAP
ChainSync FSM: BOOTSTRAP
  • network.Bootstrap(): establish connections to peers until we satisfy security requirement
    • for better security, use many different libp2p.PeerDiscovery protocols
  • BlockPubsub.Bootstrap(): establish connections to BlockPubsub peers
    • The subscription is for both peer discovery and to start selecting best heads. Listening on pubsub from the start keeps the node informed about potential head changes.
  • Graphsync.Serve(): set up a Graphsync service, that responds to others’ queries
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is whatever was loaded from prior executions (worst case is LastTrustedCheckpoint).
  • security conditions to transition out:
    • Network connectivity MUST have reached the security level acceptable for ChainSync
    • BlockPubsub connectivity MUST have reached the security level acceptable for ChainSync
    • “on time” blocks MUST be arriving through BlockPubsub
  • transitions out:
    • once bootstrap is deemed secure enough:
    • if node does not have the Blocks or StateTree corresponding to LastTrustedCheckpoint: move to SYNC_CHECKPOINT
    • otherwise: move to CHAIN_CATCHUP
ChainSync FSM: SYNC_CHECKPOINT
  • While in this state:
    • ChainSync is well-bootstrapped, but does not yet have the Blocks or StateTree for LastTrustedCheckpoint
    • ChainSync issues Graphsync requests to its peers randomly for the Blocks and StateTree for LastTrustedCheckpoint:
    • ChainSync’s counterparts in other peers MUST provide the state tree.
    • It is only semi-rational to do so, so ChainSync may have to try many peers.
    • Some of these requests MAY fail.
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is the available Blocks and StateTree for LastTrustedCheckpoint.
  • Important Notes:
    • ChainSync needs to fetch several blocks: the Block pointed at by LastTrustedCheckpoint, and its direct Block.Parents.
    • Nodes only need hashing to validate these Blocks and StateTrees – no block validation or state machine computation is needed.
    • The initial value of LastTrustedCheckpoint is GenesisCheckpoint, but it MAY be a value later in Chain history.
    • LastTrustedCheckpoint enables efficient syncing by making the implicit economic consensus of chain history explicit.
    • By allowing fetching of the StateTree of LastTrustedCheckpoint via Graphsync, ChainSync can yield much more efficient syncing than comparable blockchain synchronization protocols, as syncing and validation can start there.
    • Nodes DO NOT need to validate the chain from GenesisCheckpoint. LastTrustedCheckpoint MAY be a value later in Chain history.
    • Nodes DO NOT need to but MAY sync earlier StateTrees than LastTrustedCheckpoint as well.
  • Pseudocode 1: a basic version of SYNC_CHECKPOINT:

    func (c *ChainSync) SyncCheckpoint() {
        while !c.HasCompleteStateTreeFor(c.LastTrustedCheckpoint) {
            selector := ipldselector.SelectAll(c.LastTrustedCheckpoint)
            c.Graphsync.Pull(c.Peers, selector, c.IpldStore)
            // Pull SHOULD NOT pull what c.IpldStore already has (check first)
            // Pull SHOULD pull from different peers simultaneously
            // Pull SHOULD be efficient (try different parts of the tree from many peers)
            // Graphsync implementations may not offer these features. These features
            // can be implemented on top of a graphsync that only pulls from a single
            // peer and does not check local store first.
        }
        c.ChainCatchup() // on to CHAIN_CATCHUP
    }
  • security conditions to transition out:

    • StateTree for LastTrustedCheckpoint MUST be stored locally and verified (hashing is enough)
  • transitions out:

    • once node receives and verifies complete StateTree for LastTrustedCheckpoint: move to CHAIN_CATCHUP
ChainSync FSM: CHAIN_CATCHUP
  • While in this state:
    • ChainSync is well-bootstrapped, and has an initial trusted StateTree to start from.
    • ChainSync is receiving latest Blocks from BlockPubsub
    • ChainSync starts fetching and validating blocks
    • ChainSync has unvalidated blocks between ChainSync.FinalityTipset and ChainSync.TargetHeads
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has:
    • FinalityTipset does not change.
    • No new blocks are reported to consumers/users of ChainSync yet.
    • The chain state provided is the available Blocks and StateTree for all available epochs, especially the FinalityTipset.
    • finality must not move forward here because there are serious attack vectors where a node can be forced to end up on the wrong fork if finality advances before validation is complete up to the block production fringe.
    • Validation must advance, all the way to the block production fringe:
    • Validate the whole chain, from FinalityTipset to BestTargetHead
    • The node can reach BestTargetHead only to find out it was invalid, then has to update BestTargetHead with next best one, and sync to it (without having advanced FinalityTipset yet, as otherwise we may end up on the wrong fork)
  • security conditions to transition out:
    • Gaps between ChainSync.FinalityTipset ... ChainSync.BestTargetHead have been closed:
    • All Blocks and their content MUST be fetched, stored, linked, and validated locally. This includes BlockHeaders, Messages, etc.
    • Bad heads have been expunged from ChainSync.TargetHeads. Bad heads include heads that initially seemed good but turned out invalid, or heads that ChainSync has failed to connect (ie. cannot fetch ancestors connecting back to ChainSync.FinalityTipset within a reasonable amount of time).
    • All blocks between ChainSync.FinalityTipset ... ChainSync.TargetHeads have been validated. This means all blocks before the best heads.
    • Not under a temporary network partition
  • transitions out:
    • once gaps between ChainSync.FinalityTipset ... ChainSync.TargetHeads are closed: move to CHAIN_FOLLOW
    • (Perhaps moving to CHAIN_FOLLOW when validation is 1-2 blocks behind may be OK.
    • We don’t know we have the right head until we validate it, so if other heads of similar height are right/better, we won’t know until then.)
ChainSync FSM: CHAIN_FOLLOW
  • While in this state:
    • ChainSync is well-bootstrapped, and has an initial trusted StateTree to start from.
    • ChainSync fetches and validates blocks.
    • ChainSync is receiving and validating latest Blocks from BlockPubsub
    • ChainSync DOES NOT have unvalidated blocks between ChainSync.FinalityTipset and ChainSync.TargetHeads
    • ChainSync MUST drop back to another state if security conditions change.
    • Keep a set of gap measures:
    • BlockGap is the number of remaining blocks to validate between the Validated blocks and BestTargetHead.
      • (i.e. how many blocks we need to validate to have validated BestTargetHead; does not include null blocks)
    • EpochGap is the number of epochs between the latest validated block, and BestTargetHead (includes null blocks).
    • MaxBlockGap = 2: the maximum number of blocks ChainSync may fall behind before switching back to CHAIN_CATCHUP (does not include null blocks)
    • MaxEpochGap = 10: the maximum number of epochs ChainSync may fall behind before switching back to CHAIN_CATCHUP (includes null blocks)
  • Chain State and Finality:
    • In this state, the chain MUST advance as all the blocks up to BestTargetHead are validated.
    • New blocks are finalized as they cross the finality threshold (ValidG.Heads[0].ChainEpoch - FinalityLookback)
    • New finalized blocks are reported to consumers.
    • The chain state provided includes the Blocks and StateTree for the Finality epoch, as well as candidate Blocks and StateTrees for unfinalized epochs.
  • security conditions to transition out:
    • Temporary network partitions (see Detecting Network Partitions).
    • Encounter gaps of >MaxBlockGap or >MaxEpochGap between Validated set and a new ChainSync.BestTargetHead
  • transitions out:
    • if a temporary network partition is detected: move to CHAIN_CATCHUP
    • if BlockGap > MaxBlockGap: move to CHAIN_CATCHUP
    • if EpochGap > MaxEpochGap: move to CHAIN_CATCHUP
    • if node is shut down: move to INIT
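The transition rules above can be sketched as a small pure function. This is a non-normative sketch: the state names and gap parameters mirror the spec's, but `nextState` itself is illustrative, not part of the protocol code.

```go
package main

import "fmt"

// SyncState is an illustrative stand-in for the ChainSync FSM states above.
type SyncState int

const (
	ChainCatchup SyncState = iota
	ChainFollow
)

const (
	MaxBlockGap = 2  // blocks behind BestTargetHead (excludes null blocks)
	MaxEpochGap = 10 // epochs behind BestTargetHead (includes null blocks)
)

// nextState applies the transition rules: fall back to CHAIN_CATCHUP when
// gaps grow too large or a network partition is detected; move to
// CHAIN_FOLLOW once all gaps up to the target heads are closed.
func nextState(cur SyncState, blockGap, epochGap int, partitioned bool) SyncState {
	switch cur {
	case ChainFollow:
		if partitioned || blockGap > MaxBlockGap || epochGap > MaxEpochGap {
			return ChainCatchup
		}
	case ChainCatchup:
		if blockGap == 0 && !partitioned {
			return ChainFollow
		}
	}
	return cur
}

func main() {
	fmt.Println(nextState(ChainFollow, 3, 5, false) == ChainCatchup)  // BlockGap > MaxBlockGap
	fmt.Println(nextState(ChainFollow, 1, 11, false) == ChainCatchup) // EpochGap > MaxEpochGap
	fmt.Println(nextState(ChainCatchup, 0, 0, false) == ChainFollow)  // gaps closed
}
```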

Block Fetching, Validation, and Propagation

Notes on changing TargetHeads while syncing
  • TargetHeads changes while syncing, as ChainSync must be aware of the best heads at any time: reorgs happen, our first set of peers could've been bad, and we keep discovering others.
    • The Hello protocol is good, but it is polling; unless the node is constantly polling, it won't see all the heads.
    • BlockPubsub gives us the realtime view into what's actually going on.
    • Weight can also be close between 2+ possible chains (long-forked), and ChainSync must select the right one (which we may not be able to distinguish until validating all the way).
  • Fetching + validation must be strictly faster per round on average than block production/block time (if they're not, the node will always fall behind), so we definitely catch up eventually (and even quickly). The last couple of rounds can be close (“almost got it, almost got it, there”).
General notes on fetching Blocks
  • ChainSync selects and maintains a set of the most likely heads to be correct from among those received via BlockPubsub. As more blocks are received, the set of TargetHeads is reevaluated.
  • ChainSync fetches Blocks, Messages, and StateTree through the Graphsync protocol.
  • ChainSync maintains sets of Blocks/Tipsets in Graphs (see ChainSync.id)
  • ChainSync gathers a list of TargetHeads from BlockPubsub, sorted by likelihood of being the best chain (see below).
  • ChainSync makes requests for chains of BlockHeaders to close gaps between TargetHeads
  • ChainSync forms partial unvalidated chains of BlockHeaders, from those received via BlockPubsub, and those requested via Graphsync.
  • ChainSync attempts to form fully connected chains of BlockHeaders, starting from the StateTree and working toward the observed Heads
  • ChainSync minimizes resource expenditures to fetch and validate blocks, to protect against DOS attack vectors. ChainSync employs Progressive Block Validation, validating different facets at different stages of syncing.
  • ChainSync delays syncing Messages until they are needed. Much of the structure of the partial chains can be checked and used to make syncing decisions without fetching the Messages.
Progressive Block Validation
  • Blocks may be validated in progressive stages, in order to minimize resource expenditure.
  • Validation computation is considerable, and a serious DOS attack vector.
  • Secure implementations must carefully schedule validation and minimize the work done by pruning blocks without validating them fully.
  • ChainSync SHOULD keep a cache of unvalidated blocks (ideally sorted by likelihood of belonging to the chain), and delete unvalidated blocks when they are passed by FinalityTipset, or when ChainSync is under significant resource load.
  • These stages can be used partially across many blocks in a candidate chain, in order to prune out clearly bad blocks long before actually doing the expensive validation work.

  • Progressive Stages of Block Validation

    • BV0 - Syntax: Serialization, typing, value ranges.
    • BV1 - Plausible Consensus: Plausible miner, weight, and epoch values (e.g from chain state at b.ChainEpoch - consensus.LookbackParameter).
    • BV2 - Block Signature
    • BV3 - ElectionPoSt: Correct PoSt with a winning ticket.
    • BV4 - Chain ancestry and finality: Verify the block links back to the trusted chain, not prior to finality.
    • BV5 - Message Signatures: Verify the signatures of all messages included in the block.
    • BV6 - State tree: Parent tipset message execution produces the claimed state tree root and receipts.

Notes:
  • In CHAIN_CATCHUP, if a node is receiving/fetching hundreds or thousands of BlockHeaders, validating signatures can be very expensive and can be deferred in favor of other validation (i.e. with lots of BlockHeaders coming in through the network pipe, we don't want to be bound on signature verification; other cheap checks (BV0, BV1) can help drop bad blocks on the floor faster).
  • In CHAIN_FOLLOW, we're not receiving thousands; we're receiving maybe a dozen or two dozen packets in a few seconds. We receive the CID with the signature and address first (ideally fitting in one packet), and can afford to (a) check whether we already have the CID (if so, we're done, cheaply), or (b) if not, check whether the signature is correct before fetching the header (an expensive computation, but checking one signature is far faster than checking a ton). In practice, which one to do likely depends on miner trade-offs. We'll recommend something but let miners decide, because one strategy or the other may be much more effective depending on their hardware, their bandwidth limitations, or their propensity to getting DoSed.

Progressive Block Propagation (or BlockSend)
  • In order to make Block propagation more efficient, we trade off network round trips for bandwidth usage.
  • Motivating observations:
    • Block propagation is one of the most security critical points of the whole protocol.
    • Bandwidth usage during Block propagation is the biggest rate limiter for network scalability.
    • The time it takes for a Block to propagate to the whole network is a critical factor in determining a secure BlockTime
    • Blocks propagating through the network should take as few sequential roundtrips as possible, as these roundtrips impose serious block time delays. However, interleaved roundtrips may be fine. Meaning that block.CIDs may be propagated on their own, without the header, then the header without the messages, then the messages.
    • Blocks will propagate over a libp2p.PubSub. libp2p.PubSub.Messages will most likely arrive multiple times at a node. Therefore, using only the block.CID here could make this very cheap in bandwidth (more expensive in round trips)
    • Blocks in a single epoch may include the same Messages, and duplicate transfers can be avoided
    • Messages propagate through their own MessagePubsub, and nodes have a significant probability of already having a large fraction of the messages in a block. Since messages are the bulk of the size of a Block, this can present great bandwidth savings.
  • Progressive Steps of Block Propagation
    • IMPORTANT NOTES:
      • These can be effectively pipelined. The receiver is in control of what to pull, and when. It is up to them to decide when to trade off RTTs for bandwidth.
      • If the sender is propagating the block at all to receiver, it is in their interest to provide the full content to receiver when asked. Otherwise the block may not get included at all.
      • Lots of security assumptions here – this needs to be hyper verified, in both spec and code.
      • sender is a filecoin node running ChainSync, propagating a block via Gossipsub (as the originator, as another peer in the network, or just a Gossipsub router).
      • receiver is the local filecoin node running ChainSync, trying to get the blocks.
      • For receiver to Pull things from sender, receiver must connect to sender. Usually sender is sending to receiver because of the Gossipsub propagation rules. receiver could choose to Pull from any other node they are connected to, but it is most likely sender will have the needed information, as they are usually more well-connected in the network.
    • Step 1. (sender) Push BlockHeader:
      • sender sends block.BlockHeader to receiver via Gossipsub:
        • bh := Gossipsub.Send(h block.BlockHeader)
        • This is a light-ish object (<4KB).
      • receiver receives bh.
        • This has many fields that can be validated before pulling the messages. (See Progressive Block Validation).
        • BV0, BV1, BV2, and BV3 validation takes place before propagating bh to other nodes.
        • receiver MAY receive many advertisements for each winning block in an epoch in quick succession. This is because (a) many want propagation as fast as possible, (b) many want to make those network advertisements as light as reasonable, (c) we want to enable receiver to choose who to ask it from (usually the first party to advertise it, and that's what the spec will recommend), and (d) we want to be able to fall back to asking others if that fails (failure = not receiving it within 1s or so).
    • Step 2. (receiver) Pull MessageCids:
      • upon receiving bh, receiver checks whether it already has the full block for bh.BlockCID. if not:
        • receiver requests bh.MessageCids from sender:
          • bm := Graphsync.Pull(sender, SelectAMTCIDs(bh.Messages))
    • Step 3. (receiver) Pull Messages:
      • if receiver DOES NOT already have all the Messages for bh.BlockCID, then:
        • if receiver has some of the messages:
          • receiver requests missing Messages from sender:
            • Graphsync.Pull(sender, SelectAll(bm[3], bm[10], bm[50], ...)) or
            • for m in bm {
              Graphsync.Pull(sender, SelectAll(m))
              }
              
        • if receiver does not have any of the messages (default safe but expensive thing to do):
          • receiver requests all Messages from sender:
            • Graphsync.Pull(sender, SelectAll(bh.Messages))
        • (This is the largest amount of stuff)
    • Step 4. (receiver) Validate Block:
      • the only remaining thing to do is to complete Block Validation.
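The receiver side of Steps 1-4 can be sketched as follows. This is a non-normative sketch: the `Receiver` state and the `pullMissing` stand-in for Graphsync are illustrative placeholders, showing only the decision flow (skip known blocks, pull only missing messages, then validate).

```go
package main

import "fmt"

// CID is an illustrative stand-in for a content identifier.
type CID string

// BlockHeader carries the block CID plus the CIDs of its messages.
type BlockHeader struct {
	BlockCID    CID
	MessageCids []CID
}

// Receiver models the local node's knowledge during propagation.
type Receiver struct {
	haveBlocks   map[CID]bool // fully fetched and validated blocks
	haveMessages map[CID]bool // messages already known via MessagePubsub
}

// pullMissing stands in for a Graphsync pull of only the missing message
// CIDs; it returns how many messages had to be fetched over the network.
func (r *Receiver) pullMissing(cids []CID) (fetched int) {
	for _, c := range cids {
		if !r.haveMessages[c] {
			r.haveMessages[c] = true // pretend the pull succeeded
			fetched++
		}
	}
	return
}

// onHeader implements the progressive flow: short-circuit if the block is
// already known (Step 2), pull only missing messages (Step 3), validate (Step 4).
func (r *Receiver) onHeader(bh BlockHeader) int {
	if r.haveBlocks[bh.BlockCID] {
		return 0 // already have the full block; nothing to pull
	}
	n := r.pullMissing(bh.MessageCids)
	r.haveBlocks[bh.BlockCID] = true // full Block Validation would run here
	return n
}

func main() {
	r := &Receiver{
		haveBlocks:   map[CID]bool{},
		haveMessages: map[CID]bool{"m1": true}, // m1 already seen via MessagePubsub
	}
	// Only m2 needs fetching, so one message is pulled.
	fmt.Println(r.onHeader(BlockHeader{BlockCID: "b1", MessageCids: []CID{"m1", "m2"}}))
}
```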

Calculations

Security Parameters
  • Peers >= 32 – direct connections
    • ideally Peers >= {64, 128}
Pubsub Bandwidth

These bandwidth calculations are used to motivate choices in ChainSync.

If you imagine that you will receive the header once per gossipsub peer (or if lucky, half of them), and that there are EC.E_LEADERS=10 blocks per round, then we're talking about the difference between:

16 peers, 1 pkt  -- 1 * 16 * 10 = 160 dup pkts (256KB) in <5s
16 peers, 4 pkts -- 4 * 16 * 10 = 640 dup pkts (1MB)   in <5s

32 peers, 1 pkt  -- 1 * 32 * 10 =   320 dup pkts (512KB) in <5s
32 peers, 4 pkts -- 4 * 32 * 10 = 1,280 dup pkts (2MB)   in <5s

64 peers, 1 pkt  -- 1 * 64 * 10 =   640 dup pkts (1MB)   in <5s
64 peers, 4 pkts -- 4 * 64 * 10 = 2,560 dup pkts (4MB)   in <5s

2MB in <5s may not be worth saving, and maybe gossipsub can be much better about suppressing duplicates.
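The arithmetic above can be reproduced with a one-line helper (a sketch; the peer counts, packets-per-header, and EC.E_LEADERS=10 values come from the example, not from normative parameters):

```go
package main

import "fmt"

// dupPackets estimates duplicate header packets received per round: each of
// `peers` gossipsub peers may forward each of `leaders` winning blocks, with
// each header split across `pkts` packets.
func dupPackets(peers, pkts, leaders int) int {
	return peers * pkts * leaders
}

func main() {
	fmt.Println(dupPackets(16, 1, 10)) // 160 dup pkts
	fmt.Println(dupPackets(32, 4, 10)) // 1280 dup pkts
	fmt.Println(dupPackets(64, 4, 10)) // 2560 dup pkts
}
```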

Notes (TODO: move elsewhere)

Checkpoints
  • A checkpoint is the CID of a block (not a tipset list of CIDs, or StateTree)
  • The reason a block is OK is that it uniquely identifies a tipset.
  • Using tipsets directly would make Checkpoints harder to communicate. We want to make checkpoints a single hash, as short as we can have it. They will be shared in tweets, URLs, emails, printed into newspapers, etc. Compactness, ease of copy-paste, etc. matter.
  • We'll make human-readable lists of checkpoints, and making “lists of lists” is more annoying.
  • When we have EC.E_PARENTS > 5, or even = 10, tipsets will get annoyingly large.
  • The big quirk with using a block is that it must also be in the chain (if you relaxed that constraint, you could end up in the weird case where a checkpoint isn't in the chain, which violates assumptions).

Bootstrap chain stub
  • The mainnet Filecoin chain will need to start with a small chain stub of blocks.
  • We must include some data in different blocks.
  • We do need a genesis block: we derive randomness from the ticket there. Rather than special-casing, it is easier/less complex to ensure a well-formed chain always, including at the beginning.
  • A lot of code expects lookbacks, especially actor code. Rather than introducing a bunch of special-case logic for what happens ostensibly once in network history (special-case logic which adds complexity and likelihood of problems), it is easiest to assume the chain is always at least X blocks long, so that the system lookback parameters are all fine and don't need to be scaled at the beginning of the network's history.
PartialGraph

The PartialGraph of blocks.

Is a graph necessarily connected, or is this just a bag of blocks, with each disconnected subgraph being reported in heads/tails?

The latter: the partial graph is a DAG fragment, including disconnected components. As a visual example, consider four PartialGraphs, each with Heads and Tails (note they aren't tipsets).

Storage Power Consensus

The Storage Power Consensus subsystem is the main interface which enables Filecoin nodes to agree on the state of the system. SPC accounts for individual storage miners’ effective power over consensus in given chains in its Power Table. It also runs Expected Consensus (the underlying consensus algorithm in use by Filecoin), enabling storage miners to run leader election and generate new blocks updating the state of the Filecoin system.

Succinctly, the SPC subsystem offers the following services:
  • Access to the Power Table for every subchain, accounting for individual storage miner power and total power on-chain.
  • Access to Expected Consensus for individual storage miners, enabling:
    • Access to verifiable randomness Tickets as needed in the rest of the protocol.
    • Running Secret Leader Election to produce new blocks.
    • Running Chain Selection across subchains using EC's weighting function.
    • Identification of the most recently finalized tipset, for use by all protocol participants.

Much of the Storage Power Consensus subsystem's functionality is detailed in the code below, but we touch upon some of its behaviors in more detail.

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import blockchain "github.com/filecoin-project/specs/systems/filecoin_blockchain"

type StoragePowerConsensusSubsystem struct {//(@mutable)
    ChooseTipsetToMine(tipsets [chain.Tipset]) [chain.Tipset]

    ec          ExpectedConsensus
    blockchain  blockchain.BlockchainSubsystem

    // called by BlockchainSubsystem during block reception
    ValidateBlock(block block.Block) error

    IsWinningPartialTicket(
        st                 st.StateTree
        partialTicket      sector.PartialTicket
        sectorUtilization  block.StoragePower
        numSectors         util.UVarint
    ) bool

    _getStoragePowerActorState(stateTree st.StateTree) StoragePowerActorState

    validateTicket(
        tix             block.Ticket
        pk              filcrypto.VRFPublicKey
        minerActorAddr  addr.Address
    ) bool

    computeChainWeight(tipset chain.Tipset) block.ChainWeight

    StoragePowerConsensusError() StoragePowerConsensusError

    // Randomness methods

    // called by StorageMiningSubsystem during block production
    GetTicketProductionRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness

    // called by StorageMiningSubsystem when sealing a sector
    GetSealRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness

    // called by StorageMiningSubsystem after sealing
    GetPoStChallengeRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness

    GetFinality()     block.ChainEpoch
    FinalizedEpoch()  block.ChainEpoch
}

type StoragePowerConsensusError struct {}
Distinguishing between storage miners and block miners

There are two ways to earn Filecoin tokens in the Filecoin network:
  • By participating in the Storage Market as a storage provider and being paid by clients for file storage deals.
  • By mining new blocks on the network, helping modify system state and secure the Filecoin consensus mechanism.

We must distinguish between the two types of “miners” (storage and block miners). Secret Leader Election in Filecoin is predicated on a miner's storage power. Thus, while all block miners will be storage miners, the reverse is not necessarily true.

However, given Filecoin’s “useful Proof-of-Work” is achieved through file storage (PoRep and PoSt), there is little overhead cost for storage miners to participate in leader election. Such a Storage Miner Actor need only register with the Storage Power Actor in order to participate in Expected Consensus and mine blocks.

On Power

Per the above, we also clearly distinguish putting storage on-chain from gaining power in consensus (sometimes called “Storage Power”) as follows:

Consensus power in Filecoin is defined as the intersection between:
  • Proven storage as of PoSt verification (i.e. storage in the Proving Set, since it will all be active by the time the PoSt is computed).
  • In-deal storage.

Put another way, consensus power is in-deal storage in the Proving Set. For instance, if a miner had two 32GB sectors, each with 20GB of in-deal storage, the miner would have 40GB worth of storage power for SPC.
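Under this definition, a miner's consensus power is a sum over its Proving Set. The sketch below is illustrative only: the `Sector` type and field names are placeholders, not the spec's data structures.

```go
package main

import "fmt"

// Sector is an illustrative stand-in for a sector in the miner's Proving Set.
type Sector struct {
	SizeGB   uint64 // committed (proven) capacity
	InDealGB uint64 // portion of the sector covered by active storage deals
}

// consensusPower returns the in-deal storage across the Proving Set: only
// bytes that are both proven and in-deal count toward SPC power.
func consensusPower(provingSet []Sector) uint64 {
	var power uint64
	for _, s := range provingSet {
		power += s.InDealGB
	}
	return power
}

func main() {
	// Two 32GB sectors, each with 20GB of in-deal storage => 40GB of power.
	fmt.Println(consensusPower([]Sector{{32, 20}, {32, 20}}))
}
```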

Read more in Storage Miner.

Tickets

Tickets are used across the Filecoin protocol as sources of randomness:
  • The Sector Sealer uses tickets as SealSeeds to bind sector commitments to a given subchain.
  • The Storage Miner likewise uses tickets as PoStChallenges to prove sectors remain committed as of a given block.
  • They are drawn by the Storage Power subsystem as randomness in Secret Leader Election to determine a miner's eligibility to mine a block.
  • They are drawn by the Storage Power subsystem in order to generate new tickets for future use.

Each of these ticket uses may require drawing tickets at different chain epochs, according to the security requirements of the particular protocol making use of tickets. Specifically, the ticket output (which is a SHA256 output) is used for randomness.

In Filecoin, every block header contains a single ticket.

You can find the Ticket data structure here.

Comparing Tickets in a Tipset

Whenever comparing tickets is invoked in Filecoin, for instance when discussing selecting the “min ticket” in a Tipset, the comparison is that of the little-endian representation of the ticket's VRFOutput bytes.
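A little-endian byte comparison treats index 0 as the least significant byte, so equal-length ticket outputs can be compared from the last byte down. This is a non-normative sketch assuming both inputs are hash outputs of the same length:

```go
package main

import "fmt"

// compareLE compares two equal-length byte strings as little-endian
// integers: byte 0 is least significant, so the most significant bytes
// (at the end of the slice) are compared first.
// Returns -1 if a < b, 1 if a > b, 0 if equal.
func compareLE(a, b []byte) int {
	for i := len(a) - 1; i >= 0; i-- {
		if a[i] < b[i] {
			return -1
		}
		if a[i] > b[i] {
			return 1
		}
	}
	return 0
}

func main() {
	// {1, 2} as little-endian is 0x0201 = 513; {2, 1} is 0x0102 = 258,
	// so the first ticket is larger.
	fmt.Println(compareLE([]byte{1, 2}, []byte{2, 1}))
}
```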

The Ticket chain and drawing randomness

While each Filecoin block header contains a ticket field (see Tickets), it is useful to think of a ticket chain abstraction. Due to the nature of Filecoin’s Tipsets and the possibility of using tickets from epochs that did not yield leaders to produce randomness at a given epoch, tracking the canonical ticket of a subchain at a given height can be arduous to reason about in terms of blocks. To that end, it is helpful to create a ticket chain abstraction made up of only those tickets to be used for randomness generation at a given height.

To read more about specifically how tickets are processed for randomness, see Randomness.

To sample a ticket for a given epoch n:

Set referenceTipsetOffset = 0
While true:
    Set referenceTipsetHeight = n - referenceTipsetOffset
    If blocks were mined at referenceTipsetHeight:
        ReferenceTipset = TipsetAtHeight(referenceTipsetHeight)
        Select the block in ReferenceTipset with the smallest final ticket, return its value (pastTicket).
    If no blocks were mined at referenceTipsetHeight:
        Increment referenceTipsetOffset
        (Repeat)
newRandomness = H(TicketDrawDST || index || Serialization(epoch || pastTicketOutput))

In English, this means two things:
  • Choose the smallest ticket in the Tipset if it contains multiple blocks.
  • When sampling a ticket from an epoch with no blocks, draw the min ticket from the prior epoch with blocks, concatenate it with the wanted epoch number, and hash this concatenation for a usable ticket value.

See the RandomnessAtEpoch method below:

package chain

import (
	"github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	"github.com/filecoin-project/specs/util"
)

// Returns the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) TipsetAtEpoch(epoch block.ChainEpoch) Tipset {
	current := chain.HeadTipset()
	for current.Epoch() > epoch {
		current = current.Parents()
	}

	return current
}

// Draws randomness from the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) RandomnessAtEpoch(epoch block.ChainEpoch) util.Bytes {
	ts := chain.TipsetAtEpoch(epoch)
	return ts.MinTicket().DrawRandomness(epoch)
}

The above means that ticket randomness is reseeded with every new block, but can indeed be derived by any miner for an arbitrary epoch number using a past epoch.

Randomness Ticket generation

This section discusses how tickets are generated by EC for the Ticket field in every block header.

At round N, a new ticket is generated using tickets drawn from the Tipset at round N-1 (as shown below).

The miner runs the prior ticket through a Verifiable Random Function (VRF) to get a new unique ticket which can later be derived for randomness (as shown above). The prior ticket is prepended with the ticket domain separation tag and concatenated with the miner actor address (to ensure miners using the same worker keys get different randomness).

To generate a ticket for a given epoch n:

LastTicket = MinTicketValueAtEpoch(n-1)
newRandomness = VRF_miner(H(TicketProdDST || index || Serialization(pastTicket, minerActorAddress)))

The VRF’s deterministic output adds entropy to the ticket chain, limiting a miner’s ability to alter one block to influence a future ticket (given a miner does not know who will win a given round in advance).

We use the VRF from Verifiable Random Function for ticket generation in EC (see the PrepareNewTicket method below).

package storage_mining

// import sectoridx "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index"
// import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import (
	filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
	filproofs "github.com/filecoin-project/specs/libraries/filcrypto/filproofs"
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	libp2p "github.com/filecoin-project/specs/libraries/libp2p"
	spc "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
	sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
	node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	stateTree "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	util "github.com/filecoin-project/specs/util"
)

type Serialization = util.Serialization

func (sms *StorageMiningSubsystem_I) CreateMiner(
	ownerAddr addr.Address,
	workerAddr addr.Address,
	sectorSize util.UInt,
	peerId libp2p.PeerID,
) addr.Address {
	// ownerAddr := sms.generateOwnerAddress(workerPubKey)
	// var pledgeAmt actor.TokenAmount TODO: unclear how to pass the amount/pay
	// TODO compute PledgeCollateral for 0 bytes
	// return sms.StoragePowerActor().CreateStorageMiner(ownerAddr, workerPubKey, sectorSize, peerId)
	// TODO: access this from runtime
	// return sms.StoragePowerActor().CreateStorageMiner(ownerAddr, workerAddr, peerId)
	var minerAddr addr.Address
	return minerAddr
}

func (sms *StorageMiningSubsystem_I) HandleStorageDeal(deal deal.StorageDeal) {
	sms.SectorIndex().AddNewDeal(deal)
	// stagedDealResponse := sms.SectorIndex().AddNewDeal(deal)
	// TODO: way within a node to notify different components
	// markeet.StorageProvider().NotifyStorageDealStaged(&storage_provider.StorageDealStagedNotification_I{
	// 	Deal_:     deal,
	// 	SectorID_: stagedDealResponse.SectorID(),
	// })
}

func (sms *StorageMiningSubsystem_I) _generateOwnerAddress(workerPubKey filcrypto.PublicKey) addr.Address {
	panic("TODO")
}

func (sms *StorageMiningSubsystem_I) CommitSectorError() deal.StorageDeal {
	panic("TODO")
}

// triggered by new block reception and tipset assembly
func (sms *StorageMiningSubsystem_I) OnNewBestChain() {
	sms._tryLeaderElection()
}

// triggered by wall clock
func (sms *StorageMiningSubsystem_I) OnNewRound() {
	sms._tryLeaderElection()
}

func (sms *StorageMiningSubsystem_I) _tryLeaderElection() {

	// Randomness for ElectionPoSt
	randomnessK := sms._consensus().GetPoStChallengeRand(sms._blockchain().BestChain(), sms._blockchain().LatestEpoch())

	input := sms._preparePoStChallengeSeed(randomnessK, sms._keyStore().MinerAddress())
	postRandomness := sms._keyStore().WorkerKey().Impl().Generate(input).Output()

	// TODO: add how sectors are actually stored in the SMS proving set
	util.TODO()
	provingSet := make([]sector.SectorID, 0)

	candidates := sms.StorageProving().Impl().GenerateElectionPoStCandidates(postRandomness, provingSet)

	if len(candidates) <= 0 {
		return // fail to generate post candidates
	}

	// TODO Fix
	util.TODO()
	var currState stateTree.StateTree
	winningCandidates := make([]sector.PoStCandidate, 0)
	st := sms._getStorageMinerActorState(currState, sms._keyStore().MinerAddress())

	numMinerSectors := uint64(len(st.SectorTable().Impl().ActiveSectors_.SectorsOn()))
	for _, candidate := range candidates {
		sectorNum := candidate.SectorID().Number()
		sectorPower, ok := st._getSectorPower(sectorNum)
		if !ok {
			// panic(err)
			return
		}
		if sms._consensus().IsWinningPartialTicket(currState, candidate.PartialTicket(), sectorPower, numMinerSectors) {
			winningCandidates = append(winningCandidates, candidate)
		}
	}

	if len(winningCandidates) <= 0 {
		return
	}

	// Randomness for ticket generation in block production
	randomness1 := sms._consensus().GetTicketProductionRand(sms._blockchain().BestChain(), sms._blockchain().LatestEpoch())
	newTicket := sms.PrepareNewTicket(randomness1, sms._keyStore().MinerAddress())

	postProof := sms.StorageProving().Impl().CreateElectionPoStProof(postRandomness, winningCandidates)
	chainHead := sms._blockchain().BestChain().HeadTipset()

	sms._blockProducer().GenerateBlock(postProof, winningCandidates, newTicket, chainHead, sms._keyStore().MinerAddress())
}

func (sms *StorageMiningSubsystem_I) _preparePoStChallengeSeed(randomness util.Randomness, minerAddr addr.Address) util.Randomness {

	randInput := Serialize_PoStChallengeSeedInput(&PoStChallengeSeedInput_I{
		ticket_:    randomness,
		minerAddr_: minerAddr,
	})
	input := filcrypto.DomainSeparationTag_PoSt.DeriveRand(randInput)
	return input
}

func (sms *StorageMiningSubsystem_I) PrepareNewTicket(randomness util.Randomness, minerActorAddr addr.Address) block.Ticket {
	// run it through the VRF and get deterministic output

	// take the VRFResult of that ticket as input, specifying the personalization (see data structures)
	// append the miner actor address in order to prevent miners with the same
	// worker keys from generating the same randomness (given the VRF)
	randInput := block.Serialize_TicketProductionSeedInput(&block.TicketProductionSeedInput_I{
		PastTicket_: randomness,
		MinerAddr_:  minerActorAddr,
	})
	input := filcrypto.DomainSeparationTag_TicketProduction.DeriveRand(randInput)

	// run through VRF
	vrfRes := sms._keyStore().WorkerKey().Impl().Generate(input)

	newTicket := &block.Ticket_I{
		VRFResult_: vrfRes,
		Output_:    vrfRes.Output(),
	}

	return newTicket
}

// TODO: fix linking here
var node node_base.FilecoinNode

func (sms *StorageMiningSubsystem_I) _getStorageMinerActorState(stateTree stateTree.StateTree, minerAddr addr.Address) StorageMinerActorState {
	actorState, ok := stateTree.GetActor(minerAddr)
	util.Assert(ok)
	substateCID := actorState.State()

	substate, err := node.LocalGraph().Get(ipld.CID(substateCID))
	if err != nil {
		panic("TODO")
	}
	// TODO fix conversion to bytes
	panic(substate)
	var serializedSubstate Serialization
	st, err := Deserialize_StorageMinerActorState(serializedSubstate)

	if err != nil {
		panic("Deserialization error")
	}
	return st
}

func (sms *StorageMiningSubsystem_I) _getStoragePowerActorState(stateTree stateTree.StateTree) spc.StoragePowerActorState {
	powerAddr := addr.StoragePowerActorAddr
	actorState, ok := stateTree.GetActor(powerAddr)
	util.Assert(ok)
	substateCID := actorState.State()

	substate, err := node.LocalGraph().Get(ipld.CID(substateCID))
	if err != nil {
		panic("TODO")
	}

	// TODO fix conversion to bytes
	panic(substate)
	var serializedSubstate util.Serialization
	st, err := spc.Deserialize_StoragePowerActorState(serializedSubstate)

	if err != nil {
		panic("Deserialization error")
	}
	return st
}

func (sms *StorageMiningSubsystem_I) GetWorkerKeyByMinerAddress(minerAddr addr.Address) filcrypto.VRFPublicKey {
	panic("TODO")
}

func (sms *StorageMiningSubsystem_I) VerifyElectionPoSt(header block.BlockHeader, onChainInfo sector.OnChainPoStVerifyInfo) bool {

	sma := sms._getStorageMinerActorState(header.ParentState(), header.Miner())
	spa := sms._getStoragePowerActorState(header.ParentState())

	// 1. Check that the miner in question is currently allowed to run election
	// Note that this is two checks, namely:
	// On SMA --> can the miner be elected per electionPoSt rules?
	// On SPA --> Does the miner's power meet the consensus minimum requirement?
	// we could bundle into a single call here for convenience
	if !sma._canBeElected(header.Epoch()) {
		return false
	}

	pow, err := sma._getActivePower()
	if err != nil {
		// TODO: better error handling
		return false
	}

	if !spa.ActivePowerMeetsConsensusMinimum(pow) {
		return false
	}

	// 2. Verify appropriate randomness
	// TODO: fix away from BestChain()... every block should track its own chain up to its own production.
	randomness := sms._consensus().GetPoStChallengeRand(sms._blockchain().BestChain(), header.Epoch())
	postRandomnessInput := sector.PoStRandomness(sms._preparePoStChallengeSeed(randomness, header.Miner()))

	postRand := &filcrypto.VRFResult_I{
		Output_: onChainInfo.Randomness(),
	}

	if !postRand.Verify(postRandomnessInput, sms.GetWorkerKeyByMinerAddress(header.Miner())) {
		return false
	}

	// A proof must be a valid snark proof with the correct public inputs
	// 3. Get public inputs
	info := sma.Info()
	sectorSize := info.SectorSize()

	postCfg := sector.PoStCfg_I{
		Type_:        sector.PoStType_ElectionPoSt,
		SectorSize_:  sectorSize,
		WindowCount_: info.WindowCount(),
		Partitions_:  info.ElectionPoStPartitions(),
	}

	pvInfo := sector.PoStVerifyInfo_I{
		OnChain_:    onChainInfo,
		PoStCfg_:    &postCfg,
		Randomness_: onChainInfo.Randomness(),
	}

	sdr := filproofs.WinSDRParams(&filproofs.SDRCfg_I{ElectionPoStCfg_: &postCfg})

	// 4. Verify the PoSt Proof
	isPoStVerified := sdr.VerifyElectionPoSt(&pvInfo)
	return isPoStVerified
}

func (sms *StorageMiningSubsystem_I) VerifySurprisePoSt(header block.BlockHeader, onChainInfo sector.OnChainPoStVerifyInfo, posterAddr addr.Address) bool {

	st := sms._getStorageMinerActorState(header.ParentState(), header.Miner())

	// 1. Check that the miner in question is currently being challenged
	if !st._isChallenged() {
		// TODO: determine proper error here and error-handling machinery
		// rt.Abort("cannot SubmitSurprisePoSt when not challenged")
		return false
	}

	// 2. Check that the challenge has not expired
	// Check that miner can still submit (i.e. that the challenge window has not passed)
	// This will prevent miner from submitting a Surprise PoSt past the challenge period
	if st._challengeHasExpired(header.Epoch()) {
		return false
	}

	// A proof must be a valid snark proof with the correct public inputs

	// 3. Verify appropriate randomness
	randomnessEpoch := st.ChallengeStatus().LastChallengeEpoch()
	// TODO: fix away from BestChain()... every block should track its own chain up to its own production.
	randomness := sms._consensus().GetPoStChallengeRand(sms._blockchain().BestChain(), randomnessEpoch)
	postRandomnessInput := sms._preparePoStChallengeSeed(randomness, posterAddr)

	postRand := &filcrypto.VRFResult_I{
		Output_: onChainInfo.Randomness(),
	}

	if !postRand.Verify(postRandomnessInput, sms.GetWorkerKeyByMinerAddress(posterAddr)) {
		return false
	}

	// 4. Get public inputs
	info := st.Info()
	sectorSize := info.SectorSize()

	postCfg := sector.PoStCfg_I{
		Type_:        sector.PoStType_SurprisePoSt,
		SectorSize_:  sectorSize,
		WindowCount_: info.WindowCount(),
		Partitions_:  info.SurprisePoStPartitions(),
	}

	pvInfo := sector.PoStVerifyInfo_I{
		OnChain_:    onChainInfo,
		PoStCfg_:    &postCfg,
		Randomness_: onChainInfo.Randomness(),
	}

	sdr := filproofs.WinSDRParams(&filproofs.SDRCfg_I{SurprisePoStCfg_: &postCfg})

	// 5. Verify the PoSt Proof
	isPoStVerified := sdr.VerifySurprisePoSt(&pvInfo)
	return isPoStVerified
}

func (sms *StorageMiningSubsystem_I) VerifyElection(header block.BlockHeader, onChainInfo sector.OnChainPoStVerifyInfo) bool {
	st := sms._getStorageMinerActorState(header.ParentState(), header.Miner())
	numMinerSectors := uint64(len(st.SectorTable().Impl().ActiveSectors_.SectorsOn()))

	for _, info := range onChainInfo.Candidates() {
		sectorNum := info.SectorID().Number()
		sectorPower, ok := st._getSectorPower(sectorNum)
		if !ok {
			// panic(err)
			return false
		}
		if !sms._consensus().IsWinningPartialTicket(header.ParentState(), info.PartialTicket(), sectorPower, numMinerSectors) {
			return false
		}
	}
	return true
}

// func (sms *StorageMiningSubsystem_I) submitPoStMessage(postSubmission poster.PoStSubmission) error {
// 	var workerAddress addr.Address
// 	var workerKeyPair filcrypto.SigKeyPair
// 	panic("TODO") // TODO: get worker address and key pair

// 	// TODO: is this just workerAddress, or is there a separation here
// 	// (worker is AccountActor, workerMiner is StorageMinerActor)?
// 	var workerMinerActorAddress addr.Address
// 	panic("TODO")

// 	var gasPrice msg.GasPrice
// 	var gasLimit msg.GasAmount
// 	panic("TODO") // TODO: determine gas price and limit

// 	var callSeqNum actor.CallSeqNum
// 	panic("TODO") // TODO: retrieve CallSeqNum from worker

// 	messageParams := actor.MethodParams([]actor.MethodParam{
// 		actor.MethodParam(poster.Serialize_PoStSubmission(postSubmission)),
// 	})

// 	unsignedMessage := msg.UnsignedMessage_Make(
// 		workerAddress,
// 		workerMinerActorAddress,
// 		Method_StorageMinerActor_SubmitPoSt,
// 		messageParams,
// 		callSeqNum,
// 		actor.TokenAmount(0),
// 		gasPrice,
// 		gasLimit,
// 	)

// 	signedMessage, err := msg.Sign(unsignedMessage, workerKeyPair)
// 	if err != nil {
// 		return err
// 	}

// 	err = sms.FilecoinNode().SubmitMessage(signedMessage)
// 	if err != nil {
// 		return err
// 	}

// 	return nil
// }
Ticket Validation

Each Ticket should be generated from the prior one in the ticket-chain and verified accordingly as shown in validateTicket below.

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import blockchain "github.com/filecoin-project/specs/systems/filecoin_blockchain"

type StoragePowerConsensusSubsystem struct {//(@mutable)
    ChooseTipsetToMine(tipsets [chain.Tipset]) [chain.Tipset]

    ec          ExpectedConsensus
    blockchain  blockchain.BlockchainSubsystem

    // called by BlockchainSubsystem during block reception
    ValidateBlock(block block.Block) error

    IsWinningPartialTicket(
        st                 st.StateTree
        partialTicket      sector.PartialTicket
        sectorUtilization  block.StoragePower
        numSectors         util.UVarint
    ) bool

    _getStoragePowerActorState(stateTree st.StateTree) StoragePowerActorState

    validateTicket(
        tix             block.Ticket
        pk              filcrypto.VRFPublicKey
        minerActorAddr  addr.Address
    ) bool

    computeChainWeight(tipset chain.Tipset) block.ChainWeight

    StoragePowerConsensusError() StoragePowerConsensusError

    // Randomness methods

    // called by StorageMiningSubsystem during block production
    GetTicketProductionRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness

    // called by StorageMiningSubsystem when sealing a sector
    GetSealRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness

    // called by StorageMiningSubsystem after sealing
    GetPoStChallengeRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness

    GetFinality()     block.ChainEpoch
    FinalizedEpoch()  block.ChainEpoch
}

type StoragePowerConsensusError struct {}
package storage_power_consensus

import (
	"math"

	filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
	sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
	node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	stateTree "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	util "github.com/filecoin-project/specs/util"
)

const FINALITY = 500

const (
	SPC_LOOKBACK_RANDOMNESS = 300      // this is EC.K maybe move it there. TODO
	SPC_LOOKBACK_TICKET     = 1        // we chain blocks together one after the other
	SPC_LOOKBACK_POST       = 1        // cheap to generate, should be set as close to current TS as possible
	SPC_LOOKBACK_SEAL       = FINALITY // should be set to finality
)

// Storage Power Consensus Subsystem

func (spc *StoragePowerConsensusSubsystem_I) ValidateBlock(block block.Block_I) error {
	panic("")
}

func (spc *StoragePowerConsensusSubsystem_I) validateTicket(ticket block.Ticket, pk filcrypto.VRFPublicKey, minerActorAddr addr.Address) bool {
	randomness1 := spc.GetTicketProductionRand(spc.blockchain().BestChain(), spc.blockchain().LatestEpoch())

	return ticket.Verify(randomness1, pk, minerActorAddr)
}

func (spc *StoragePowerConsensusSubsystem_I) ComputeChainWeight(tipset chain.Tipset) block.ChainWeight {
	return spc.ec().ComputeChainWeight(tipset)
}

func (spc *StoragePowerConsensusSubsystem_I) StoragePowerConsensusError(errMsg string) StoragePowerConsensusError {
	panic("TODO")
}

func (spc *StoragePowerConsensusSubsystem_I) IsWinningPartialTicket(stateTree stateTree.StateTree, partialTicket sector.PartialTicket, sectorUtilization block.StoragePower, numSectors util.UVarint) bool {

	// finalize the partial ticket
	challengeTicket := filcrypto.SHA256(partialTicket)

	st := spc._getStoragePowerActorState(stateTree)
	networkPower := st._getActivePower()

	// TODO: pull from constants
	EPOST_SAMPLE_RATE_NUM := util.UVarint(1)
	EPOST_SAMPLE_RATE_DENOM := util.UVarint(25)
	// divide in floating point: integer division of 1/25 would truncate to 0
	sectorsSampled := uint64(math.Ceil(float64(EPOST_SAMPLE_RATE_NUM) / float64(EPOST_SAMPLE_RATE_DENOM) * float64(numSectors)))

	return spc.ec().IsWinningChallengeTicket(challengeTicket, sectorUtilization, networkPower, sectorsSampled, numSectors)
}

// TODO: fix linking here
var node node_base.FilecoinNode

func (spc *StoragePowerConsensusSubsystem_I) _getStoragePowerActorState(stateTree stateTree.StateTree) StoragePowerActorState {
	powerAddr := addr.StoragePowerActorAddr
	actorState, ok := stateTree.GetActor(powerAddr)
	util.Assert(ok)
	substateCID := actorState.State()

	substate, err := node.LocalGraph().Get(ipld.CID(substateCID))
	if err != nil {
		panic("TODO")
	}

	// TODO fix conversion to bytes
	panic(substate)
	var serializedSubstate util.Serialization
	st, err := Deserialize_StoragePowerActorState(serializedSubstate)

	if err != nil {
		panic("Deserialization error")
	}
	return st
}

func (spc *StoragePowerConsensusSubsystem_I) GetTicketProductionRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness {
	return chain.RandomnessAtEpoch(epoch - SPC_LOOKBACK_TICKET)
}

func (spc *StoragePowerConsensusSubsystem_I) GetSealRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness {
	return chain.RandomnessAtEpoch(epoch - SPC_LOOKBACK_SEAL)
}

func (spc *StoragePowerConsensusSubsystem_I) GetPoStChallengeRand(chain chain.Chain, epoch block.ChainEpoch) util.Randomness {
	return chain.RandomnessAtEpoch(epoch - SPC_LOOKBACK_POST)
}

func (spc *StoragePowerConsensusSubsystem_I) GetFinality() block.ChainEpoch {
	panic("")
	// return FINALITY
}

func (spc *StoragePowerConsensusSubsystem_I) FinalizedEpoch() block.ChainEpoch {
	panic("")
	// currentEpoch := rt.HeadEpoch()
	// return currentEpoch - spc.GetFinality()
}

Repeated Leader Election attempts

If no miner is eligible to produce a block in a given round of EC, the block producer calls the storage power consensus subsystem to attempt another leader election: the nonce appended to the ticket drawn from the past is incremented, and the miner checks whether the resulting PartialTicket is a winning one, repeating as needed. Note that a miner may attempt to grind through tickets by incrementing the nonce repeatedly until they find a winning ticket. However, any block so generated for a future epoch will be rejected by other miners (with synchronized clocks) until that epoch’s appropriate time.
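The retry loop can be sketched in Go as follows. This is a minimal illustration, not spec API: `drawPartialTicket` stands in (via plain hashing) for the VRF-based ticket derivation, and `isWinning` is a toy difficulty predicate.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// drawPartialTicket derives a candidate partial ticket from a past ticket and
// a nonce; plain hashing stands in for the VRF derivation used in the spec.
func drawPartialTicket(pastTicket []byte, nonce uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, nonce)
	h := sha256.Sum256(append(pastTicket, buf...))
	return h[:]
}

// tryLeaderElection increments the nonce until a winning partial ticket is
// found or maxAttempts is exhausted.
func tryLeaderElection(pastTicket []byte, isWinning func([]byte) bool, maxAttempts uint64) (uint64, bool) {
	for nonce := uint64(0); nonce < maxAttempts; nonce++ {
		if isWinning(drawPartialTicket(pastTicket, nonce)) {
			return nonce, true
		}
	}
	return 0, false
}

func main() {
	// toy winning condition: first byte below a difficulty target
	isWinning := func(t []byte) bool { return t[0] < 16 }
	nonce, ok := tryLeaderElection([]byte("past-ticket"), isWinning, 1000)
	fmt.Println(nonce, ok) // a winning nonce is found with overwhelming probability
}
```

Because the past ticket is fixed for the round, only the nonce changes between attempts; this determinism is what makes the grinding caveat above possible.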

Minimum Miner Size

In order to secure Storage Power Consensus, the system defines a minimum miner size required to participate in consensus.

Specifically, miners must have either at least MIN_MINER_SIZE_STOR of power (i.e. storage power currently used in storage deals) or at least MIN_MINER_SIZE_PERC of the network’s active storage power to participate in leader election.

Miners smaller than this cannot mine blocks and earn block rewards in the network. Their power will not be counted as part of total network power. However, it is important to note that such miners can still have their power faulted and be penalized accordingly.

Accordingly, to bootstrap the network, the genesis block must include miners taking part in valid storage deals along with appropriate committed storage.

The MIN_MINER_SIZE_PERC condition will not be used in a network with more than MIN_MINER_SIZE_STOR/MIN_MINER_SIZE_PERC of power. It is nonetheless defined to ensure liveness in small networks (e.g. close to genesis or after large power drops). Simply put, a single miner can maintain network liveness in networks with less than MIN_MINER_SIZE_STOR/MIN_MINER_SIZE_PERC of active storage.

The below values are currently placeholders.

We currently set:

- MIN_MINER_SIZE_STOR = 100 << 40 Bytes (100 TiB)
- MIN_MINER_SIZE_PERC = .33
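A self-contained sketch of this eligibility check, taking MIN_MINER_SIZE_STOR as 100 TiB and treating MIN_MINER_SIZE_PERC as a whole-number percentage (33); both values are placeholders, as noted above:

```go
package main

import "fmt"

// Placeholder consensus-minimum parameters from the text.
const (
	MinMinerSizeStor = uint64(100) << 40 // 100 TiB in bytes
	MinMinerSizePerc = 33                // percent of the network's active power
)

// meetsConsensusMinimum reports whether a miner may take part in leader
// election: it must hold either the absolute minimum storage or the minimum
// fraction of the network's active power.
func meetsConsensusMinimum(minerPower, networkActivePower uint64) bool {
	if minerPower >= MinMinerSizeStor {
		return true
	}
	return minerPower*100 >= networkActivePower*MinMinerSizePerc
}

func main() {
	tib := uint64(1) << 40
	fmt.Println(meetsConsensusMinimum(100*tib, 10000*tib)) // → true: meets absolute minimum
	fmt.Println(meetsConsensusMinimum(1*tib, 2*tib))       // → true: holds 50% >= 33% of active power
	fmt.Println(meetsConsensusMinimum(1*tib, 10000*tib))   // → false: meets neither minimum
}
```

Comparing `minerPower*100` against `networkActivePower*MinMinerSizePerc` keeps the check in integer arithmetic and avoids floating-point fractions.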

Storage Power Actor

StoragePowerActor interface
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"

type PowerTableEntry struct {
    ActivePower    block.StoragePower
    InactivePower  block.StoragePower
}

type PowerReport struct {
    ActivePower    block.StoragePower  // set value
    InactivePower  block.StoragePower  // set value
}

// type PowerTableHAMT {actor.ActorID: PowerTableEntry}
type PowerTableHAMT {addr.Address: PowerTableEntry}  // TODO: convert address to ActorID

type StoragePowerActorState struct {
    PowerTable   PowerTableHAMT
    EscrowTable  actor.BalanceTableHAMT

    _slashPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount) actor.TokenAmount
    _lockPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount)
    _unlockPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount)
    _getPowerTotalForMiner(minerAddr addr.Address) (power block.StoragePower, ok bool)
    _getPledgeCollateralReq(power block.StoragePower) actor.TokenAmount
    _getPledgeCollateralReqForMiner(minerAddr addr.Address) actor.TokenAmount
    _selectMinersToSurprise(challengeCount int, randomness util.Randomness) [addr.Address]
    _shouldChallenge(rt Runtime, networkPower block.StoragePower) bool

    _safeGetPowerEntry(rt Runtime, minerID addr.Address) PowerTableEntry
    _getAffectedPledge(
        rt             Runtime
        minerID        addr.Address
        affectedPower  block.StoragePower
    ) actor.TokenAmount
    _getActivePower() block.StoragePower
    ActivePowerMeetsConsensusMinimum(minPower block.StoragePower) bool
}

type StoragePowerActorCode struct {
    AddBalance(rt Runtime)
    WithdrawBalance(rt Runtime, amount actor.TokenAmount)

    // called by StorageMiningSubsystem on miner creation
    CreateStorageMiner(
        // TODO: document differences in Addr, Key and ID across spec
        rt          Runtime
        ownerAddr   addr.Address
        workerAddr  addr.Address
        peerId      libp2p.PeerID  // TODO: will be removed likely (see: https://github.com/filecoin-project/specs/pull/555#pullrequestreview-300991681)
    ) addr.Address

    RemoveStorageMiner(rt Runtime, addr addr.Address)

    // PowerTable Operations
    GetTotalPower(rt Runtime) block.StoragePower
    GetSectorPower(rt Runtime) block.StoragePower

    EnsurePledgeCollateralSatisfied(rt Runtime)

    ProcessPowerReport(rt Runtime, report PowerReport)
    SlashPledgeForStorageFault(
        rt             Runtime
        affectedPower  block.StoragePower
        faultType      sector.StorageFaultType
    )

    ReportConsensusFault(
        // slasherAddr  addr.Address TODO: fromActor
        rt         Runtime
        faultType  ConsensusFaultType
        proof      [block.Block]
    )

    Surprise(rt Runtime, ticket block.Ticket) [addr.Address]

    PowerMeetsConsensusMinimum(rt Runtime, minerPower block.StoragePower)

    // this should call ReportConsensusFault, numSectors should be all sectors
    // ReportUncommittedPowerFault(cheaterAddr addr.Address, numSectors UVarint)
}
StoragePowerActorState implementation
package storage_power_consensus

import (
	filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	util "github.com/filecoin-project/specs/util"
)

func (st *StoragePowerActorState_I) ActivePowerMeetsConsensusMinimum(minerPower block.StoragePower) bool {
	totPower := st._getActivePower()

	// TODO: import from consts
	MIN_MINER_SIZE_STOR := block.StoragePower(0) // placeholder
	MIN_MINER_SIZE_PERC := 0                     // placeholder, expressed in percent

	// the miner is too small if it is below both the minimum size in bytes
	// and the minimum percentage of total active power
	if minerPower < MIN_MINER_SIZE_STOR && int(minerPower)*100 < int(totPower)*MIN_MINER_SIZE_PERC {
		return false
	}
	return true
}

func (st *StoragePowerActorState_I) _getActivePower() block.StoragePower {
	activePower := block.StoragePower(0)

	for _, miner := range st.PowerTable() {
		// only count miner power if they are larger than MIN_MINER_SIZE
		// TODO: ActivePowerMeetsConsensusMinimum itself calls _getActivePower;
		// this mutual recursion must be broken (e.g. by passing a precomputed
		// total into the minimum check) before this can run as written
		if st.ActivePowerMeetsConsensusMinimum(miner.ActivePower()) {
			activePower = activePower + miner.ActivePower()
		}
	}

	return activePower
}

func (st *StoragePowerActorState_I) _slashPledgeCollateral(
	rt Runtime, minerAddr addr.Address, slashAmountRequested actor.TokenAmount) actor.TokenAmount {

	if slashAmountRequested < 0 {
		panic("_slashPledgeCollateral: error: negative amount specified")
	}

	newTable, amountSlashed, ok := actor.BalanceTable_WithSubtractPreservingNonnegative(
		st.EscrowTable(), minerAddr, slashAmountRequested)
	// TODO: extra handling of not having enough pledge collateral to be slashed?
	if !ok {
		panic("_slashPledgeCollateral: error: miner address not found")
	}

	st.Impl().EscrowTable_ = newTable

	return amountSlashed
}

func (st *StoragePowerActorState_I) _getPledgeCollateralReq(power block.StoragePower) actor.TokenAmount {
	PARAM_FINISH()
	panic("")
}

func (st *StoragePowerActorState_I) _getPledgeCollateralReqForMiner(minerAddr addr.Address) actor.TokenAmount {
	minerPowerTotal, ok := st._getPowerTotalForMiner(minerAddr)
	if !ok {
		panic("Power entry not found for miner")
	}

	return st._getPledgeCollateralReq(minerPowerTotal)
}

func addrInArray(a addr.Address, list []addr.Address) bool {
	for _, b := range list {
		if b == a {
			return true
		}
	}
	return false
}

// _selectMinersToSurprise implements the PoSt-Surprise sampling algorithm
func (st *StoragePowerActorState_I) _selectMinersToSurprise(challengeCount int, randomness util.Randomness) []addr.Address {
	// this won't quite work -- a.PowerTable() is a HAMT keyed by actor address; it doesn't
	// support enumerating by int index. maybe we need that as an interface too,
	// or something similar to an iterator (or iterator over the keys)
	// or even a seeded random call directly in the HAMT: myhamt.GetRandomElement(seed []byte, idx int) using the ticket as a seed

	ptSize := len(st.PowerTable())
	allMiners := make([]addr.Address, len(st.PowerTable()))
	index := 0

	for address := range st.PowerTable() {
		allMiners[index] = address
		index++
	}

	selectedMiners := make([]addr.Address, 0)
	draw := 0
	for chall := 0; chall < challengeCount; chall++ {
		minerIndex := filcrypto.RandomInt(randomness, draw, ptSize)
		draw++
		potentialChallengee := allMiners[minerIndex]
		// skip dups, re-drawing with a fresh nonce each time; re-using the same
		// (randomness, chall) inputs would deterministically return the same
		// index and loop forever
		for addrInArray(potentialChallengee, selectedMiners) {
			minerIndex = filcrypto.RandomInt(randomness, draw, ptSize)
			draw++
			potentialChallengee = allMiners[minerIndex]
		}
		selectedMiners = append(selectedMiners, potentialChallengee)
	}

	return selectedMiners
}

func (st *StoragePowerActorState_I) _safeGetPowerEntry(rt Runtime, minerID addr.Address) PowerTableEntry {
	powerEntry, found := st.PowerTable()[minerID]

	if !found {
		rt.AbortStateMsg("sm._safeGetPowerEntry: miner not found in power table.")
	}

	return powerEntry
}

func (st *StoragePowerActorState_I) _getTotalPower() block.StoragePower {
	// TODO (optimization): cache this as a counter in the actor state,
	// and update it for relevant operations.

	totalPower := block.StoragePower(0)
	for _, minerEntry := range st.PowerTable() {
		totalPower = totalPower + minerEntry.ActivePower() + minerEntry.InactivePower()
	}
	return totalPower
}

func (st *StoragePowerActorState_I) _getPowerTotalForMiner(minerAddr addr.Address) (
	power block.StoragePower, ok bool) {

	IMPL_FINISH()
	panic("")
}

func (st *StoragePowerActorState_I) _getAffectedPledge(
	rt Runtime, minerAddr addr.Address, affectedPower block.StoragePower) actor.TokenAmount {

	// TODO: revisit this calculation
	minerPowerTotal, ok := st._getPowerTotalForMiner(minerAddr)
	Assert(ok)
	pledgeRequired := st._getPledgeCollateralReq(minerPowerTotal)
	affectedPledge := actor.TokenAmount(uint64(pledgeRequired) * uint64(affectedPower) / uint64(minerPowerTotal))

	return affectedPledge
}
StoragePowerActorCode implementation
package storage_power_consensus

import (
	"math"

	ipld "github.com/filecoin-project/specs/libraries/ipld"
	libp2p "github.com/filecoin-project/specs/libraries/libp2p"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
	util "github.com/filecoin-project/specs/util"
)

var Assert = util.Assert
var IMPL_FINISH = util.IMPL_FINISH
var PARAM_FINISH = util.PARAM_FINISH
var TODO = util.TODO

const (
	Method_StoragePowerActor_EpochTick = actor.MethodPlaceholder + iota
	Method_StoragePowerActor_ProcessPowerReport
	Method_StoragePowerActor_ProcessFaultReport
	Method_StoragePowerActor_SlashPledgeForStorageFault
	Method_StoragePowerActor_EnsurePledgeCollateralSatisfied
)

func _storageFaultSlashPledgePercent(faultType sector.StorageFaultType) int {
	PARAM_FINISH() // TODO: instantiate these placeholders
	panic("")

	// these are the scaling constants for percentage pledge collateral to slash
	// given a miner's affected power and its total power
	switch faultType {
	case sector.DeclaredFault:
		return 1 // placeholder
	case sector.DetectedFault:
		return 10 // placeholder
	case sector.TerminatedFault:
		return 100 // placeholder
	default:
		panic("Case not supported")
	}
}

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////
type InvocOutput = vmr.InvocOutput
type Runtime = vmr.Runtime
type Bytes = util.Bytes
type State = StoragePowerActorState

func (a *StoragePowerActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, State) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.AbortAPI("IPLD lookup error")
	}
	state := DeserializeState(stateBytes.As_Bytes())
	return h, state
}
func Release(rt Runtime, h vmr.ActorStateHandle, st State) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st State) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *StoragePowerActorState_I) CID() ipld.CID {
	panic("TODO")
}
func DeserializeState(x Bytes) State {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

func (a *StoragePowerActorCode_I) AddBalance(rt Runtime) {
	rt.ValidateImmediateCallerAcceptAnyOfType(actor.BuiltinActorID_Account)

	ownerAddr := rt.ImmediateCaller()
	msgValue := rt.ValueReceived()

	h, st := a.State(rt)
	newTable, ok := actor.BalanceTable_WithAdd(st.EscrowTable(), ownerAddr, msgValue)
	if !ok {
		rt.AbortStateMsg("spa.AddBalance: Escrow operation failed.")
	}
	st.Impl().EscrowTable_ = newTable
	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActorCode_I) WithdrawBalance(rt Runtime, amountRequested actor.TokenAmount) {
	rt.ValidateImmediateCallerAcceptAnyOfType(actor.BuiltinActorID_Account)

	if amountRequested < 0 {
		rt.AbortArgMsg("spa.WithdrawBalance: negative amount.")
	}

	minerAddr := rt.ImmediateCaller()

	var ownerAddr addr.Address
	TODO() // Determine owner address from miner

	h, st := a.State(rt)

	minerPowerTotal, ok := st._getPowerTotalForMiner(minerAddr)
	if !ok {
		rt.AbortArgMsg("spa.WithdrawBalance: Miner not found.")
	}

	minBalance := st._getPledgeCollateralReq(minerPowerTotal)
	newTable, amountExtracted, ok := actor.BalanceTable_WithExtractPartial(
		st.EscrowTable(), minerAddr, amountRequested, minBalance)
	if !ok {
		rt.AbortStateMsg("spa.WithdrawBalance: Escrow operation failed.")
	}
	st.Impl().EscrowTable_ = newTable

	UpdateRelease(rt, h, st)

	// send funds to miner
	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:    ownerAddr,
		Value_: amountExtracted,
	})
}

func (a *StoragePowerActorCode_I) CreateStorageMiner(
	rt Runtime, workerAddr addr.Address, peerId libp2p.PeerID) addr.Address {

	rt.ValidateImmediateCallerAcceptAnyOfType(actor.BuiltinActorID_Account)

	// ownerAddr := rt.ImmediateCaller()
	msgValue := rt.ValueReceived()

	var newMinerAddr addr.Address
	TODO() // TODO: call InitActor::Exec to create the StorageMinerActor
	panic("")

	// TODO: anything to check here?
	newMinerEntry := &PowerTableEntry_I{
		ActivePower_:   block.StoragePower(0),
		InactivePower_: block.StoragePower(0),
	}

	h, st := a.State(rt)
	newTable, ok := actor.BalanceTable_WithNewAddressEntry(st.EscrowTable(), newMinerAddr, msgValue)
	if !ok {
		panic("Internal error: newMinerAddr (result of InitActor::Exec) already exists in escrow table")
	}
	st.Impl().EscrowTable_ = newTable
	st.PowerTable()[newMinerAddr] = newMinerEntry
	UpdateRelease(rt, h, st)

	return newMinerAddr
}

func (a *StoragePowerActorCode_I) RemoveStorageMiner(rt Runtime) {
	rt.ValidateImmediateCallerAcceptAnyOfType(actor.BuiltinActorID_StorageMiner)

	minerAddr := rt.ImmediateCaller()

	h, st := a.State(rt)

	minerPowerTotal, ok := st._getPowerTotalForMiner(minerAddr)
	if !ok {
		rt.AbortArgMsg("spa.RemoveStorageMiner: miner entry not found")
	}
	if minerPowerTotal > 0 {
		// TODO: manually remove the power entries here (and update relevant counters),
		// instead of throwing a runtime error?
		rt.AbortStateMsg("power still remains.")

		TODO()
		// TODO: also fail if funds still remaining in escrow
	}

	delete(st.PowerTable(), minerAddr)

	newTable, ok := actor.BalanceTable_WithDeletedAddressEntry(st.EscrowTable(), minerAddr)
	if !ok {
		panic("Internal error: miner entry in escrow table does not exist")
	}
	st.Impl().EscrowTable_ = newTable

	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActorCode_I) EnsurePledgeCollateralSatisfied(rt Runtime) {
	rt.ValidateImmediateCallerAcceptAnyOfType(actor.BuiltinActorID_StorageMiner)

	TODO() // TODO: principal access control
	minerAddr := rt.ImmediateCaller()

	h, st := a.State(rt)
	pledgeReq := st._getPledgeCollateralReqForMiner(minerAddr)
	// check the escrow balance before releasing the state handle
	balanceSufficient, ok := actor.BalanceTable_IsEntrySufficient(st.EscrowTable(), minerAddr, pledgeReq)
	UpdateRelease(rt, h, st)

	Assert(ok)
	if !balanceSufficient {
		rt.AbortFundsMsg("exitcode.InsufficientPledgeCollateral")
	}

}

// slash pledge collateral for Declared, Detected and Terminated faults
func (a *StoragePowerActorCode_I) SlashPledgeForStorageFault(rt Runtime, affectedPower block.StoragePower, faultType sector.StorageFaultType) {

	minerID := rt.ImmediateCaller()

	h, st := a.State(rt)

	affectedPledge := st._getAffectedPledge(rt, minerID, affectedPower)

	TODO() // BigInt arithmetic
	amountToSlash := actor.TokenAmount(
		_storageFaultSlashPledgePercent(faultType) * int(affectedPledge) / 100)

	amountSlashed := st._slashPledgeCollateral(rt, minerID, amountToSlash)
	UpdateRelease(rt, h, st)

	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:    addr.BurntFundsActorAddr,
		Value_: amountSlashed,
	})

}

// @param PowerReport with ActivePower and InactivePower
// update miner power based on the power report
func (a *StoragePowerActorCode_I) ProcessPowerReport(rt Runtime, report PowerReport) {

	minerID := rt.ImmediateCaller()

	h, st := a.State(rt)

	powerEntry := st._safeGetPowerEntry(rt, minerID)
	powerEntry.Impl().ActivePower_ = report.ActivePower()
	powerEntry.Impl().InactivePower_ = report.InactivePower()
	st.Impl().PowerTable_[minerID] = powerEntry

	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActorCode_I) ReportConsensusFault(rt Runtime, slasherAddr addr.Address, faultType ConsensusFaultType, proof []block.Block) {
	panic("TODO")

	// Use EC's IsValidConsensusFault method to validate the proof
	// slash block miner's pledge collateral
	// reward slasher

	// include ReportUncommittedPowerFault(cheaterAddr addr.Address, numSectors util.UVarint) as case
	// Quite a bit more straightforward since only called by the cron actor (ie publicly verified)
	// slash cheater pledge collateral accordingly based on num sectors faulted

}

// Surprise is in the storage power actor because it is a singleton actor and surprise helps miners maintain power
// TODO: add Surprise to the cron actor
func (a *StoragePowerActorCode_I) Surprise(rt Runtime) {

	PROVING_PERIOD := 1 // placeholder (must be nonzero); defined in storage_mining, TODO: move constants somewhere else

	// sample the actor addresses
	h, st := a.State(rt)

	randomness := rt.Randomness(rt.CurrEpoch(), 0)
	// note: the divisor must be nonzero to avoid a division-by-zero here
	challengeCount := math.Ceil(float64(len(st.PowerTable())) / float64(PROVING_PERIOD))
	surprisedMiners := st._selectMinersToSurprise(int(challengeCount), randomness)

	UpdateRelease(rt, h, st)

	// now send the messages
	for _, addr := range surprisedMiners {
		// For each miner here send message
		panic(addr) // hack coz of import cycle
		// rt.SendPropagatingErrors(&vmr.InvocInput_I{
		// 	To_:     addr,
		// 	Method_: sms.Method_StorageMinerActor_NotifyOfSurprisePoStChallenge,
		// })
	}
}

func (a *StoragePowerActorCode_I) InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	panic("TODO")
}
The Power Table

The portion of blocks a given miner generates through leader election in EC (and so the block rewards they earn) is proportional to their Power Fraction over time. That is, a miner whose storage represents 1% of total storage on the network should mine 1% of blocks on expectation.

SPC provides a power table abstraction which tracks miner power (i.e. miner storage in relation to network storage) over time. The power table is updated for new sector commitments (incrementing miner power), when PoSts fail to be put on-chain (decrementing miner power) or for other storage and consensus faults.

An invariant of the storage power consensus subsystem is that all storage in the power table must be verified. That is, miners can only derive power from storage they have already proven to the network.

In order to achieve this, Filecoin delays updating power for new sector commitments until the first valid PoSt in the next proving period corresponding to that sector. (TODO: potentially delay this further, to ensure that any power cut goes undetected for at most as long as the shortest power delay on new sector commitments.)

For instance, say a miner X does the following:

- In epoch 100: commits 10 TB
- In epoch 110: publishes a PoSt for their storage
- In epoch 120: commits another 10 TB
- In epoch 135: publishes a new PoSt for their storage

Querying the power table for this miner at different rounds should yield (using the following shorthand as an illustration only):

- Power(X, 90) == 0
- Power(X, 100) == 0
- Power(X, 110) == 0
- Power(X, 111) == 10
- Power(X, 120) == 10
- Power(X, 135) == 10
- Power(X, 136) == 20
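The delayed accounting above can be reproduced with a small sketch (the `sectorCommit` and `powerAt` helpers are hypothetical, not spec API): power for a commitment only counts strictly after the epoch of its first valid PoSt.

```go
package main

import "fmt"

// sectorCommit records a sector commitment and the epoch of the first valid
// PoSt proving it; the committed power counts only strictly after that epoch.
type sectorCommit struct {
	power          uint64 // bytes committed
	firstPoStEpoch int64
}

// powerAt returns the miner's power as seen by the power table at the given
// epoch, under the delayed-accounting rule described above.
func powerAt(commits []sectorCommit, epoch int64) uint64 {
	total := uint64(0)
	for _, c := range commits {
		if epoch > c.firstPoStEpoch {
			total += c.power
		}
	}
	return total
}

func main() {
	tb := uint64(1) << 40
	// miner X: 10 TB committed at epoch 100 (first PoSt at 110),
	// another 10 TB at epoch 120 (first PoSt at 135)
	commits := []sectorCommit{{10 * tb, 110}, {10 * tb, 135}}
	for _, e := range []int64{90, 100, 110, 111, 120, 135, 136} {
		fmt.Printf("Power(X, %d) == %d TB\n", e, powerAt(commits, e)/tb)
	}
	// prints 0, 0, 0, 10, 10, 10, 20 for the queried epochs, matching the text
}
```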

Conversely, storage faults only lead to power loss once they are detected (up to one proving period after the fault) so miners will mine with no more power than they have used to store data over time.

Put another way, power accounting in the SPC is delayed between storage being proven or faulted, and power being updated in the power table (and so for leader election). This ensures fairness over time.

The Miner lifecycle in the power table should be roughly as follows:

- MinerRegistration: A new miner with an associated worker public key and address is registered on the power table by the storage mining subsystem, along with their associated sector size (there is only one per worker).
- UpdatePower: These power increments and decrements are called by various storage actors (and must thus be verified by every full node on the network). Specifically:
  - Power is incremented to account for a new SectorCommitment at the first PoSt past the first ProvingPeriod.
  - All power is decremented immediately after a missed PoSt.
  - Power is decremented immediately after faults are declared, proportional to the faulty sector size.
  - Power is incremented after a PoSt recovering from a fault.
  - Power is definitively removed from the power table past the sector failure timeout (see Faults).

To summarize, only sectors in the Active state command power. A sector becomes Active at its first PoSt from the Committed and Recovering states. Power is immediately decremented when an Active sector enters the Failing state (through DeclareFaults or Cron) and when an Active sector expires.
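The lifecycle above can be condensed into a small state sketch. This is a simplified, hypothetical illustration (the real transition logic lives in the storage mining subsystem); the key invariant shown is that only Active sectors command power.

```go
package main

import "fmt"

type SectorState int

const (
	Committed SectorState = iota
	Active
	Failing
	Recovering
	Expired
)

// hasPower reports whether a sector in the given state commands power:
// per the lifecycle above, only Active sectors do.
func hasPower(s SectorState) bool { return s == Active }

// next sketches a few of the transitions described above; unlisted
// (state, event) pairs leave the state unchanged.
func next(s SectorState, event string) SectorState {
	switch {
	case s == Committed && event == "FirstPoSt":
		return Active // power is incremented here
	case s == Recovering && event == "FirstPoSt":
		return Active // power restored after recovery
	case s == Active && event == "DeclareFaults":
		return Failing // power decremented immediately
	case s == Active && event == "Expire":
		return Expired // power decremented immediately
	default:
		return s
	}
}

func main() {
	s := Committed
	s = next(s, "FirstPoSt")
	fmt.Println(hasPower(s)) // → true
	s = next(s, "DeclareFaults")
	fmt.Println(hasPower(s)) // → false
}
```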

Pledge Collateral

Consensus in Filecoin is secured in part by economic incentives enforced by Pledge Collateral.

The pledge collateral amount is committed based on the power pledged to the system (i.e. proportional to the number of sectors committed and the miner's sector size). It is a system-wide parameter and is committed to the StoragePowerActor. TODO: define parameter value. Pledge collateral submission methods take storage deals into account when determining the appropriate amount of collateral to be pledged. A miner can post pledge collateral to the StorageMinerActor at any time, up until sector commitment. A sector commitment without the requisite posted pledge collateral will be deemed invalid.
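Since the collateral is proportional to committed sectors and sector size, the required amount can be sketched as below. Note that `PLEDGE_PER_BYTE` is a placeholder constant: the spec explicitly leaves the system-wide parameter value as TODO, and the function name is illustrative.

```go
package main

// PLEDGE_PER_BYTE is a placeholder; the actual system-wide parameter
// value is marked TODO in the spec.
const PLEDGE_PER_BYTE = 1 // attoFIL per byte, hypothetical value

// PledgeCollateralReq sketches the pledge collateral required for a
// miner: proportional to the number of committed sectors times the
// miner's (single) sector size.
func PledgeCollateralReq(numSectors int64, sectorSize int64) int64 {
	return numSectors * sectorSize * PLEDGE_PER_BYTE
}
```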

Pledge Collateral will be slashed when Consensus Faults are reported to the StoragePowerActor’s ReportConsensusFault method or when the CronActor calls the StoragePowerActor’s ReportUncommittedPowerFault method.

Pledge Collateral is slashed for any fault affecting storage-power consensus. These include:

  • faults to expected consensus in particular (see Consensus Faults), which will be reported by a slasher to the StoragePowerActor in exchange for a reward.
  • faults affecting consensus power more generally, specifically uncommitted power faults (i.e. Faults), which will be reported by the CronActor automatically.

Token

FIL Wallet

Payments

Payment Channels

Payment Channel Actor

(You can see the old Payment Channel Actor here)

type Voucher struct {}
type VouchersApprovalResponse struct {}
type PieceInclusionProof struct {}

type PaymentChannelActor struct {
    RedeemVoucherWithApproval(voucher Voucher)
    RedeemVoucherWithPIP(voucher Voucher, pip PieceInclusionProof)
}

Multisig - Wallet requiring multiple signatures

Multisig Actor

(You can see the old Multisig Actor here)

import address "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"

type TxSeqNo UVarint
type NumRequired UVarint
type EpochDuration UVarint
type Epoch UVarint

type MultisigActor struct {
    signers         [address.Address]
    required        NumRequired
    nextTxId        TxSeqNo
    initialBalance  actor.TokenAmount
    startingBlock   Epoch
    unlockDuration  EpochDuration
    // transactions    {TxSeqNo: Transaction} // TODO Transaction type does not exist

    Construct(
        signers         [address.Address]
        required        NumRequired
        unlockDuration  EpochDuration
    )
    Propose(
        to      address.Address
        value   actor.TokenAmount
        method  string
        params  Bytes
    ) TxSeqNo
    Approve(txid TxSeqNo)
    Cancel(txid TxSeqNo)
    ClearCompleted()
    AddSigner(signer address.Address, increaseReq bool)
    RemoveSigner(signer address.Address, decreaseReq bool)
    SwapSigner(old address.Address, new address.Address)
    ChangeRequirement(req NumRequired)
}
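The Propose/Approve flow above can be sketched as follows. This is a minimal illustration of the approval-threshold rule only; the actual actor tracks pending transactions on-chain (the `Transaction` type is marked TODO above) and executes them via the VM. The `pendingTx` type and string addresses are hypothetical.

```go
package main

// pendingTx sketches a proposed multisig transaction awaiting approvals.
type pendingTx struct {
	approvals map[string]bool // signer address -> has approved
}

// Approve records a signer's approval and reports whether the
// transaction has reached the required number of approvals
// (the `required` field of MultisigActor).
func (tx *pendingTx) Approve(signer string, required int) bool {
	tx.approvals[signer] = true
	return len(tx.approvals) >= required
}
```

A proposal by one signer would implicitly count as that signer's approval; once `required` distinct signers have approved, the transaction can be executed.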

Storage Mining System - proving storage for producing blocks

The Storage Mining System is the part of the Filecoin Protocol that deals with storing clients’ data, producing proof artifacts that demonstrate correct storage behavior, and managing the work involved.

Storing data and producing proofs is a complex, highly optimizable process, with lots of tunable choices. Miners should explore the design space to arrive at something that (a) satisfies protocol and network-wide constraints, (b) satisfies clients’ requests and expectations (as expressed in Deals), and (c) gives them the most cost-effective operation. This part of the Filecoin Spec primarily describes in detail what MUST and SHOULD happen here, and leaves ample room for various optimizations for implementers, miners, and users to make. In some parts, we describe algorithms that could be replaced by other, more optimized versions, but in those cases it is important that the protocol constraints are satisfied. The protocol constraints are spelled out in clear detail (an unclear, unmentioned constraint is a “spec error”). It is up to implementers who deviate from the algorithms presented here to ensure their modifications satisfy those constraints, especially those relating to protocol security.

Storage Miner

Filecoin Storage Mining Subsystem

The Filecoin Storage Mining Subsystem ensures a storage miner can effectively commit storage to the Filecoin protocol in order to both:

  • participate in the Filecoin Storage Market by taking on client data and participating in storage deals.
  • participate in Filecoin Storage Power Consensus, verifying and generating blocks to grow the Filecoin blockchain and earning block rewards and fees for doing so.

The above involves a number of steps for bringing storage online and maintaining it, such as:

  • Committing new storage (see Sealing and PoRep)
  • Continuously proving storage (see PoSt)
  • Declaring storage faults and recovering from them.
Storage States

When managing their storage sectors as part of Filecoin mining, storage providers will account for where in the mining cycle their sectors are. For instance, has a sector been committed? Does it need a new PoSt? Most of these operations happen as part of cycles of chain epochs called Proving Periods, each of which yields high confidence that every miner in the chain has proven their power (see Election PoSt).

Through most of this cycle, sectors will be part of the miner’s Proving Set. This is the set of sectors the miner is expected to generate proofs against. It includes:

  • Committed Sectors, which have an associated PoRep but have yet to generate a PoSt,
  • Active Sectors, which have successfully been proven and are used in Storage Power Consensus (SPC), and
  • Recovering Sectors, which were faulted and are poised to recover through a new PoSt.

Sectors in the Proving Set can be faulted and marked as Failing, either by the miner itself declaring a fault (Declared Faults) or through automatic fault detection by the network using on-chain data (Detected Faults). Note that storage in the Proving Set is used in SPC (specifically, in-deal storage from the Proving Set counts toward Storage Power). All storage is tracked by the system and affects a miner’s ability to serve as a storage provider in the Filecoin Storage Market.

A sector that is in the Failing state for three consecutive Proving Periods will be terminated (Terminated Faults) meaning its data will be deemed unrecoverable by PoSt and the sector will have to be SEALed once more (as part of PoRep).

Conversely, RecoverFaults() can be called at any time by the miner on a Failing sector to return it to the ProvingSet and attempt to prove once more that the data is being stored. For instance, an Active sector might move into the Failing state during a power outage (through a declared or detected fault). At the end of the outage, the miner may call RecoverFaults to transition the sector to Recovering before proving it once more and returning it to Active.
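The sector transitions described above can be sketched as a small state machine. This is illustrative only (the `nextState` function and its event strings are hypothetical); the authoritative transitions are implemented by `_updateFailSector`, `_updateActivateSector`, and `_updateClearSector` in the StorageMinerActorState implementation later in this section.

```go
package main

import "errors"

// SectorState enumerates the proving-cycle states discussed above.
type SectorState int

const (
	Committed SectorState = iota
	Active
	Failing
	Recovering
)

// nextState sketches the legal transitions: a declared or detected fault
// moves Active to Failing; RecoverFaults moves Failing to Recovering; a
// successful PoSt moves Committed or Recovering to Active.
func nextState(s SectorState, event string) (SectorState, error) {
	switch {
	case s == Active && event == "fault":
		return Failing, nil
	case s == Failing && event == "recoverFaults":
		return Recovering, nil
	case (s == Committed || s == Recovering) && event == "post":
		return Active, nil
	}
	return s, errors.New("invalid transition")
}
```

The power-outage scenario above is the path Active → (fault) → Failing → (recoverFaults) → Recovering → (post) → Active.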

We illustrate these states below.

import sectoridx "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index"
import spc "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import blockchain "github.com/filecoin-project/specs/systems/filecoin_blockchain"
import blockproducer "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block_producer"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import storage_proving "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving"
import key_store "github.com/filecoin-project/specs/systems/filecoin_nodes/key_store"
import node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import stateTree "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"

type StorageMiningSubsystem struct {
    FilecoinNode       node_base.FilecoinNode

    // TODO: constructor
    // InitStorageMiningSubsystem(node node_base.FilecoinNode) struct{}

    // Component subsystems
    // StorageProvider    storage_provider.StorageProvider
    StoragePowerActor  spc.StoragePowerActorCode
    MinerActor         StorageMinerActorCode
    SectorIndex        sectoridx.SectorIndexerSubsystem
    StorageProving     storage_proving.StorageProvingSubsystem

    // Need access to block producer in order to publish blocks
    _blockProducer     blockproducer.BlockProducer

    // Need access to SPC in order to mine a block
    _consensus         spc.StoragePowerConsensusSubsystem

    // Need access to the blockchain system in order to query for things in the chain
    _blockchain        blockchain.BlockchainSubsystem

    // Need access to the key store in order to generate tickets and election proofs
    _keyStore          key_store.KeyStore

    // TODO: why are these here? remove?
    StartMining()
    StopMining()

    // call by StorageMiningSubsystem itself to create miner
    CreateMiner(
        ownerPubKey   filcrypto.PublicKey
        workerPubKey  filcrypto.PublicKey
        pledgeAmt     actor.TokenAmount
    )

    // get miner key by address
    GetWorkerKeyByMinerAddress(minerAddr addr.Address) filcrypto.PublicKey

    // call by StorageMarket.StorageProvider at the start of a deal.
    // Triggers AddNewDeal on SectorIndexer
    // StorageDeal contains DealCID
    HandleStorageDeal(deal deal.StorageDeal)

    // call by StorageMinerActor when error in sealing
    CommitSectorError()

    // call by StorageMiningSubsystem itself in BlockProduction
    PrepareNewTicket(
        randomness  util.Randomness
        minerAddr   addr.Address
    ) block.Ticket

    _preparePoStChallengeSeed(randomness util.Randomness, minerAddr addr.Address) util.Randomness
    _getStorageMinerActorState(stateTree stateTree.StateTree, minerAddr addr.Address) StorageMinerActorState

    // call by BlockChain when a new block is produced
    OnNewBestChain()

    // call by clock during BlockProduction
    // TODO: define clock better
    OnNewRound()

    _tryLeaderElection()

    VerifyElectionPoSt(
        header       block.BlockHeader
        onChainInfo  sector.OnChainPoStVerifyInfo
    ) bool
    VerifySurprisePoSt(
        header       block.BlockHeader
        onChainInfo  sector.OnChainPoStVerifyInfo
    ) bool
    IsValidElection(onChainInfo sector.OnChainPoStVerifyInfo) bool
}

type PoStChallengeSeedInput struct {
    ticket     util.Randomness
    minerAddr  addr.Address
}
Sector in StorageMiner State Machine (new one)
Sector State (new one)
Sector State Legend (new one)
Sector in StorageMiner State Machine (both)
Sector State Machine (both)

Storage Mining Cycle

Block miners should constantly be performing Proofs of SpaceTime, and also checking if they have a winning ticket to propose a block at each height/in each round. Rounds are currently set to take around 30 seconds, in order to account for network propagation around the world. The details of both processes are defined here.

The Miner Actor

After successfully calling CreateStorageMiner, a miner actor will be created on-chain, and registered in the storage market. This miner, like all other Filecoin State Machine actors, has a fixed set of methods that can be used to interact with or control it.

Owner Worker distinction

The miner actor has two distinct ‘controller’ addresses. One is the worker, which is the address which will be responsible for doing all of the work, submitting proofs, committing new sectors, and all other day to day activities. The owner address is the address that created the miner, paid the collateral, and has block rewards paid out to it. The reason for the distinction is to allow different parties to fulfil the different roles. One example would be for the owner to be a multisig wallet, or a cold storage key, and the worker key to be a ‘hot wallet’ key.

Storage Mining Cycle

Storage miners must continually produce Proofs of SpaceTime over their storage to convince the network that they are actually storing the sectors that they have committed to. Each PoSt covers a miner’s entire storage.

Step 0: Registration

To become a miner, one must first register a new miner actor on-chain. This is done through the storage power actor’s CreateStorageMiner method. The call creates a new miner actor instance and returns its address.

The next step is to place one or more storage market asks on the market. This is done off-chain as part of storage market functions. A miner may create a single ask for their entire storage, or partition their storage in some way with multiple asks (at potentially different prices).

After that, they need to make deals with clients and begin filling up sectors with data. For more information on making deals, see the Storage Market.

When they have a full sector, they should seal it. This is done by invoking the Sector Sealer.

Changing Worker Addresses

Note that any change to worker keys after registration (TODO: spec how this works) must be appropriately delayed in relation to randomness lookback for SEALing data (see this issue).

Step 1: Commit

When the miner has completed their first seal, they should post it on-chain using the Storage Miner Actor’s ProveCommitSector function. If the miner had zero committed sectors prior to this call, this begins their proving period.

The proving period is a fixed amount of time in which the miner must submit a Proof of Space Time to the network.

During this period, the miner may also commit to new sectors, but they will not be included in proofs of space time until the next proving period starts. For example, if a miner currently PoSts for 10 sectors and commits to 20 more, the next PoSt they submit (i.e. the one they’re currently proving) will again be for 10 sectors; the subsequent one will be for 30.
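The snapshot semantics in the example above can be sketched as follows (the `miner` struct and `startProvingPeriod` name are hypothetical): the proving set is fixed at the start of each proving period, so mid-period commitments only join it at the next period boundary.

```go
package main

// miner sketches the bookkeeping behind the example above.
type miner struct {
	provingSet []int // sectors covered by the PoSt currently being proven
	committed  []int // all sectors committed so far
}

// startProvingPeriod snapshots the committed sectors into the proving
// set; anything committed after this point waits for the next period.
func (m *miner) startProvingPeriod() {
	m.provingSet = append([]int(nil), m.committed...)
}
```

With 10 committed sectors at the start of a period, committing 20 more mid-period leaves the current proving set at 10; the next call to `startProvingPeriod` grows it to 30, matching the example.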

TODO: sectors need to be globally unique. This can be done either by having the seal proof prove the sector is unique to this miner in some way, or by having a giant global map on-chain that is checked against on each submission. As the system moves towards sector aggregation, the latter option will become unworkable, so more thought needs to go into how that proof statement could work.

Step 2: Proving Storage (PoSt creation)
func ProveStorage(sectorSize BytesAmount, sectors []commR) PoStProof {
    // The challenge seed is drawn from the block POST_CHALLENGE_TIME
    // epochs before the end of the miner's proving period.
    challengeBlockHeight := miner.ProvingPeriodEnd - POST_CHALLENGE_TIME

    // Faults to be used are the currentFaultSet for the miner.
    faults := miner.currentFaultSet
    seed := GetRandFromBlock(challengeBlockHeight)
    return GeneratePoSt(sectorSize, sectors, seed, faults)
}

Note: See ‘Proof of Space Time’ for more details.

The proving set remains consistent during the proving period. Any sectors added in the meantime will be included in the next proving set, at the beginning of the next proving period.

Step 3: PoSt Submission

When the miner has completed their PoSt, they must submit it to the network by calling SubmitPoSt. There are two different times that this could be done.

  1. Standard Submission: A standard submission is one that makes it on-chain before the end of the proving period. The length of time it takes to compute the PoSt is set such that there is a grace period between then and the actual end of the proving period, so that the effects of network congestion on typical miner actions are minimized.
  2. Penalized Submission: A penalized submission is one that makes it on-chain after the end of the proving period, but before the generation attack threshold. These submissions count as valid PoSt submissions, but the miner must pay a penalty for their late submission. (See ‘Faults’ for more information)
    • Note: In this case, the next PoSt should still be started at the beginning of the proving period, even if the current one is not yet complete. Miners must submit one PoSt per proving period.

Along with the PoSt submission, miners may also submit a set of sectors that they wish to remove from their proving set. This is done by selecting the sectors in the ‘done’ bitfield passed to SubmitPoSt.

Stop Mining

In order to stop mining, a miner must complete all of its storage contracts and remove them from its proving set during a PoSt submission. The miner may then call DePledge() to retrieve its collateral. DePledge must be called twice: once to start the cooldown, and once again after the cooldown to reclaim the funds. The cooldown period allows clients whose files have been dropped by a miner to slash the miner before it gets its money back and gets away with it.
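The two-phase DePledge call can be sketched as below. The `pledge` type, the epoch arithmetic, and the `DEPLEDGE_COOLDOWN` value are all illustrative assumptions; the spec does not fix the cooldown length here.

```go
package main

import "errors"

// DEPLEDGE_COOLDOWN is a hypothetical cooldown length in epochs.
const DEPLEDGE_COOLDOWN = 100

// pledge sketches a miner's pledged collateral.
type pledge struct {
	amount          int64
	cooldownStartAt int64 // 0 means no depledge in progress
}

// DePledge implements the two-phase withdrawal: the first call starts
// the cooldown; a later call, once the cooldown has elapsed, releases
// the collateral. Clients can slash the miner during the cooldown.
func (p *pledge) DePledge(currEpoch int64) (int64, error) {
	if p.cooldownStartAt == 0 {
		p.cooldownStartAt = currEpoch // first call: start cooldown
		return 0, nil
	}
	if currEpoch < p.cooldownStartAt+DEPLEDGE_COOLDOWN {
		return 0, errors.New("cooldown not elapsed")
	}
	amt := p.amount // second call: release funds
	p.amount = 0
	return amt, nil
}
```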

Faults

Faults are described in the faults document.

On Being Slashed (WIP, needs discussion)

If a miner is slashed for failing to submit their PoSt on time, they currently lose all their pledge collateral. They do not necessarily lose their storage collateral. Storage collateral is lost when a miner’s clients slash them for no longer having the data. Missing a PoSt does not necessarily imply that a miner no longer has the data. There should be an additional timeout here where the miner can submit a PoSt, along with ‘refilling’ their pledge collateral. If a miner does this, they can continue mining, their mining power will be reinstated, and clients can be assured that their data is still there.

TODO: disambiguate the two collaterals across the entire spec

Review Discussion Note: Taking all of a miner’s collateral for going over the deadline for PoSt submission is really, really painful, and is likely to dissuade people from even mining Filecoin in the first place (if my internet going out could cause me to lose a very large amount of money, that leads to some pretty hard decisions around profitability). One potential strategy could be to only penalize miners for the number of sectors they could have generated in that timeframe.

Future Work

There are many ideas for improving upon the storage miner; here are some that may be implemented in the future.

  • Sector Resealing: Miners should be able to ’re-seal’ sectors, to allow them to take a set of sectors with mostly expired pieces, and combine the not-yet-expired pieces into a single (or multiple) sectors.
  • Sector Transfer: Miners should be able to re-delegate the responsibility of storing data to another miner. This is tricky for many reasons, and will not be implemented in the initial release of Filecoin, but could provide interesting capabilities down the road.

Storage Miner Actor

(You can see the old Storage Miner Actor here)

StorageMinerActor interface
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import sealing "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import address "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import stateTree "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import spc "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"

type SectorExpirationQueueItem struct {
    SectorNumber  sector.SectorNumber
    Expiration    block.ChainEpoch
}

type SectorExpirationQueue struct {
    Add(i SectorExpirationQueueItem)
    Pop() SectorExpirationQueueItem
    Peek() SectorExpirationQueueItem
    Remove(n sector.SectorNumber)
}

type SectorStateTable struct {
    SectorSize         sector.SectorSize
    ActiveSectors      sector.CompactSectorSet
    CommittedSectors   sector.CompactSectorSet
    RecoveringSectors  sector.CompactSectorSet
    FailingSectors     sector.CompactSectorSet

    // transient state that gets reset on every constructPowerReport
    TerminatedFaults   sector.CompactSectorSet
}

type SectorOnChainInfo struct {
    SealCommitment  sector.SealCommitment
    State           SectorState
    Power           block.StoragePower
    Activation      block.ChainEpoch
    Expiration      block.ChainEpoch
}

type ChallengeStatus struct {
    LastChallengeEpoch     block.ChainEpoch  // get updated by NotifyOfPoStSurpriseChallenge and SubmitElectionPoSt
    _lastPoStFailureEpoch  block.ChainEpoch  // get updated upon _onMissedSurprisePoSt
    _lastPoStSuccessEpoch  block.ChainEpoch  // get updated by successful Election or Surprise PoSt submission

    _getStoragePowerActorState(stateTree stateTree.StateTree) spc.StoragePowerActorState

    OnNewChallenge(currEpoch block.ChainEpoch)
    LastPoStResponseEpoch() block.ChainEpoch
    LastPoStSuccessEpoch() block.ChainEpoch
    OnPoStSuccess(currEpoch block.ChainEpoch)
    OnPoStFailure(currEpoch block.ChainEpoch)

    IsChallenged() bool  // only True when proving SurprisePoSt (implicit because ElectionPoSt completes within a block)
    ChallengeHasExpired(currEpoch block.ChainEpoch) bool
    CanBeElected(currEpoch block.ChainEpoch) bool
    ShouldChallenge(
        currEpoch block.ChainEpoch
    ) bool
}

type PreCommittedSector struct {
    Info           sealing.SectorPreCommitInfo
    ReceivedEpoch  block.ChainEpoch
}

type PreCommittedSectorsAMT {sector.SectorNumber: PreCommittedSector}
type StagedCommittedSectorAMT {sector.SectorNumber: SectorOnChainInfo}
type SectorsAMT {sector.SectorNumber: SectorOnChainInfo}

// Balance of a StorageMinerActor should equal exactly the sum of PreCommit deposits that are not yet returned or burned
type StorageMinerActorState struct {
    PreCommittedSectors     PreCommittedSectorsAMT
    StagedCommittedSectors  StagedCommittedSectorAMT
    Sectors                 SectorsAMT
    ProvingSet              sector.CompactSectorSet

    SectorTable             SectorStateTable
    SectorExpirationQueue
    ChallengeStatus

    // contains mostly static info about this miner
    Info                    &MinerInfo

    // No DeclareFaults and CommitSector can happen when SM is in the isChallenged state
    _isChallenged()         bool
    _canBeElected(currEpoch block.ChainEpoch) bool
    _challengeHasExpired(currEpoch block.ChainEpoch) bool
    _shouldChallenge(
        currEpoch block.ChainEpoch
    ) bool
    _verifySeal(rt Runtime, onChainInfo sector.OnChainSealVerifyInfo) bool
    _assertSectorDidNotExist(rt Runtime, sectorNo sector.SectorNumber)

    _processStagedCommittedSectors(rt Runtime)
    _updateFailSector(
        rt                   Runtime
        sectorNo             sector.SectorNumber
        incrementFaultCount  bool
    )
    _updateExpireSectors(rt Runtime) [sector.SectorNumber]
    _updateClearSector(rt Runtime, sectorNo sector.SectorNumber)
    _updateActivateSector(rt Runtime, sectorNo sector.SectorNumber)

    _getActivePower()    (block.StoragePower, error)
    _getInactivePower()  (block.StoragePower, error)
    _getPreCommitDepositReq(rt Runtime) actor.TokenAmount

    _getSectorOnChainInfo(sectorNo sector.SectorNumber) (info SectorOnChainInfo, ok bool)
    _getSectorPower(sectorNo sector.SectorNumber) (power block.StoragePower, ok bool)
    _getSectorDealIDs(sectorNo sector.SectorNumber) (dealIDs [deal.DealID], ok bool)
}

type StorageMinerActorCode struct {
    NotifyOfSurprisePoStChallenge(rt Runtime)

    PreCommitSector(rt Runtime, info sector.SectorPreCommitInfo)  // TODO: check with Magik on sizes
    ProveCommitSector(rt Runtime, info sector.SectorProveCommitInfo)

    ProcessVerifiedSurprisePoSt(rt Runtime)
    ProcessVerifiedElectionPoSt(rt Runtime)

    _checkSurprisePoStSubmissionHappened(rt Runtime)

    DeclareFaults(rt Runtime, failingSet sector.CompactSectorSet)
    RecoverFaults(rt Runtime, recoveringSet sector.CompactSectorSet)

    _isChallenged(rt Runtime) bool
    _isSealVerificationCorrect(rt Runtime, onChainInfo sector.OnChainSealVerifyInfo) bool
    _onMissedSurprisePoSt(rt Runtime)
    _onSuccessfulPoSt(rt Runtime)

    _slashCollateralForStorageFaults(
        rt          Runtime
        declared    sector.CompactSectorSet  // diff value
        detected    sector.CompactSectorSet  // diff value
        terminated  sector.CompactSectorSet  // diff value
    )
    _slashDealsForStorageFault(
        rt             Runtime
        sectorNumbers  [sector.SectorNumber]
        faultType      sector.StorageFaultType
    )

    ClaimDealPaymentsForSector(
        rt        Runtime
        sectorNo  sector.SectorNumber
        lastPoSt  block.ChainEpoch
    )
    _submitPowerReport(rt Runtime, lastPoSt block.ChainEpoch)
    _expireSectors(rt Runtime)
    _expirePreCommittedSectors(rt Runtime)
    _ensurePledgeCollateralSatisfied(rt Runtime)
}

type MinerInfo struct {
    // Account that owns this miner.
    // - Income and returned collateral are paid to this address.
    // - This address is also allowed to change the worker address for the miner.
    Owner                   address.Address

    // Worker account for this miner.
    // This will be the key that is used to sign blocks created by this miner, and
    // sign messages sent on behalf of this miner to commit sectors, submit PoSts, and
    // other day to day miner activities.
    Worker                  address.Address

    // Libp2p identity that should be used when connecting to this miner.
    PeerId                  libp2p.PeerID

    // Amount of space in each sector committed to the network by this miner.
    SectorSize              sector.SectorSize
    WindowCount             UVarint
    SealPartitions          UVarint
    ElectionPoStPartitions  UVarint
    SurprisePoStPartitions  UVarint
}
StorageMinerActorState implementation
package storage_mining

import (
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
	sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	util "github.com/filecoin-project/specs/util"
)

var Assert = util.Assert

// Get the owner account address associated to a given miner actor.
func GetMinerOwnerAddress(tree st.StateTree, minerAddr addr.Address) (addr.Address, error) {
	panic("TODO")
}

// Get the owner account address associated to a given miner actor.
func GetMinerOwnerAddress_Assert(tree st.StateTree, a addr.Address) addr.Address {
	ret, err := GetMinerOwnerAddress(tree, a)
	Assert(err == nil)
	return ret
}

func (st *StorageMinerActorState_I) _isChallenged() bool {
	return st.ChallengeStatus().IsChallenged()
}

func (st *StorageMinerActorState_I) _canBeElected(epoch block.ChainEpoch) bool {
	return st.ChallengeStatus().CanBeElected(epoch)
}

func (st *StorageMinerActorState_I) _challengeHasExpired(epoch block.ChainEpoch) bool {
	return st.ChallengeStatus().ChallengeHasExpired(epoch)
}

func (st *StorageMinerActorState_I) _shouldChallenge(currEpoch block.ChainEpoch) bool {
	return st.ChallengeStatus().ShouldChallenge(currEpoch)
}

func (st *StorageMinerActorState_I) _processStagedCommittedSectors(rt Runtime) {
	for sectorNo, stagedInfo := range st.StagedCommittedSectors() {
		st.Sectors()[sectorNo] = stagedInfo
		st.Impl().ProvingSet_.Add(sectorNo)
		st.SectorTable().Impl().CommittedSectors_.Add(sectorNo)
	}

	// empty StagedCommittedSectors
	st.Impl().StagedCommittedSectors_ = make(map[sector.SectorNumber]SectorOnChainInfo)
}

func (st *StorageMinerActorState_I) _getActivePower(rt Runtime) (block.StoragePower, error) {
	activePower := block.StoragePower(0)

	for _, sectorNo := range st.SectorTable().Impl().ActiveSectors_.SectorsOn() {
		sectorPower, found := st._getSectorPower(sectorNo)
		if !found {
			panic("sector power not found for active sector")
		}
		activePower += sectorPower
	}

	return activePower, nil
}

func (st *StorageMinerActorState_I) _getInactivePower() (block.StoragePower, error) {
	var inactivePower = block.StoragePower(0)

	// iterate over sectorNo in CommittedSectors, RecoveringSectors, and FailingSectors
	inactiveProvingSet := st.SectorTable().Impl().CommittedSectors_.Extend(st.SectorTable().RecoveringSectors())
	inactiveSectorSet := inactiveProvingSet.Extend(st.SectorTable().FailingSectors())

	for _, sectorNo := range inactiveSectorSet.SectorsOn() {

		sectorPower, found := st._getSectorPower(sectorNo)
		if !found {
			panic("sector power not found for inactive sector")
		}
		inactivePower += sectorPower
	}

	return inactivePower, nil
}

// move Sector from Active/Failing
// into Cleared State which means deleting the Sector from state
// remove SectorNumber from all states on chain
// update SectorTable
func (st *StorageMinerActorState_I) _updateClearSector(rt Runtime, sectorNo sector.SectorNumber) {
	sectorState := st.Sectors()[sectorNo].State()
	switch sectorState.StateNumber {
	case SectorActiveSN:
		// expiration case
		st.SectorTable().Impl().ActiveSectors_.Remove(sectorNo)
	case SectorFailingSN:
		// expiration and termination cases
		st.SectorTable().Impl().FailingSectors_.Remove(sectorNo)
	default:
		// Committed and Recovering should not go to Cleared directly
		rt.AbortStateMsg("invalid state in clearSector")
	}

	delete(st.Sectors(), sectorNo)
	st.ProvingSet_.Remove(sectorNo)
	st.SectorExpirationQueue().Remove(sectorNo)
}

// move Sector from Committed/Recovering into Active State
// reset FaultCount to zero
// update SectorTable
func (st *StorageMinerActorState_I) _updateActivateSector(rt Runtime, sectorNo sector.SectorNumber) {
	sectorState := st.Sectors()[sectorNo].State()
	switch sectorState.StateNumber {
	case SectorCommittedSN:
		st.SectorTable().Impl().CommittedSectors_.Remove(sectorNo)
	case SectorRecoveringSN:
		st.SectorTable().Impl().RecoveringSectors_.Remove(sectorNo)
	default:
		rt.AbortStateMsg("sm._updateActivateSector: invalid state in activateSector")
	}

	st.Sectors()[sectorNo].Impl().State_ = SectorActive()
	st.SectorTable().Impl().ActiveSectors_.Add(sectorNo)
}

// failSector moves Sector from Active/Committed/Recovering into Failing State
// and increments FaultCount if asked to do so (DeclareFaults does not increment faultCount)
// and moves the Sector from Failing to Cleared if the increment makes faultCount exceed MAX_CONSECUTIVE_FAULTS
// update SectorTable
// remove from ProvingSet
func (st *StorageMinerActorState_I) _updateFailSector(rt Runtime, sectorNo sector.SectorNumber, increment bool) {
	newFaultCount := st.Sectors()[sectorNo].State().FaultCount

	if increment {
		newFaultCount += 1
	}

	state := st.Sectors()[sectorNo].State()
	switch state.StateNumber {
	case SectorActiveSN:
		// won't be terminated from Active
		st.SectorTable().Impl().ActiveSectors_.Remove(sectorNo)
		st.SectorTable().Impl().FailingSectors_.Add(sectorNo)
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorCommittedSN:
		st.SectorTable().Impl().CommittedSectors_.Remove(sectorNo)
		st.SectorTable().Impl().FailingSectors_.Add(sectorNo)
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorRecoveringSN:
		st.SectorTable().Impl().RecoveringSectors_.Remove(sectorNo)
		st.SectorTable().Impl().FailingSectors_.Add(sectorNo)
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorFailingSN:
		// no change to SectorTable but increase in FaultCount
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	default:
		rt.AbortStateMsg("Invalid sector state in CronAction")
	}

	if newFaultCount > MAX_CONSECUTIVE_FAULTS {
		// slashing is done at _slashCollateralForStorageFaults
		st._updateClearSector(rt, sectorNo)
		st.SectorTable().Impl().TerminatedFaults_.Add(sectorNo)
	}
}

func (st *StorageMinerActorState_I) _updateExpireSectors(rt Runtime) []sector.SectorNumber {
	currEpoch := rt.CurrEpoch()

	queue := st.SectorExpirationQueue()
	expiredSectorNos := make([]sector.SectorNumber, 0)

	for queue.Peek().Expiration() <= currEpoch {
		expiredSectorNo := queue.Pop().SectorNumber()

		state := st.Sectors()[expiredSectorNo].State()
		switch state.StateNumber {
		case SectorActiveSN:
			// Note: in order to verify if something was stored in the past, one must
			// scan the chain. SectorNumber can be re-used.
			st._updateClearSector(rt, expiredSectorNo)
			expiredSectorNos = append(expiredSectorNos, expiredSectorNo)
		case SectorFailingSN:
			// TODO: check if there is any fault that we should handle here

			// a failing sector expires, no change to FaultCount
			st._updateClearSector(rt, expiredSectorNo)
			expiredSectorNos = append(expiredSectorNos, expiredSectorNo)
		default:
			// Note: SectorCommittedSN, SectorRecoveringSN transition first to SectorFailingSN, then expire
			rt.AbortStateMsg("Invalid sector state in SectorExpirationQueue")
		}
	}

	return expiredSectorNos
}

func (st *StorageMinerActorState_I) _assertSectorDidNotExist(rt Runtime, sectorNo sector.SectorNumber) {
	_, found := st.Sectors()[sectorNo]
	if found {
		rt.AbortStateMsg("sm._assertSectorDidNotExist: sector already exists.")
	}
}

func (st *StorageMinerActorState_I) _getSectorOnChainInfo(sectorNo sector.SectorNumber) (info SectorOnChainInfo, ok bool) {
	sectorInfo, found := st.Sectors()[sectorNo]
	if !found {
		return nil, false
	}
	return sectorInfo, true
}

func (st *StorageMinerActorState_I) _getSectorPower(sectorNo sector.SectorNumber) (power block.StoragePower, ok bool) {
	sectorInfo, found := st._getSectorOnChainInfo(sectorNo)
	if !found {
		return block.StoragePower(0), false
	}
	return sectorInfo.Power(), true
}

func (st *StorageMinerActorState_I) _getSectorDealIDs(sectorNo sector.SectorNumber) (dealIDs []deal.DealID, ok bool) {
	sectorInfo, found := st._getSectorOnChainInfo(sectorNo)
	if !found {
		return make([]deal.DealID, 0), false
	}
	return sectorInfo.SealCommitment().DealIDs(), true
}

func (st *StorageMinerActorState_I) _getPreCommitDepositReq(rt Runtime) actor.TokenAmount {

	// TODO: move this to Construct
	minerInfo := st.Info()
	sectorSize := minerInfo.SectorSize()
	depositReq := actor.TokenAmount(uint64(PRECOMMIT_DEPOSIT_PER_BYTE) * uint64(sectorSize))

	return depositReq
}
StorageMinerActorCode implementation
package storage_mining

import (
	filproofs "github.com/filecoin-project/specs/libraries/filcrypto/filproofs"
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	spc "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
	market "github.com/filecoin-project/specs/systems/filecoin_markets/storage_market"
	sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
	exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
	sys "github.com/filecoin-project/specs/systems/filecoin_vm/sysactors"
	util "github.com/filecoin-project/specs/util"
)

const (
	Method_StorageMinerActor_ProcessVerifiedSurprisePoSt = actor.MethodPlaceholder + iota
	Method_StorageMinerActor_ProcessVerifiedElectionPoSt
	Method_StorageMinerActor_NotifyOfSurprisePoStChallenge
)

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////
type State = StorageMinerActorState
type Any = util.Any
type Bool = util.Bool
type Bytes = util.Bytes
type InvocOutput = vmr.InvocOutput
type Runtime = vmr.Runtime

var TODO = util.TODO

func (a *StorageMinerActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, State) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.AbortAPI("IPLD lookup error")
	}
	state := DeserializeState(stateBytes.As_Bytes())
	return h, state
}
func Release(rt Runtime, h vmr.ActorStateHandle, st State) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st State) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *StorageMinerActorState_I) CID() ipld.CID {
	panic("TODO")
}
func DeserializeState(x Bytes) State {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

func (a *StorageMinerActorCode_I) _isChallenged(rt Runtime) bool {
	h, st := a.State(rt)
	ret := st._isChallenged()
	Release(rt, h, st)
	return ret
}

func (a *StorageMinerActorCode_I) _challengeHasExpired(rt Runtime) bool {
	h, st := a.State(rt)
	ret := st._challengeHasExpired(rt.CurrEpoch())
	Release(rt, h, st)
	return ret
}

func (a *StorageMinerActorCode_I) _shouldChallenge(rt Runtime) bool {
	h, st := a.State(rt)
	ret := st._shouldChallenge(rt.CurrEpoch())
	Release(rt, h, st)
	return ret
}

////////////////////////////////////////////////////////////////////////////////
// Surprise PoSt
////////////////////////////////////////////////////////////////////////////////

// called by StoragePowerActor to notify StorageMiner of PoSt Challenge (triggered by Cron)
func (a *StorageMinerActorCode_I) NotifyOfSurprisePoStChallenge(rt Runtime) InvocOutput {
	rt.ValidateImmediateCallerIs(addr.StoragePowerActorAddr)

	// check that this is a valid challenge
	if !a._shouldChallenge(rt) {
		return rt.SuccessReturn() // silent return, dont re-challenge
	}

	a._expirePreCommittedSectors(rt)

	h, st := a.State(rt)
	// update challenge start epoch
	st.ChallengeStatus().Impl().OnNewChallenge(rt.CurrEpoch())
	UpdateRelease(rt, h, st)
	return rt.SuccessReturn()
}

// Called by the cron actor at every tick.
func (a *StorageMinerActorCode_I) OnCronTick(rt Runtime) InvocOutput {
	rt.ValidateImmediateCallerIs(addr.CronActorAddr)

	a._checkSurprisePoStSubmissionHappened(rt)

	return rt.SuccessReturn()

}

// If the miner fails to respond to a surprise PoSt,
// cron triggers reporting every sector as failing for the current proving period.
func (a *StorageMinerActorCode_I) _checkSurprisePoStSubmissionHappened(rt Runtime) InvocOutput {

	// we can return early if the miner has not yet been challenged
	if !a._isChallenged(rt) {
		// A miner exits a challenge either by submitting a successful PoSt
		// or when the missed challenge is detected by the CronActor.
		// Hence, not being challenged means we are good here.
		return rt.SuccessReturn()
	}

	if a._challengeHasExpired(rt) {
		// garbage collection - need to be called by cron once in a while
		a._expirePreCommittedSectors(rt)

		// the miner missed its surprise PoSt; report all sectors as failing
		a._onMissedSurprisePoSt(rt)

	}

	return rt.SuccessReturn()
}

// called by CheckSurprisePoSt above for miner who missed their post
func (a *StorageMinerActorCode_I) _onMissedSurprisePoSt(rt Runtime) {
	h, st := a.State(rt)

	failingSectorNumbers := getSectorNums(st.Sectors())
	for _, sectorNo := range failingSectorNumbers {
		st._updateFailSector(rt, sectorNo, true)
	}

	UpdateRelease(rt, h, st)

	a._expireSectors(rt)

	h, st = a.State(rt)

	newDetectedFaults := st.SectorTable().FailingSectors()
	newTerminatedFaults := st.SectorTable().TerminatedFaults()

	Release(rt, h, st)

	a._submitPowerReport(rt)

	// Note: NewDetectedFaults is now the sum of all
	// previously active, committed, and recovering sectors minus expired ones
	// and any previously Failing sectors that did not exceed MAX_CONSECUTIVE_FAULTS
	// Note: previously declared faults is now treated as part of detected faults
	a._slashCollateralForStorageFaults(
		rt,
		sector.CompactSectorSet(make([]byte, 0)), // NewDeclaredFaults
		newDetectedFaults,
		newTerminatedFaults,
	)

	// end of challenge
	// now that new power and faults are tracked move pointer of last challenge response up
	h, st = a.State(rt)
	st.ChallengeStatus().Impl().OnPoStFailure(rt.CurrEpoch())
	st._processStagedCommittedSectors(rt)
	UpdateRelease(rt, h, st)
}

func (a *StorageMinerActorCode_I) _expireSectors(rt Runtime) {

	h, st := a.State(rt)
	expiredSectorNos := st._updateExpireSectors(rt)

	// @param dealIDs
	expireSectorParams := make([]util.Serialization, 0)
	for _, sectorNo := range expiredSectorNos {
		dealIDs, ok := st._getSectorDealIDs(sectorNo)
		if !ok {
			panic("sm._expireSectors: deal IDs not found for expired sector")
		}
		for _, dealID := range dealIDs {
			expireSectorParams = append(expireSectorParams, deal.Serialize_DealID(dealID))
		}
	}

	UpdateRelease(rt, h, st)

	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StorageMarketActorAddr,
		Method_: market.Method_StorageMarketActor_ProcessDealExpiration,
		Params_: expireSectorParams,
	})
}

// construct PowerReport from SectorTable
func (a *StorageMinerActorCode_I) _submitPowerReport(rt Runtime) {
	h, st := a.State(rt)
	activePower, err := st._getActivePower()
	if err != nil {
		rt.AbortStateMsg(err.Error())
	}
	inactivePower, err := st._getInactivePower()
	if err != nil {
		rt.AbortStateMsg(err.Error())
	}

	// power report in processPowerReportParam
	_ = &spc.PowerReport_I{
		ActivePower_:   activePower,
		InactivePower_: inactivePower,
	}

	// @param powerReport spc.PowerReport
	processPowerReportParam := make([]util.Serialization, 1)

	Release(rt, h, st)

	// this will go through even if miners do not have the right amount of pledge collateral
	// when _submitPowerReport is called in DeclareFaults and _onMissedSurprisePoSt for power slashing.
	// However, in SubmitSurprisePoSt, EnsurePledgeCollateralSatisfied will be called
	// to ensure that miners have the required pledge collateral;
	// otherwise, PoSt submission will fail.
	// Note: there is no power update in RecoverFaults and hence no EnsurePledgeCollateral or _submitPowerReport
	// Note: ElectionPoSt will always go through and some block rewards will go to LockedBalance in StoragePowerActor
	// if the block-winning miner is undercollateralized
	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StoragePowerActorAddr,
		Method_: spc.Method_StoragePowerActor_ProcessPowerReport,
		Params_: processPowerReportParam,
	})
}

func (a *StorageMinerActorCode_I) ClaimDealPaymentsForSector(rt Runtime, sectorNo sector.SectorNumber) {
	h, st := a.State(rt)

	dealIDs, ok := st._getSectorDealIDs(sectorNo)
	if !ok {
		rt.AbortArgMsg("sm.ClaimDealPaymentsForSector: invalid sector number.")
	}

	lastPoStEpoch := st.ChallengeStatus().Impl().LastPoStSuccessEpoch()

	// @param dealIDs []deal.DealID activeDealIDs
	// @param lastPoSt block.ChainEpoch
	processDealPaymentParam := make([]util.Serialization, 0)
	for _, dealID := range dealIDs {
		processDealPaymentParam = append(processDealPaymentParam, deal.Serialize_DealID(dealID))
	}
	processDealPaymentParam = append(processDealPaymentParam, block.Serialize_ChainEpoch(lastPoStEpoch))
	Release(rt, h, st)

	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StorageMarketActorAddr,
		Method_: market.Method_StorageMarketActor_ProcessDealPayment,
		Params_: processDealPaymentParam,
	})
}

// this method is called by both SubmitElectionPoSt and SubmitSurprisePoSt
// - Process ProvingSet.SectorsOn()
//   - State Transitions
//     - Committed -> Active and credit power
//     - Recovering -> Active and credit power
//   - Process Active Sectors (pay miners)
// - Process ProvingSet.SectorsOff()
//     - increment FaultCount
//     - clear Sector and slash pledge collateral if count > MAX_CONSECUTIVE_FAULTS
// - Process Expired Sectors (settle deals and return storage collateral to miners)
//     - State Transition
//       - Failing / Recovering / Active / Committed -> Cleared
//     - Remove SectorNumber from Sectors, ProvingSet
// - Update ChallengeEndEpoch
func (a *StorageMinerActorCode_I) _onSuccessfulPoSt(rt Runtime) InvocOutput {
	h, st := a.State(rt)

	// The proof is verified, process ProvingSet.SectorsOn():
	// ProvingSet.SectorsOn() contains SectorCommitted, SectorActive, SectorRecovering
	// ProvingSet itself does not store states, states are all stored in Sectors.State
	for _, sectorNo := range st.Impl().ProvingSet_.SectorsOn() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			rt.AbortStateMsg("Sector state not found in map")
		}
		switch sectorState.State().StateNumber {
		case SectorCommittedSN, SectorRecoveringSN:
			st._updateActivateSector(rt, sectorNo)
		case SectorActiveSN:
			// do nothing, deal payment is made lazily
		default:
			rt.AbortStateMsg("Invalid sector state in ProvingSet.SectorsOn()")
		}
	}

	// committed and recovering sectors are now active

	// Process ProvingSet.SectorsOff()
	// ProvingSet.SectorsOff() contains SectorFailing
	// SectorRecovering is Proving and hence will not be in SectorsOff()
	for _, sectorNo := range st.Impl().ProvingSet_.SectorsOff() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			continue
		}
		switch sectorState.State().StateNumber {
		case SectorFailingSN:
			// heavy penalty if Failing for more than or equal to MAX_CONSECUTIVE_FAULTS
			// otherwise increment FaultCount in Sectors().State
			st._updateFailSector(rt, sectorNo, true)
		default:
			rt.AbortStateMsg("Invalid sector state in ProvingSet.SectorsOff")
		}
	}

	UpdateRelease(rt, h, st)

	h, st = a.State(rt)
	newTerminatedFaults := st.SectorTable().TerminatedFaults()
	Release(rt, h, st)

	// Process Expiration.
	a._expireSectors(rt)
	a._submitPowerReport(rt)

	a._slashCollateralForStorageFaults(
		rt,
		sector.CompactSectorSet(make([]byte, 0)), // NewDeclaredFaults
		sector.CompactSectorSet(make([]byte, 0)), // NewDetectedFaults
		newTerminatedFaults,
	)

	h, st = a.State(rt)
	st.ChallengeStatus().Impl().OnPoStSuccess(rt.CurrEpoch())
	st._processStagedCommittedSectors(rt)
	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

// called by verifier to update miner state on successful surprise post
// after it has been verified in the storage_mining_subsystem
func (a *StorageMinerActorCode_I) ProcessVerifiedSurprisePoSt(rt Runtime) InvocOutput {
	TODO() // TODO: validate caller

	// Ensure pledge collateral satisfied
	// otherwise, abort ProcessVerifiedSurprisePoSt
	a._ensurePledgeCollateralSatisfied(rt)

	return a._onSuccessfulPoSt(rt)

}

// Called by StoragePowerConsensus subsystem after verifying the Election proof
// and verifying the PoSt proof in the block header.
// Assume ElectionPoSt has already been successfully verified (both proof and partial ticket
// value) when the function gets called.
// Likewise assume that the rewards have already been granted to the storage miner actor. This only handles sector management.
func (a *StorageMinerActorCode_I) ProcessVerifiedElectionPoSt(rt Runtime) InvocOutput {
	rt.ValidateImmediateCallerIs(addr.SystemActorAddr)
	// The receiver must be the miner who produced the block for which this message is created.
	Assert(rt.ToplevelBlockWinner() == rt.CurrReceiver())

	// we do not need to verify post submission here, as this should have already been done
	// outside of the VM, in StoragePowerConsensus Subsystem. Doing so again would waste
	// significant resources, as proofs are expensive to verify.
	//
	// notneeded := a._verifyPoStSubmission(rt)

	// the following will update last challenge response time
	return a._onSuccessfulPoSt(rt)
}

////////////////////////////////////////////////////////////////////////////////
// Faults
////////////////////////////////////////////////////////////////////////////////

// RecoverFaults checks if miners have sufficient collateral
// and moves SectorFailing into SectorRecovering
// - State Transition
//   - Failing -> Recovering with the same FaultCount
// - Add SectorNumber to ProvingSet
// Note that power is not updated until it is active
func (a *StorageMinerActorCode_I) RecoverFaults(rt Runtime, recoveringSet sector.CompactSectorSet) InvocOutput {
	TODO() // TODO: validate caller

	// RecoverFaults is only called when miners are not challenged
	if a._isChallenged(rt) {
		rt.AbortStateMsg("cannot RecoverFaults when sm isChallenged")
	}

	h, st := a.State(rt)

	// for all SectorNumber marked as recovering by recoveringSet
	for _, sectorNo := range recoveringSet.SectorsOn() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			rt.AbortStateMsg("Sector state not found in map")
		}
		switch sectorState.State().StateNumber {
		case SectorFailingSN:
			// pledge collateral is ensured at PoSt submission
			// no need to top up deal collateral because none was slashed during declared/detected faults

			// copy over the same FaultCount
			st.Sectors()[sectorNo].Impl().State_ = SectorRecovering(sectorState.State().FaultCount)
			st.Impl().ProvingSet_.Add(sectorNo)

			st.SectorTable().Impl().FailingSectors_.Remove(sectorNo)
			st.SectorTable().Impl().RecoveringSectors_.Add(sectorNo)

		default:
			// TODO: determine proper error here and error-handling machinery
			// TODO: consider this a no-op (as opposed to a failure), because this is a user
			// call that may be delayed by the chain beyond some other state transition.
			rt.AbortStateMsg("Invalid sector state in RecoverFaults")
		}
	}

	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

// DeclareFaults penalizes miners (slashStorageDealCollateral and remove power)
// - State Transition
//   - Active / Committed / Recovering -> Failing
// - Update State in Sectors()
// - Remove Active / Committed / Recovering from ProvingSet
func (a *StorageMinerActorCode_I) DeclareFaults(rt Runtime, faultSet sector.CompactSectorSet) InvocOutput {
	TODO() // TODO: validate caller

	if a._isChallenged(rt) {
		rt.AbortStateMsg("cannot DeclareFaults when challenged")
	}

	h, st := a.State(rt)

	// fail all SectorNumber marked as Failing by faultSet
	for _, sectorNo := range faultSet.SectorsOn() {
		st._updateFailSector(rt, sectorNo, false)
	}

	UpdateRelease(rt, h, st)

	a._submitPowerReport(rt)

	a._slashCollateralForStorageFaults(
		rt,
		faultSet,                                 // NewDeclaredFaults
		sector.CompactSectorSet(make([]byte, 0)), // NewDetectedFaults
		sector.CompactSectorSet(make([]byte, 0)), // NewTerminatedFault
	)

	return rt.SuccessReturn()
}

func (a *StorageMinerActorCode_I) _slashDealsForStorageFault(rt Runtime, sectorNumbers []sector.SectorNumber, faultType sector.StorageFaultType) {

	h, st := a.State(rt)

	dealIDs := make([]deal.DealID, 0)
	for _, sectorNo := range sectorNumbers {
		sectorDealIDs, ok := st._getSectorDealIDs(sectorNo)
		if !ok {
			panic("sm._slashDealsForStorageFault: deal IDs not found for sector")
		}
		dealIDs = append(dealIDs, sectorDealIDs...)
	}

	// @param dealIDs []deal.DealID
	// @param faultType sector.StorageFaultType
	processDealSlashParam := make([]util.Serialization, 0)
	for _, dealID := range dealIDs {
		processDealSlashParam = append(processDealSlashParam, deal.Serialize_DealID(dealID))
	}
	processDealSlashParam = append(processDealSlashParam, sector.Serialize_StorageFaultType(faultType))
	Release(rt, h, st)

	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StorageMarketActorAddr,
		Method_: market.Method_StorageMarketActor_ProcessDealSlash,
		Params_: processDealSlashParam,
	})
}

func (a *StorageMinerActorCode_I) _slashPledgeForStorageFault(rt Runtime, sectorNumbers []sector.SectorNumber, faultType sector.StorageFaultType) {
	h, st := a.State(rt)

	affectedPower := block.StoragePower(0)
	for _, sectorNo := range sectorNumbers {
		sectorPower, ok := st._getSectorPower(sectorNo)
		if !ok {
			panic("sm._slashPledgeForStorageFault: power not found for sector")
		}
		affectedPower += sectorPower
	}

	// @param affectedPower block.StoragePower
	// @param faultType sector.StorageFaultType
	slashPledgeParams := make([]util.Serialization, 0)
	slashPledgeParams = append(slashPledgeParams, block.Serialize_StoragePower(affectedPower))
	slashPledgeParams = append(slashPledgeParams, sector.Serialize_StorageFaultType(faultType))

	Release(rt, h, st)

	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StoragePowerActorAddr,
		Method_: spc.Method_StoragePowerActor_SlashPledgeForStorageFault,
		Params_: slashPledgeParams,
	})
}

// reset NewTerminatedFaults
func (a *StorageMinerActorCode_I) _slashCollateralForStorageFaults(
	rt Runtime,
	newDeclaredFaults sector.CompactSectorSet, // diff value
	newDetectedFaults sector.CompactSectorSet, // diff value
	newTerminatedFaults sector.CompactSectorSet, // diff value
) {

	// only terminatedFault will result in collateral deal slashing
	if len(newTerminatedFaults) > 0 {
		a._slashDealsForStorageFault(rt, newTerminatedFaults.SectorsOn(), sector.TerminatedFault)
		a._slashPledgeForStorageFault(rt, newTerminatedFaults.SectorsOn(), sector.TerminatedFault)
	}

	if len(newDetectedFaults) > 0 {
		a._slashPledgeForStorageFault(rt, newDetectedFaults.SectorsOn(), sector.DetectedFault)
	}

	if len(newDeclaredFaults) > 0 {
		a._slashPledgeForStorageFault(rt, newDeclaredFaults.SectorsOn(), sector.DeclaredFault)
	}

	// reset terminated faults
	h, st := a.State(rt)
	st.SectorTable().Impl().TerminatedFaults_ = sector.CompactSectorSet(make([]byte, 0))
	UpdateRelease(rt, h, st)
}

////////////////////////////////////////////////////////////////////////////////
// Sector Commitment
////////////////////////////////////////////////////////////////////////////////

func (a *StorageMinerActorCode_I) _verifySeal(rt Runtime, onChainInfo sector.OnChainSealVerifyInfo) bool {
	h, st := a.State(rt)
	info := st.Info()
	sectorSize := info.SectorSize()

	// @param sectorSize util.UVarint
	// @param dealIDs []deal.DealID onChainInfo.DealIDs()
	getPieceInfoParams := make([]util.Serialization, 0)
	getPieceInfoParams = append(getPieceInfoParams, sector.Serialize_SectorSize(sectorSize))
	for _, dealID := range onChainInfo.DealIDs() {
		getPieceInfoParams = append(getPieceInfoParams, deal.Serialize_DealID(dealID))
	}

	Release(rt, h, st)

	getPieceInfosReceipt := rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StorageMarketActorAddr,
		Method_: market.Method_StorageMarketActor_GetPieceInfosForDealIDs,
		Params_: getPieceInfoParams,
	})

	getPieceInfosRet := getPieceInfosReceipt.ReturnValue()
	pieceInfos := sector.PieceInfosFromBytes(getPieceInfosRet)

	// Unless we enforce a minimum padding amount, this totalPieceSize calculation can be removed.
	// Leaving for now until that decision is entirely finalized.
	var totalPieceSize util.UInt
	for _, pieceInfo := range pieceInfos {
		pieceSize := (*pieceInfo).Size()
		totalPieceSize += pieceSize
	}

	unsealedCID, _ := filproofs.ComputeUnsealedSectorCIDFromPieceInfos(sectorSize, pieceInfos)

	sealCfg := sector.SealCfg_I{
		SectorSize_:  sectorSize,
		WindowCount_: info.WindowCount(),
		Partitions_:  info.SealPartitions(),
	}

	// @param address addr.Address info.Worker()
	getActorIDParams := make([]util.Serialization, 0)
	getActorIDParams = append(getActorIDParams, addr.Serialize_Address(info.Worker()))

	getActorIDReceipt := rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.InitActorAddr,
		Method_: sys.Method_InitActor_GetActorIDForAddress,
		Params_: getActorIDParams,
	})

	getActorIDRet := getActorIDReceipt.ReturnValue()
	minerID, err := addr.Deserialize_Address(getActorIDRet)
	if err != nil {
		rt.AbortStateMsg("sm.verifySeal: failed to get actor id")
	}

	svInfo := sector.SealVerifyInfo_I{
		SectorID_: &sector.SectorID_I{
			MinerID_: minerID,
			Number_:  onChainInfo.SectorNumber(),
		},
		OnChain_: onChainInfo,

		// TODO: Make SealCfg sector.SealCfg from miner configuration (where is that?)
		SealCfg_: &sealCfg,

		Randomness_:            sector.SealRandomness(rt.Randomness(onChainInfo.SealEpoch(), 0)),
		InteractiveRandomness_: sector.InteractiveSealRandomness(rt.Randomness(onChainInfo.InteractiveEpoch(), 0)),
		UnsealedCID_:           unsealedCID,
	}

	sdr := filproofs.WinSDRParams(&filproofs.SDRCfg_I{SealCfg_: &sealCfg})
	return sdr.VerifySeal(&svInfo)
}

// Deals must be posted on chain via sma.PublishStorageDeals before PreCommitSector
// Optimization: PreCommitSector could contain a list of deals that are not published yet.
func (a *StorageMinerActorCode_I) PreCommitSector(rt Runtime, info sector.SectorPreCommitInfo) InvocOutput {
	TODO() // TODO: validate caller
	// can be called regardless of Challenged status

	// TODO: might be a good place for Treasury

	h, st := a.State(rt)

	msgValue := rt.ValueReceived()
	depositReq := st._getPreCommitDepositReq(rt)

	if msgValue < depositReq {
		rt.AbortFundsMsg("sm.PreCommitSector: insufficient precommit deposit.")
	}

	_, found := st.PreCommittedSectors()[info.SectorNumber()]

	if found {
		// do not burn funds, since miners cannot repeat a precommit
		rt.AbortStateMsg("Sector already pre-committed.")
	}

	st._assertSectorDidNotExist(rt, info.SectorNumber())

	// @param dealIDs []deal.DealID info.DealIDs()
	params := make([]util.Serialization, 0)
	for _, dealID := range info.DealIDs() {
		params = append(params, deal.Serialize_DealID(dealID))
	}

	Release(rt, h, st)

	// verify every DealID has been published and will not expire
	// before the MAX_PROVE_COMMIT_SECTOR_EPOCH + CurrEpoch
	// abort otherwise
	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StorageMarketActorAddr,
		Method_: market.Method_StorageMarketActor_VerifyPublishedDealIDs,
		Params_: params,
	})

	h, st = a.State(rt)

	precommittedSector := &PreCommittedSector_I{
		Info_:          info,
		ReceivedEpoch_: rt.CurrEpoch(),
	}
	st.PreCommittedSectors()[info.SectorNumber()] = precommittedSector

	UpdateRelease(rt, h, st)
	return rt.SuccessReturn()
}

func (a *StorageMinerActorCode_I) ProveCommitSector(rt Runtime, info sector.SectorProveCommitInfo) InvocOutput {
	TODO() // TODO: validate caller

	msgSender := rt.ImmediateCaller()

	h, st := a.State(rt)

	preCommitSector, precommitFound := st.PreCommittedSectors()[info.SectorNumber()]

	if !precommitFound {
		rt.AbortStateMsg("Sector not pre-committed.")
	}

	st._assertSectorDidNotExist(rt, info.SectorNumber())

	// check if ProveCommitSector comes too late after PreCommitSector
	elapsedEpoch := rt.CurrEpoch() - preCommitSector.ReceivedEpoch()

	// if more than MAX_PROVE_COMMIT_SECTOR_EPOCH has elapsed
	if elapsedEpoch > sector.MAX_PROVE_COMMIT_SECTOR_EPOCH {
		// PreCommittedSectors is cleaned up at _expirePreCommitSectors triggered by Cron
		rt.Abort(
			exitcode.UserDefinedError(exitcode.DeadlineExceeded),
			"more than MAX_PROVE_COMMIT_SECTOR_EPOCH has elapsed")
	}

	onChainInfo := &sector.OnChainSealVerifyInfo_I{
		SealedCID_:        preCommitSector.Info().SealedCID(),
		SealEpoch_:        preCommitSector.Info().SealEpoch(),
		InteractiveEpoch_: info.InteractiveEpoch(),
		Proof_:            info.Proof(),
		DealIDs_:          preCommitSector.Info().DealIDs(),
		SectorNumber_:     preCommitSector.Info().SectorNumber(),
	}

	isSealVerified := st._verifySeal(rt, onChainInfo)
	if !isSealVerified {
		rt.Abort(
			exitcode.UserDefinedError(exitcode.SealVerificationFailed),
			"Seal verification failed")
	}

	minerInfo := st.Info()
	sectorSize := minerInfo.SectorSize()
	sectorPower := block.StoragePower(sectorSize)

	// Activate storage deals, abort if activation failed
	// @param expiration block.ChainEpoch info.Expiration()
	// @param sectorSize block.StoragePower sectorSize
	// @param dealIDs []deal.DealID onChainInfo.DealIDs()
	activateDealsParams := make([]util.Serialization, 0)
	activateDealsParams = append(activateDealsParams, block.Serialize_ChainEpoch(info.Expiration()))
	activateDealsParams = append(activateDealsParams, block.Serialize_StoragePower(sectorPower))
	for _, dealID := range onChainInfo.DealIDs() {
		activateDealsParams = append(activateDealsParams, deal.Serialize_DealID(dealID))
	}

	UpdateRelease(rt, h, st)

	activateDealsReceipt := rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StorageMarketActorAddr,
		Method_: market.Method_StorageMarketActor_ActivateDeals,
		Params_: activateDealsParams,
	})

	h, st = a.State(rt)

	activateDealsRet := activateDealsReceipt.ReturnValue()
	sectorActivePower, err := block.Deserialize_StoragePower(activateDealsRet)
	if err != nil {
		rt.AbortStateMsg("Failed to deserialize sector power")
	}

	// add sector expiration to SectorExpirationQueue
	st.SectorExpirationQueue().Add(&SectorExpirationQueueItem_I{
		SectorNumber_: onChainInfo.SectorNumber(),
		Expiration_:   info.Expiration(),
	})

	// no need to store the proof and randomseed in the state tree
	// verify and drop, only SealCommitment{CommR, DealIDs} on chain
	sealCommitment := &sector.SealCommitment_I{
		SealedCID_: onChainInfo.SealedCID(),
		DealIDs_:   onChainInfo.DealIDs(),
	}

	// add SectorNumber and SealCommitment to Sectors
	// set Sectors.State to SectorCommitted
	// Note that SectorNumber will only become Active at the next successful PoSt
	sealOnChainInfo := &SectorOnChainInfo_I{
		SealCommitment_: sealCommitment,
		State_:          SectorCommitted(),
		Power_:          block.StoragePower(sectorActivePower),
		Activation_:     rt.CurrEpoch(),
		Expiration_:     info.Expiration(),
	}

	if a._isChallenged(rt) {
		// move PreCommittedSector to StagedCommittedSectors if in Challenged status
		st.StagedCommittedSectors()[onChainInfo.SectorNumber()] = sealOnChainInfo
	} else {
		// move PreCommittedSector to CommittedSectors if not in Challenged status
		st.Sectors()[onChainInfo.SectorNumber()] = sealOnChainInfo
		st.Impl().ProvingSet_.Add(onChainInfo.SectorNumber())
		st.SectorTable().Impl().CommittedSectors_.Add(onChainInfo.SectorNumber())
	}

	// now remove SectorNumber from PreCommittedSectors (processed)
	delete(st.PreCommittedSectors(), preCommitSector.Info().SectorNumber())
	depositReq := st._getPreCommitDepositReq(rt)
	UpdateRelease(rt, h, st)

	// return deposit requirement to sender
	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:    msgSender,
		Value_: depositReq,
	})

	return rt.SuccessReturn()
}

func (a *StorageMinerActorCode_I) _ensurePledgeCollateralSatisfied(rt Runtime) {
	emptyParams := make([]util.Serialization, 0)
	rt.SendPropagatingErrors(&vmr.InvocInput_I{
		To_:     addr.StoragePowerActorAddr,
		Method_: spc.Method_StoragePowerActor_EnsurePledgeCollateralSatisfied,
		Params_: emptyParams,
	})
}

func (a *StorageMinerActorCode_I) _expirePreCommittedSectors(rt Runtime) {

	h, st := a.State(rt)

	expiredSectorNum := 0
	inactiveDealIDs := make([]deal.DealID, 0)

	for _, preCommitSector := range st.PreCommittedSectors() {

		elapsedEpoch := rt.CurrEpoch() - preCommitSector.ReceivedEpoch()

		if elapsedEpoch > sector.MAX_PROVE_COMMIT_SECTOR_EPOCH {
			delete(st.PreCommittedSectors(), preCommitSector.Info().SectorNumber())
			expiredSectorNum += 1
			inactiveDealIDs = append(inactiveDealIDs, preCommitSector.Info().DealIDs()...)
		}
	}

	depositToBurn := actor.TokenAmount(uint64(expiredSectorNum) * uint64(st._getPreCommitDepositReq(rt)))
	UpdateRelease(rt, h, st)

	// send funds to BurntFundsActor
	if depositToBurn > 0 {
		rt.SendPropagatingErrors(&vmr.InvocInput_I{
			To_:    addr.BurntFundsActorAddr,
			Value_: depositToBurn,
		})
	}

	// clear inactive deals
	if len(inactiveDealIDs) > 0 {
		// @param []deal.DealID inactiveDealIDs
		dealsToClearParams := make([]util.Serialization, 0)
		for _, dealID := range inactiveDealIDs {
			dealsToClearParams = append(dealsToClearParams, deal.Serialize_DealID(dealID))
		}

		rt.SendPropagatingErrors(&vmr.InvocInput_I{
			To_:     addr.StorageMarketActorAddr,
			Method_: market.Method_StorageMarketActor_ClearInactiveDealIDs,
			Params_: dealsToClearParams,
		})
	}
}

func getSectorNums(m map[sector.SectorNumber]SectorOnChainInfo) []sector.SectorNumber {
	l := make([]sector.SectorNumber, 0, len(m))
	for i := range m {
		l = append(l, i)
	}
	return l
}

Sector

The Sector is a fundamental “storage container” abstraction used in Filecoin Storage Mining. It is the basic unit of storage, and serves to make storage conform to a set of expectations.

New sectors are empty upon creation. As a miner receives client data, it fills or “packs” the piece(s) into an unsealed sector.

Once a sector is full, the unsealed sector is combined by a proving tree into a single root UnsealedSectorCID. The sealing process then encodes the unsealed sector into a sealed sector (via a Proof-of-Replication algorithm, Stacked DRG), with the root SealedSectorCID.
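The packing step can be sketched as a simple capacity model. This is an illustrative sketch only: the type and method names are hypothetical, and the real protocol adds padding and alignment rules not shown here.

```go
package main

import (
	"errors"
	"fmt"
)

// unsealedSector is a hypothetical, simplified model of an unsealed
// sector being filled with pieces.
type unsealedSector struct {
	size   uint64 // total sector capacity in bytes
	filled uint64 // bytes occupied by packed pieces
}

// pack adds a piece of the given size, failing if it does not fit.
func (s *unsealedSector) pack(pieceSize uint64) error {
	if s.filled+pieceSize > s.size {
		return errors.New("piece does not fit in remaining capacity")
	}
	s.filled += pieceSize
	return nil
}

// full reports whether the sector has no remaining capacity.
func (s *unsealedSector) full() bool { return s.filled == s.size }

func main() {
	s := &unsealedSector{size: 1024}
	_ = s.pack(512)
	_ = s.pack(512)
	fmt.Println(s.full()) // true
}
```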

This diagram shows the composition of an unsealed sector and a sealed sector.

Unsealed Sectors and Sealed Sectors
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type Bytes32 Bytes
type MinerID addr.Address
type Commitment Bytes32  // TODO
type UnsealedSectorCID ipld.CID
type SealedSectorCID ipld.CID

// SectorNumber is a numeric identifier for a sector. It is usually
// relative to a Miner.
type SectorNumber UInt

type FaultSet CompactSectorSet
type StorageFaultType int

// SectorSize indicates one of a set of possible sizes in the network.
type SectorSize UInt

// Ideally, SectorSize would be an enum
// type SectorSize enum {
//   1KiB = UInt 1024
//   1MiB = UInt 1048576
//   1GiB = UInt 1073741824
//   1TiB = UInt 1099511627776
//   1PiB = UInt 1125899906842624
// }

// TODO make sure this is globally unique
type SectorID struct {
    MinerID
    Number SectorNumber
}

// SectorInDetail describes all the bits of information associated
// with each sector.
// - ID   - a unique identifier assigned once the Sector is registered on chain
// - Size - the size of the sector. there are a set of allowable sizes
//
// NOTE: do not use this struct. It is for illustrative purposes only.
type SectorInDetail struct {
    ID    SectorID
    Size  SectorSize

    Unsealed struct {
        CID     UnsealedSectorCID
        Deals   [deal.StorageDeal]
        Pieces  [piece.Piece]
        // Pieces Tree<Piece> // some tree for proofs
        Bytes
    }

    Sealed struct {
        CID SealedSectorCID
        Bytes
        SealCfg
    }
}

// SectorInfo is an object that gathers all the information miners know about their
// sectors. This is meant to be used for a local index.
type SectorInfo struct {
    ID                  SectorID
    UnsealedInfo        UnsealedSectorInfo
    SealedInfo          SealedSectorInfo
    SealVerifyInfo
    PersistentProofAux
}

// UnsealedSectorInfo is an object that tracks the relevant data to keep in a sector
type UnsealedSectorInfo struct {
    UnsealedCID  UnsealedSectorCID  // CommD
    Size         SectorSize
    PieceCount   UVarint  // number of pieces in this sector (equals len(Pieces))
    Pieces       [piece.PieceInfo]  // large; won't be externalized easily
    SealCfg  // determined at sector creation
    // Deals       [deal.StorageDeal]
}

// SealedSectorInfo keeps around information about a sector that has been sealed.
type SealedSectorInfo struct {
    SealedCID  SealedSectorCID
    Size       SectorSize
    SealCfg
    SealArgs   SealArguments
}

TODO:

  • describe sizing ranges of sectors
  • describe “storage/shipping container” analogy

Sector Set

// sector sets
type SectorSet [SectorID]
type UnsealedSectorSet SectorSet
type SealedSectorSet SectorSet

// compact sector sets
type Bitfield Bytes
type RLEpBitfield Bitfield
type CompactSectorSet RLEpBitfield
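A compact sector set amounts to run-length encoding over the sector number space. The sketch below encodes a sorted list of distinct sector numbers as alternating run lengths of absent and present numbers. It illustrates the idea only; it is not the on-chain RLE+ wire format, and `encodeRuns` is a hypothetical helper.

```go
package main

import "fmt"

// encodeRuns encodes a sorted list of distinct sector numbers as
// alternating run lengths of absent and present numbers, starting
// from 0. Illustrative only; the real RLE+ format differs.
func encodeRuns(sorted []uint64) []uint64 {
	var runs []uint64
	var pos uint64
	for i := 0; i < len(sorted); {
		gap := sorted[i] - pos // run of absent sector numbers
		j := i
		for j+1 < len(sorted) && sorted[j+1] == sorted[j]+1 {
			j++ // extend the run of consecutive present numbers
		}
		runs = append(runs, gap, uint64(j-i+1))
		pos = sorted[j] + 1
		i = j + 1
	}
	return runs
}

func main() {
	// Sectors {2,3,4,10}: 2 absent, 3 present, 5 absent, 1 present.
	fmt.Println(encodeRuns([]uint64{2, 3, 4, 10})) // [2 3 5 1]
}
```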

Sector Sealing

import file "github.com/filecoin-project/specs/systems/filecoin_files/file"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type Path struct {}  // TODO

type SealRandomness Bytes
type InteractiveSealRandomness Bytes

// SealSeed is unique to each Sector
// SealSeed is:
//    SealSeedHash(MinerID, SectorNumber, SealRandomness, UnsealedSectorCID)
type SealSeed Bytes

type SealCfg struct {
    SectorSize
    WindowCount  UInt
    Partitions   UInt
}

// SealVerifyInfo is the structure of all the information a verifier
// needs to verify a Seal.
type SealVerifyInfo struct {
    SectorID
    OnChain                OnChainSealVerifyInfo
    SealCfg
    Randomness             SealRandomness
    InteractiveRandomness  InteractiveSealRandomness
    UnsealedCID            UnsealedSectorCID  // CommD
}

// OnChainSealVerifyInfo is the structure of information that must be sent with
// a message to commit a sector. Most of this information is not needed in the
// state tree but will be verified in sm.CommitSector. See SealCommitment for
// data stored on the state tree for each sector.
type OnChainSealVerifyInfo struct {
    SealedCID         SealedSectorCID  // CommR
    SealEpoch         block.ChainEpoch
    InteractiveEpoch  block.ChainEpoch
    Proof             SealProof
    DealIDs           [deal.DealID]
    SectorNumber
}

// SealCommitment is the information kept in the state tree about a sector.
// SealCommitment is a subset of OnChainSealVerifyInfo.
type SealCommitment struct {
    SealedCID   SealedSectorCID  // CommR
    DealIDs     [deal.DealID]
    Expiration  block.ChainEpoch
}

type SectorPreCommitInfo struct {
    SectorNumber
    SealedCID     SealedSectorCID  // CommR
    SealEpoch     block.ChainEpoch
    DealIDs       [deal.DealID]
}

type SectorProveCommitInfo struct {
    SectorNumber
    Proof             SealProof
    InteractiveEpoch  block.ChainEpoch
    Expiration        block.ChainEpoch
}

// PersistentProofAux is meta data required to generate certain proofs
// for a sector, for example PoSt.
// These should be stored and indexed somewhere by CommR.
type PersistentProofAux struct {
    CommC              Commitment
    CommQ              Commitment
    CommRLast          Commitment

    // TODO: This may be a partially-cached tree.
    // this may be empty
    CommRLastTreePath  file.Path
}

type ProofAuxTmp struct {
    PersistentAux  PersistentProofAux  // TODO: Move this to sealer.SealOutputs.

    SectorID
    CommD          Commitment
    CommR          SealedSectorCID
    CommDTreePath  file.Path
    CommCTreePath  file.Path
    CommQTreePath  file.Path

    Seed           SealSeed
    KeyLayers      [Bytes]
}

type SealArguments struct {
    Algorithm        SealAlgorithm
    OutputArtifacts  SealOutputArtifacts
}

type SealProof struct {//<curve, system> {
    Config      SealProofConfig
    ProofBytes  Bytes
}

type SealProofConfig struct {// TODO
}

// TODO: move into proofs lib
type FilecoinSNARKProof struct {}  //<bls12-381, Groth16>

type SealAlgorithm enum {
    StackedDRG
}

// TODO
type SealOutputArtifacts struct {}

// TODO: PieceInfo is the name used by proofs for this struct, but there also exists a piece.PieceInfo type, which is different.
// We should probably rename one of them, since this is quite confusing. Or, if we can put `PieceCID` into pieces.PieceInfo, we can just use that.
type PieceInfo struct {
    Size      UVarint  // Size in nodes. For BLS12-381 (capacity 254 bits), must be >= 16. (16 * 8 = 128)
    PieceCID  piece.PieceCID

    // Data returns the serialized representation of the PieceInfo.
    Data()    Bytes
}
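The SealSeed construction noted above (SealSeedHash over MinerID, SectorNumber, SealRandomness, and UnsealedSectorCID) can be sketched as below. The hash function and field serialization used here (SHA-256 over length-prefixed fields) are assumptions for illustration; the protocol fixes its own choices.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// sealSeed sketches SealSeedHash(MinerID, SectorNumber, SealRandomness,
// UnsealedSectorCID). SHA-256 and this particular serialization are
// illustrative assumptions, not the protocol's actual choices.
func sealSeed(minerID []byte, sectorNumber uint64, randomness, unsealedCID []byte) []byte {
	h := sha256.New()
	var num [8]byte
	binary.BigEndian.PutUint64(num[:], sectorNumber)
	// Length-prefix each field so distinct inputs cannot collide
	// through concatenation ambiguity.
	for _, field := range [][]byte{minerID, num[:], randomness, unsealedCID} {
		var l [8]byte
		binary.BigEndian.PutUint64(l[:], uint64(len(field)))
		h.Write(l[:])
		h.Write(field)
	}
	return h.Sum(nil)
}

func main() {
	seed := sealSeed([]byte("t01234"), 7, []byte("rand"), []byte("cid"))
	fmt.Println(len(seed)) // 32
}
```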
Drawing randomness for sector commitments

Tickets are used as input to the SEAL above in order to tie Proofs-of-Replication to a given chain, thereby preventing long-range attacks (from another miner in the future trying to reuse SEALs).

The ticket has to be drawn from a finalized block in order to prevent the miner from potentially losing storage (in the case of a chain reorg) even though their storage is intact.

Verification should ensure that the ticket was drawn no farther back than necessary by the miner. We note that tickets can be uniquely associated with a given round in the protocol (barring a hash collision), but that the round number is stated explicitly by the miner in commitSector.

We present precisely how ticket selection and verification should work. In the below, we use the following notation:

  • F – finality (number of rounds)
  • X – round in which SEALing starts
  • Z – round in which the SEAL appears (in a block)
  • Y – round announced in the SEAL commitSector (should be X, but a miner could use any Y <= X), denoted by the ticket selection
  • T – estimated time for SEAL, dependent on sector size
  • G = T + variance – necessary flexibility to account for network delay and SEAL-time variance

We expect Filecoin will be able to produce estimates for sector commitment time based on sector sizes, e.g. (estimate, variance) <- SEALTime(sectors). G and T will be selected using these estimates.

Picking a Ticket to Seal

When starting to prepare a SEAL in round X, the miner should draw a ticket from X-F with which to compute the SEAL.
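Concretely, with F denoting finality in rounds (the value below is illustrative, not the protocol constant):

```go
package main

import "fmt"

// F is finality in rounds; an illustrative value, not the protocol constant.
const F = 500

// ticketRound returns the round whose ticket a miner starting to
// seal at round x should use: X - F.
func ticketRound(x int64) int64 {
	return x - F
}

func main() {
	fmt.Println(ticketRound(10000)) // 9500
}
```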

Verifying a Seal’s ticket

When verifying a SEAL in round Z, a verifier should ensure that the ticket used to generate the SEAL is found in the range of rounds [Z-T-F-G, Z-T-F+G].

In Detail
                               Prover
           ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
          β”‚

          β–Ό
         X-F ◀───────F────────▢ X ◀──────────T─────────▢ Z
     -G   .  +G                 .                        .
  ───(β”Œβ”€β”€β”€β”€β”€β”€β”€β”)───────────────( )──────────────────────( )────────▢
      β””β”€β”€β”€β”€β”€β”€β”€β”˜                 '                        '        time
 [Z-T-F-G, Z-T-F+G]
          β–²

          β”” ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
                              Verifier

Note that the prover here is submitting a message on chain (i.e. the SEAL). Using an older ticket than necessary to generate the SEAL is something the miner may do to gain more confidence about finality (since we are in a probabilistically final system). However, it has a cost in terms of securing the chain against long-range attacks: by mixing chain randomness into the SEAL, we ensure that an attacker going back a month in time to create their own chain would have to completely regenerate any and all sectors that drew randomness since then in order to use them for their fork’s power.

We break this down as follows:

  • The miner should draw from X-F.
  • The verifier wants to find what X-F should have been (to ensure the miner is not drawing from farther back) even though Y (i.e. the round of the ticket actually used) is an unverifiable value.
  • Thus, the verifier will need to make an inference about what X-F is likely to have been based on:
    • (known) round in which the message is received (Z)
    • (known) finality value (F)
    • (approximate) SEAL time (T)
  • Because T is an approximate value, and to account for network delay and variance in SEAL time across miners, the verifier allows for G offset from the assumed value of X-F: Z-T-F, hence verifying that the ticket is drawn from the range [Z-T-F-G, Z-T-F+G].
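The verifier’s range check can be sketched as follows, with illustrative values for F, T, and G:

```go
package main

import "fmt"

// Illustrative parameters; real values depend on sector size and network.
const (
	F = 500 // finality, in rounds
	T = 100 // estimated SEAL time
	G = 20  // allowance for network delay and SEAL-time variance
)

// ticketAcceptable reports whether a ticket drawn at round y is valid
// for a SEAL appearing on chain at round z, i.e. whether y lies in
// [Z-T-F-G, Z-T-F+G].
func ticketAcceptable(y, z int64) bool {
	lo := z - T - F - G
	hi := z - T - F + G
	return y >= lo && y <= hi
}

func main() {
	// For Z = 10000 the acceptable range is [9380, 9420].
	fmt.Println(ticketAcceptable(9400, 10000)) // true
}
```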

Sector Index

import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"

// TODO import this from StorageMarket
type SectorIndex struct {
    BySectorID     {sector.SectorID: sector.SectorInfo}
    ByUnsealedCID  {sector.UnsealedSectorCID: sector.SectorInfo}
    BySealedCID    {sector.SealedSectorCID: sector.SectorInfo}
    ByPieceID      {piece.PieceID: sector.SectorInfo}
    ByDealID       {deal.DealID: sector.SectorInfo}
}

type SectorIndexerSubsystem struct {
    Index    SectorIndex
    Store    SectorStore
    Builder  SectorBuilder

    // AddNewDeal is called by StorageMiningSubsystem after the StorageMarket
    // has made a deal. AddNewDeal returns an error when:
    // - there is no capacity to store more deals and their pieces
    AddNewDeal(deal deal.StorageDeal) StageDealResponse

    // bring back if needed.
    // OnNewTipset(chain Chain, epoch blockchain.Epoch) struct {}

    // SectorsExpiredAtEpoch returns the set of sectors that expire
    // at a particular epoch.
    SectorsExpiredAtEpoch(epoch block.ChainEpoch) [sector.SectorID]

    // removeSectors removes the given sectorIDs from storage.
    removeSectors(sectorIDs [sector.SectorID])
}

Sector Builder

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
// import smkt "github.com/filecoin-project/specs/systems/filecoin_markets/storage_market"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"

// SectorBuilder accumulates deals, keeping track of their
// sector configuration requirements and the piece sizes.
// Once there is a sector ready to be sealed, NextSector
// will return a sector.

type StageDealResponse struct {
    SectorID sector.SectorID
}

type SectorBuilder struct {
    // DealsToSeal keeps a set of StorageDeal objects.
    // These include the info for the relevant pieces.
    // This builder just accumulates deals, keeping track of their
    // sector configuration requirements, and the piece sizes.
    DealsToSeal [deal.StorageDeal]

    // StageDeal adds a deal to be packed into a sector.
    StageDeal(d deal.StorageDeal) StageDealResponse

    // NextSector returns an UnsealedSectorInfo, which includes the (ordered) set of
    // pieces, and the SealCfg. An error may be returned if SectorBuilder is not
    // ready to produce a Sector.
    //
    // TODO: use go channels? or notifications?
    NextSector() union {i sector.UnsealedSectorInfo, err error}
}

SectorStore

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import file "github.com/filecoin-project/specs/systems/filecoin_files/file"

type SectorStore struct {
    // FileStore stores all the unsealed and sealed sectors.
    FileStore   file.FileStore

    // PieceStore is shared with DataTransfer, and is a way to store or read
    // pieces temporarily. This may or may not be backed by the FileStore above.
    PieceStore  piece.PieceStore

    // GetSectorFiles returns the files for a given sector id.
    // If the SectorID does not have any sector files associated yet, GetSectorFiles
    // returns an error.
    GetSectorFiles(id sector.SectorID) union {f SectorFiles, err error}

    // Get information, including a merkle tree file/path, needed to generate PoSt for a sector.
    GetSectorPersistentProofAux(id sector.SectorID) sector.PersistentProofAux

    StoreSectorPersistentProofAux(id sector.SectorID, proofAux sector.PersistentProofAux) union {proofAux sector.PersistentProofAux, err error}

    // CreateSectorFiles allocates two sector files, one for unsealed and one for
    // sealed sector.
    CreateSectorFiles(id sector.SectorID) union {f SectorFiles, err error}
}

// SectorFiles is a datastructure that groups two file objects and a sectorID.
// These files are where unsealed and sealed sectors should go.
type SectorFiles struct {
    SectorID  sector.SectorID
    Unsealed  file.File
    Sealed    file.File
}

TODO

  • talk about how sectors are stored

Storage Proving

Filecoin Proving Subsystem

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import poster "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/poster"
import sealer "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/sealer"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type StorageProvingSubsystem struct {
    SectorSealer   sealer.SectorSealer
    PoStGenerator  poster.PoStGenerator

    VerifySeal(sv sector.SealVerifyInfo, pieceInfos [sector.PieceInfo]) union {ok bool, err error}
    ComputeUnsealedSectorCID(sectorSize UInt, pieceInfos [sector.PieceInfo]) union {unsealedSectorCID sector.UnsealedSectorCID, err error}

    ValidateBlock(block block.Block)

    // TODO: remove this?
    // GetPieceInclusionProof(pieceRef CID) union { PieceInclusionProofs, error }

    GenerateElectionPoStCandidates(
        challengeSeed  sector.PoStRandomness
        sectorIDs      [sector.SectorID]
    ) [sector.PoStCandidate]

    GenerateSurprisePoStCandidates(
        challengeSeed  sector.PoStRandomness
        sectorIDs      [sector.SectorID]
    ) [sector.PoStCandidate]

    CreateElectionPoStProof(
        challengeSeed  sector.PoStRandomness
        candidates     [sector.PoStCandidate]
    ) sector.PoStProof

    CreateSurprisePoStProof(
        challengeSeed  sector.PoStRandomness
        candidates     [sector.PoStCandidate]
    ) sector.PoStProof

    VerifyPoStProof(
        challengeSeed  sector.PoStRandomness
        proof          [sector.PoStProof]
    ) bool
}

Sector Sealer

Sector Sealer

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import file "github.com/filecoin-project/specs/systems/filecoin_files/file"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"

type SealInputs struct {
    SectorID      sector.SectorID
    SealCfg       sector.SealCfg
    MinerID       addr.Address
    RandomSeed    sector.SealRandomness
    UnsealedPath  file.Path
    SealedPath    file.Path
    DealIDs       [deal.DealID]
}

type CreateSealProofInputs struct {
    SectorID               sector.SectorID
    SealCfg                sector.SealCfg
    InteractiveRandomSeed  sector.InteractiveSealRandomness
    SealedPaths            [file.Path]
    SealOutputs
}

type SealOutputs struct {
    ProofAuxTmp sector.ProofAuxTmp
}

type CreateSealProofOutputs struct {
    SealInfo  sector.SealVerifyInfo
    ProofAux  sector.PersistentProofAux
}

type SectorSealer struct {
    SealSector() union {so SealOutputs, err error}
    CreateSealProof(si CreateSealProofInputs) union {so CreateSealProofOutputs, err error}

    MaxUnsealedBytesPerSector(SectorSize UInt) UInt
}

Sector Poster

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import sector_index "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type UInt64 UInt

// TODO: move this to somewhere the blockchain can import
// candidates:
// - filproofs - may have to learn about Sectors (and if we move Seal stuff, Deals)
// - "blockchain/builtins" or something like that - a component in the blockchain that handles storage verification
type PoStSubmission struct {
    PostProof   sector.PoStProof
    ChainEpoch  block.ChainEpoch
}

type PoStGenerator struct {
    PoStCfg      sector.PoStCfg
    SectorStore  sector_index.SectorStore

    GeneratePoStCandidates(
        challengeSeed   sector.PoStRandomness
        candidateCount  UInt
        sectors         [sector.SectorID]
    ) [sector.PoStCandidate]

    CreateElectionPoStProof(
        witness sector.PoStWitness
    ) sector.PoStProof

    CreateSurprisePoStProof(
        witness sector.PoStWitness
    ) sector.PoStProof

    // FIXME: Verification shouldn't require a PoStGenerator. Move this.
    VerifyPoStProof(
        Proof          sector.PoStProof
        challengeSeed  sector.PoStRandomness
    ) bool
}
package poster

import (
	filproofs "github.com/filecoin-project/specs/libraries/filcrypto/filproofs"
	sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"

	util "github.com/filecoin-project/specs/util"
)

type Serialization = util.Serialization

// See "Proof-of-Spacetime Parameters" Section
// TODO: Unify with orient model.
const POST_CHALLENGE_DEADLINE = uint(480)

func (pg *PoStGenerator_I) GeneratePoStCandidates(challengeSeed sector.PoStRandomness, candidateCount int, sectors []sector.SectorID) []sector.PoStCandidate {
	// Question: Should we pass metadata into FilProofs so it can interact with SectorStore directly?
	// Like this:
	// PoStResponse := SectorStorageSubsystem.GeneratePoSt(sectorSize, challenge, faults, sectorsMetadata);

	// Question: Or should we resolve + manifest trees here and pass them in?
	// Like this:
	// trees := sectorsMetadata.map(func(md) { SectorStorage.GetMerkleTree(md.MerkleTreePath) });
	// Done this way, we redundantly pass the tree paths in the metadata. At first thought, the other way
	// seems cleaner.
	// PoStResponse := SectorStorageSubsystem.GeneratePoSt(sectorSize, challenge, faults, sectorsMetadata, trees);

	// For now, dodge this by passing the whole SectorStore. Once we decide how we want to represent this, we can narrow the call.

	sdr := makeStackedDRGForPoSt(pg.PoStCfg())
	return sdr.GenerateElectionPoStCandidates(challengeSeed, sectors, candidateCount, pg.SectorStore())
}

func (pg *PoStGenerator_I) CreateElectionPoStProof(postCfg sector.