Filecoin Specification

Introduction

Filecoin is a distributed storage network based on a blockchain mechanism. Filecoin miners can elect to provide storage capacity for the network, and thereby earn units of the Filecoin cryptocurrency (FIL) by periodically producing cryptographic proofs that certify that they are providing the capacity specified. In addition, Filecoin enables parties to exchange FIL currency through transactions recorded in a shared ledger on the Filecoin blockchain. Rather than using Nakamoto-style proof of work to maintain consensus on the chain, however, Filecoin uses proof of storage itself: a miner’s power in the consensus protocol is proportional to the amount of storage it provides.

The Filecoin blockchain not only maintains the ledger for FIL transactions and accounts, but also implements the Filecoin VM, a replicated state machine which executes a variety of cryptographic contracts and market mechanisms among participants on the network. These contracts include storage deals, in which clients pay FIL currency to miners in exchange for storing the specific file data that the clients request. Via the distributed implementation of the Filecoin VM, storage deals and other contract mechanisms recorded on the chain continue to be processed over time, without requiring further interaction from the original parties (such as the clients who requested the data storage).

Architecture Diagrams

Filecoin Systems

Status Legend:

  • πŸ›‘ Bare - Very incomplete at this time.
    • Implementors: This is far from ready for you.
  • ⚠️ Rough – work in progress, heavy changes coming, as we put in place key functionality.
    • Implementors: This will be ready for you soon.
  • πŸ” Refining - Key functionality is there, some small things expected to change. Some big things may change.
    • Implementors: Almost ready for you. You can start building these parts, but beware there may be changes still.
  • βœ… Stable - Mostly complete, minor things expected to change, no major changes expected.
    • Implementors: Ready for you. You can build these parts.

Note that the status refers to the state of the spec as written out, whether in English or in code. The goal is for the spec to eventually be fleshed out fully in both.


Overview Diagram

TODO:

  • cleanup / reorganize
    • this diagram is accurate, and helps lots to navigate, but it’s still a bit confusing
    • the arrows and lines make it a bit hard to follow. We should have a much cleaner version (maybe based on C4)
  • reflect addition of Token system
    • move data_transfers into Token
Protocol Overview Diagram

Protocol Flow Diagram – deals on chain

Protocol Sequence Diagram - Deals on Chain

Parameter Calculation Dependency Graph

This diagram shows the model for parameter calculation. It is made with Orient, our tool for modeling and solving constraints.

Parameter Calculation Dependency Graph

Key Concepts

For clarity, we use the following types of entities when describing implementations of the Filecoin protocol:

  • Data structures are collections of semantically-tagged data members (e.g., structs, interfaces, or enums).

  • Functions are computational procedures that do not depend on external state (i.e., mathematical functions, or programming language functions that do not refer to global variables).

  • Components are sets of functionality that are intended to be represented as single software units in the implementation structure. Depending on the choice of language and the particular component, this might correspond to a single software module, a thread or process running some main loop, a disk-backed database, or a variety of other design choices. For example, the ChainSync - synchronizing the Blockchain is a component: it could be implemented as a process or thread running a single specified main loop, which waits for network messages and responds accordingly by recording and/or forwarding block data.

  • APIs are messages that can be sent to components. A client’s view of a given sub-protocol, such as a request to a miner node’s Storage Provider component to store files in the storage market, may require the execution of a series of APIs.

  • Nodes are complete software and hardware systems that interact with the protocol. A node might be constantly running several of the above components, participating in several subsystems, and exposing APIs locally and/or over the network, depending on the node configuration. The term full node refers to a system that runs all of the above components, and supports all of the APIs detailed in the spec.

  • Subsystems are conceptual divisions of the entire Filecoin protocol, either in terms of complete protocols (such as the Storage Market or Retrieval Market), or in terms of functionality (such as the VM - Virtual Machine). They do not necessarily correspond to any particular node or software component.

  • Actors are virtual entities embodied in the state of the Filecoin VM. Protocol actors are analogous to participants in smart contracts; an actor carries a FIL currency balance and can interact with other actors via the operations of the VM, but does not necessarily correspond to any particular node or software component.

Filecoin VM

The majority of Filecoin’s user facing functionality (payments, storage market, power table, etc) is managed through the Filecoin Virtual Machine (Filecoin VM). The network generates a series of blocks, and agrees which ‘chain’ of blocks is the correct one. Each block contains a series of state transitions called messages, and a checkpoint of the current global state after the application of those messages.

The global state here consists of a set of actors, each with their own private state.

An actor is the Filecoin equivalent of Ethereum’s smart contracts: it is essentially an ‘object’ in the Filecoin network with state and a set of methods that can be used to interact with it. Every actor has a Filecoin balance attributed to it, a state pointer, a code CID which tells the system what type of actor it is, and a nonce which tracks the number of messages sent by this actor. (TODO: the nonce is really only needed for external user interface actors, AKA account actors. Maybe we should find a way to clean that up?)
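As a rough illustration, the actor fields described above might be sketched as follows. The field names here are assumptions for exposition, not the canonical spec types:

```go
package main

import "fmt"

// Illustrative sketch of the per-actor on-chain state described above.
// Field names are assumptions, not canonical spec types.
type Actor struct {
	Code       string // code CID, telling the system what type of actor this is
	Head       string // state pointer (CID of the actor's private state)
	Balance    int64  // FIL balance attributed to this actor
	CallSeqNum uint64 // nonce: number of messages sent by this actor
}

func main() {
	a := Actor{Code: "accountActorCodeCID", Balance: 100}
	a.CallSeqNum++ // incremented each time this actor sends a message
	fmt.Println(a.CallSeqNum)
}
```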

There are two routes to calling a method on an actor. First, to call a method as an external participant of the system (aka, a normal user with Filecoin), you must send a signed message to the network, and pay a fee to the miner that includes your message. The signature on the message must match the key associated with an account holding sufficient Filecoin to pay for the message’s execution. The fee here is equivalent to transaction fees in Bitcoin and Ethereum: it is proportional to the work done to process the message (Bitcoin prices messages per byte; Ethereum uses the concept of ‘gas’, as do we).

Second, an actor may call a method on another actor during the invocation of one of its methods. However, the only time this may happen is as a result of some actor being invoked by an external user’s message (note: an actor called by a user may call another actor that then calls another actor, as many layers deep as the execution can afford to run for).
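To make the two invocation routes concrete, here is a toy sketch of gas-metered, recursively nested actor calls. The Runtime type, gas costs, and method names are all illustrative assumptions, not the spec’s actual interfaces:

```go
package main

import (
	"errors"
	"fmt"
)

// Toy sketch: an external signed message enters execution with a gas
// budget, and actors may then send to other actors recursively for as
// long as the execution can afford the gas. All names are illustrative.
type Runtime struct {
	GasRemaining int64
}

var ErrOutOfGas = errors.New("out of gas")

// ChargeGas deducts gas, failing when the budget is exhausted.
func (rt *Runtime) ChargeGas(amount int64) error {
	if rt.GasRemaining < amount {
		return ErrOutOfGas
	}
	rt.GasRemaining -= amount
	return nil
}

// Send models an actor-to-actor call; nesting depth is unbounded as
// long as gas remains.
func (rt *Runtime) Send(depth int) error {
	if err := rt.ChargeGas(10); err != nil {
		return err
	}
	if depth > 0 {
		return rt.Send(depth - 1) // nested actor call
	}
	return nil
}

func main() {
	rt := &Runtime{GasRemaining: 35}
	fmt.Println(rt.Send(2) == nil)          // three calls at 10 gas each: succeeds
	fmt.Println(rt.Send(2) == ErrOutOfGas)  // only 5 gas remains: fails
}
```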

For full implementation details, see the VM subsystem.

Filecoin Spec Process (v1)

πŸš€ Pre-launch mode

Until we launch, we are making lots of changes to the spec to finish documenting the current version of the protocol. Changes will be made to the spec by a simple PR process, with approvals by key stakeholders. Some refinements are still to happen, and testnet is expected to bring a few significant fixes/improvements. Most changes now are changes to the document, NOT changes to the protocol, at least not in a major way.

Until we launch, if something is missing, PR it in. If something is wrong, PR a fix. If something needs to be elaborated, PR in updates. What is in the top level of this repo, in master, is the spec, is the Filecoin Protocol. Nothing else matters (i.e., no other documents or issues contain “the protocol”).

New Proposals -> Drafts -> Spec

For anything that is not part of the currently specified systems (like ‘repair’, for example), the process we will use is:

  • (1) First, discuss the problem(s) and solution(s) in an issue
    • Or several issues, if the space is large and multithreaded enough.
    • Work out all the details required to make this proposal work.
  • (2) Write a draft with all the details.
    • When you feel like a solution is near, write up a draft document that contains all the details, and includes what changes would need to happen to the spec
    • E.g. “Add a System called X with …", or “Add a library called Y, …", or “Modify vm/state_tree to include …”
    • Place this document inside the src/drafts/ directory.
    • Anybody is welcome to contribute well-reasoned and detailed drafts.
    • (Note: these drafts will give way to FIPs in the future)
  • (3) Seek approval to merge this into the specification.
    • To seek approval, open an issue and discuss it.
    • If the draft is approved by the owners of the filecoin-spec, the changes to the spec will be made in a PR.
    • Once changes make it into the spec, remove the draft.

It is acceptable for a PR for a draft to stay open for quite a while, as thought and discussion on the topic happens. At some point, if the reviewers and the author feel that the current state of the draft is stable enough (though not ‘done’) then it should be merged into the repo. Further changes to the draft are additional PRs, which may generate more discussion. Comments on these drafts are welcome from anyone, but if you wish to be involved in the actual research process, you will need to devote very considerable time and energy to the process.

On merging

For anything in the drafts or notes folder, merge yourself after a review from a relevant person. For anything in the top level (canonical spec), @zixuanzh, @anorth, @whyrusleeping or @jbenet will merge after proper review.

Issues

Issues in the specs repo will be high signal. They will either be proposals, or issues directly relating to problems in the spec. More speculative research questions and discussion will happen in the research repo.

About this specification

TODO

FIPs - Filecoin Improvement Proposals

TODO

Contributing to the Filecoin spec

TODO

Change Log - Version History

v1.1 - 2019-10-30 - c3f6a6dd

  • Deals on chain
    • Storage Deals
    • Full StorageMarketActor logic:
      • client and miner balances: deposits, locking, charges, and withdrawals
      • collateral slashing
    • Full StorageMinerActor logic:
      • sector states, state transitions, state accounting, power accounting
      • DeclareFaults + RecoverSectors flow
      • CommitSector flow
      • SubmitElectionPost or SubmitSurprisePoSt flow
        • Sector proving, faults, recovery, and expiry
      • OnMissedSurprisePost flow
        • Fault sectors, drop power, expiry, and more
    • StoragePowerActor
      • power accounting based on StorageMinerActor state changes
      • Collaterals: deposit, locking, withdrawal
      • Slashing collaterals
    • Interactive-Post
      • StorageMinerActor: PrecommitSector and CommitSector
    • Surprise-Post
      • Challenge flow through CronActor -> StoragePowerActor -> StorageMiner
  • Virtual Machine
    • Extracted VM system out of blockchain
    • Addresses
    • Actors
      • Separation of code and state
    • Messages
      • Method invocation representation
    • Runtime
      • Slimmed down interface
      • Safer state Acquire, Release, Commit flow
      • Exit codes
      • Full invocation flow
      • Safer recursive context construction
      • Error levels and handling
      • Detecting and handling out of gas errors
    • Interpreter
      • ApplyMessage
      • {Deduct,Deposit} -> Transfer - safer
      • Gas accounting
    • VM system actors
      • InitActor basic flow, plug into Runtime
      • CronActor full flow, static registry
    • AccountActor basic flow
  • Data Transfer
    • Full Data Transfer flows
      • push, pull, 1-RTT pull
    • protocol, data structures, interface
    • diagrams
  • blockchain/ChainSync:
    • first version of ChainSync protocol description
    • Includes protocol state machine description
    • Network bootstrap – connectivity and state
    • Progressive Block Validation
    • Progressive Block Propagation
  • Other
    • Spec section status indicators
    • Changelog

v1.0 - 2019-10-07 - 583b1d06

  • Full spec reorganization
  • Tooling
    • added a build system to compile tools
    • added diagraming tools (dot, mermaid, etc)
    • added dependency installation
    • added Orient to calculate protocol parameters
  • Content
    • filecoin_nodes
      • types - an overview of different filecoin node types
      • repository - local data-structure storage
      • network interface - connecting to libp2p
      • clock - a wall clock
    • files & data
      • file - basic representation of data
      • piece - representation of data to store in filecoin
    • blockchain
      • blocks - basic blockchain data structures (block, tipset, chain, etc)
      • storage power consensus - basic algorithms and crypto artifacts for SPC
      • StoragePowerActor basics
    • token
      • skeleton of sections
    • storage mining
      • storage miner: module that controls and coordinates storage mining
      • sector: unit of storage, sealing, crypto artifacts, etc.
      • sector index: accounting sectors and metadata
      • storage proving: seals, posts, and more
    • market
      • deals: storage market deal basics
      • storage market: StorageMarketActor basics
    • orient
      • orient models for proofs and block sizes
    • libraries
      • filcrypto - sealing, PoRep, PoSt algorithms
      • ipld - cids, ipldstores
      • libp2p - host/node representation
      • ipfs - graphsync and bitswap
      • multiformats - multihash, multiaddr
    • diagrams
      • system overview
      • full protocol mermaid flow

pre v1.0

System Decomposition

What are Systems? How do they work?

Filecoin decouples and modularizes functionality into loosely-joined systems. Each system adds significant functionality, usually to achieve a set of important and tightly related goals.

For example, the Blockchain System provides structures like Block, Tipset, and Chain, and provides functionality like Block Sync, Block Propagation, Block Validation, Chain Selection, and Chain Access. This is separated from the Files, Pieces, Piece Preparation, and Data Transfer. Both of these systems are separated from the Markets, which provide Orders, Deals, Market Visibility, and Deal Settlement.

Why is System decoupling useful?

This decoupling is useful for:

  • Implementation Boundaries: it is possible to build implementations of Filecoin that only implement a subset of systems. This is especially useful for Implementation Diversity: we want many implementations of security critical systems (eg Blockchain), but do not need many implementations of Systems that can be decoupled.
  • Runtime Decoupling: system decoupling makes it easier to build and run Filecoin Nodes that isolate Systems into separate programs, and even separate physical computers.
  • Security Isolation: some systems require higher operational security than others. System decoupling allows implementations to meet their security and functionality needs. A good example of this is separating Blockchain processing from Data Transfer.
  • Scalability: systems and various use cases may drive different performance requirements for different operators. System decoupling makes it easier for operators to scale their deployments along system boundaries.

Filecoin Nodes don’t need all the systems

Filecoin Nodes vary significantly, and do not need all the systems. Most systems are only needed for a subset of use cases.

For example, the Blockchain System is required for synchronizing the chain, participating in secure consensus, storage mining, and chain validation. Many Filecoin Nodes do not need the chain and can perform their work by just fetching content from the latest StateTree, from a node they trust. Of course, such nodes place their trust in the node they fetch state from.

Note: Filecoin does not use the “full node” or “light client” terminology, in wide use in Bitcoin and other blockchain networks. In Filecoin, these terms are not well defined. It is best to define nodes in terms of their capabilities, and therefore, in terms of the Systems they run. For example:

  • Chain Verifier Node: Runs the Blockchain system. Can sync and validate the chain. Cannot mine or produce blocks.
  • Client Node: Runs the Blockchain, Market, and Data Transfer systems. Can sync and validate the chain. Cannot mine or produce blocks.
  • Retrieval Miner Node: Runs the Market and Data Transfer systems. Does not need the chain. Can make Retrieval Deals (Retrieval Provider side). Can send Clients data, and get paid for it.
  • Storage Miner Node: Runs the Blockchain, Storage Market, Storage Mining systems. Can sync and validate the chain. Can make Storage Deals (Storage Provider side). Can seal stored data into sectors. Can acquire storage consensus power. Can mine and produce blocks.

Separating Systems

How do we determine what functionality belongs in one system vs another?

Drawing boundaries between systems is the art of separating tightly related functionality from unrelated parts. In a sense, we seek to keep tightly integrated components in the same system, and away from other unrelated components. This is sometimes straightforward: the boundaries naturally spring from the data structures or functionality. For example, it is straightforward to observe that Clients and Miners negotiating a deal with each other is quite unrelated to VM Execution.

Sometimes this is harder, and it requires detangling, adding, or removing abstractions. For example, the StoragePowerActor and the StorageMarketActor were previously a single Actor. This coupled functionality across StorageDeal making, the StorageMarket, and markets in general with Storage Mining, Sector Sealing, PoSt Generation, and more. Detangling these two sets of related functionality required breaking the one actor apart into two.

Decomposing within a System

Systems themselves decompose into smaller subunits. These are sometimes called “subsystems” to avoid confusion with the much larger, first-class Systems. Subsystems themselves may break down further. The naming here is not strictly enforced, as these subdivisions are more related to protocol and implementation engineering concerns than to user capabilities.

Implementing Systems

System Requirements

In order to make it easier to decouple functionality into systems, the Filecoin Protocol assumes a set of functionality available to all systems. This functionality can be achieved by implementations in a variety of ways; implementations should take the guidance here as a recommendation (SHOULD).

All Systems, as defined in this document, require the following:

  • Repository:
    • Local IpldStore. Some amount of persistent local storage for data structures (small structured objects). Systems expect to be initialized with an IpldStore in which to store data structures they expect to persist across crashes.
    • User Configuration Values. A small amount of user-editable configuration values. These should be easy for end-users to access, view, and edit.
    • Local, Secure KeyStore. A facility used to generate and use cryptographic keys, which MUST remain secret to the Filecoin Node. Systems SHOULD NOT access the keys directly, and should instead do so through an abstraction (i.e., the KeyStore) which provides the ability to Encrypt, Decrypt, Sign, SigVerify, and more.
  • Local FileStore. Some amount of persistent local storage for files (large byte arrays). Systems expect to be initialized with a FileStore in which to store large files. Some systems (like Markets) may need to store and delete large volumes of smaller files (1MB - 10GB). Other systems (like Storage Mining) may need to store and delete large volumes of large files (1GB - 1TB).
  • Network. Most systems need access to the network, to be able to connect to their counterparts in other Filecoin Nodes. Systems expect to be initialized with a libp2p.Node on which they can mount their own protocols.
  • Clock. Some systems need access to current network time, some with low tolerance for drift. Systems expect to be initialized with a Clock from which to tell network time. Some systems (like Blockchain) require very little clock drift, and require secure time.

For this purpose, we use the FilecoinNode data structure, which is passed into all systems at initialization:

import repo "github.com/filecoin-project/specs/systems/filecoin_nodes/repository"
import filestore "github.com/filecoin-project/specs/systems/filecoin_files/file"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import message_pool "github.com/filecoin-project/specs/systems/filecoin_blockchain/message_pool"

type FilecoinNode struct {
    Node         libp2p.Node

    Repository   repo.Repository
    FileStore    filestore.FileStore
    Clock        clock.UTCClock

    MessagePool  message_pool.MessagePoolSubsystem
}
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import key_store "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/key_store"
import config "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/config"

type Repository struct {
    Config      config.Config
    KeyStore    key_store.KeyStore
    ChainStore  ipld.GraphStore
    StateStore  ipld.GraphStore
}

System Limitations

Further, Systems MUST abide by the following limitations:

  • Random crashes. A Filecoin Node may crash at any moment. Systems must be secure and consistent through crashes. This is primarily achieved by limiting the use of persistent state, persisting such state through Ipld data structures, and through the use of initialization routines that check state, and perhaps correct errors.
  • Isolation. Systems must communicate over well-defined, isolated interfaces. They must not build their critical functionality over a shared memory space. (Note: for performance, shared memory abstractions can be used to power IpldStore, FileStore, and libp2p, but the systems themselves should not require it). This is not just an operational concern; it also significantly simplifies the protocol and makes it easier to understand, analyze, debug, and change.
  • No direct access to host OS Filesystem or Disk. Systems cannot access disks directly – they do so over the FileStore and IpldStore abstractions. This is to provide a high degree of portability and flexibility for end-users, especially storage miners and clients of large amounts of data, which need to be able to easily replace how their Filecoin Nodes access local storage.
  • No direct access to host OS Network stack or TCP/IP. Systems cannot access the network directly – they do so over the libp2p library. There must not be any other kind of network access. This provides a high degree of portability across platforms and network protocols, enabling Filecoin Nodes (and all their critical systems) to run in a wide variety of settings, using all kinds of protocols (eg Bluetooth, LANs, etc).

Systems

Filecoin Nodes

Node Types

Node Interface

import repo "github.com/filecoin-project/specs/systems/filecoin_nodes/repository"
import filestore "github.com/filecoin-project/specs/systems/filecoin_files/file"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import message_pool "github.com/filecoin-project/specs/systems/filecoin_blockchain/message_pool"

type FilecoinNode struct {
    Node         libp2p.Node

    Repository   repo.Repository
    FileStore    filestore.FileStore
    Clock        clock.UTCClock

    MessagePool  message_pool.MessagePoolSubsystem
}

Examples

There are many kinds of Filecoin Nodes …

This section should contain:

  • what all nodes must have, and why
  • examples of using different systems

Chain Verifier Node

type ChainVerifierNode interface {
  FilecoinNode

  systems.Blockchain
}

Client Node

type ClientNode struct {
  FilecoinNode

  systems.Blockchain
  markets.StorageMarketClient
  markets.RetrievalMarketClient
  markets.MarketOrderBook
  markets.DataTransfers
}

Storage Miner Node

type StorageMinerNode interface {
  FilecoinNode

  systems.Blockchain
  systems.Mining
  markets.StorageMarketProvider
  markets.MarketOrderBook
  markets.DataTransfers
}

Retrieval Miner Node

type RetrievalMinerNode interface {
  FilecoinNode

  blockchain.Blockchain
  markets.RetrievalMarketProvider
  markets.MarketOrderBook
  markets.DataTransfers
}

Relayer Node

type RelayerNode interface {
  FilecoinNode

  blockchain.MessagePool
  markets.MarketOrderBook
}

Repository - Local Storage for Chain Data and Systems

The Filecoin node repository is an abstraction over the data that any functional Filecoin node needs to store locally in order to run correctly.

The repo is accessible to the node’s systems and subsystems, and acts as local storage compartmentalized from the node’s FileStore (for instance).

It stores the node’s keys, the IPLD data structures of stateful objects, and node configs.

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import key_store "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/key_store"
import config "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/config"

type Repository struct {
    Config      config.Config
    KeyStore    key_store.KeyStore
    ChainStore  ipld.GraphStore
    StateStore  ipld.GraphStore
}

Config - Local Storage for ConfigurationValues

Filecoin Node configuration

type ConfigKey string
type ConfigVal Bytes

type Config struct {
    Get(k ConfigKey) union {c ConfigVal, e error}
    Put(k ConfigKey, v ConfigVal) error

    Subconfig(k ConfigKey) Config
}
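A minimal, map-backed sketch of how the Config interface above might behave, with the union return collapsed to Go’s (value, error) idiom. This is an assumption for illustration, not the canonical implementation:

```go
package main

import "fmt"

// Map-backed sketch of the Config interface above. The "/"-separated
// key namespacing used by Subconfig is an illustrative assumption.
type Config struct {
	prefix string
	data   map[string][]byte
}

func NewConfig() *Config { return &Config{data: map[string][]byte{}} }

func (c *Config) Put(k string, v []byte) error {
	c.data[c.prefix+k] = v
	return nil
}

func (c *Config) Get(k string) ([]byte, error) {
	v, ok := c.data[c.prefix+k]
	if !ok {
		return nil, fmt.Errorf("key not found: %s", k)
	}
	return v, nil
}

// Subconfig returns a namespaced view over the same underlying data.
func (c *Config) Subconfig(k string) *Config {
	return &Config{prefix: c.prefix + k + "/", data: c.data}
}

func main() {
	cfg := NewConfig()
	sub := cfg.Subconfig("network")
	sub.Put("bootstrap", []byte("peer1"))
	v, _ := cfg.Get("network/bootstrap") // visible from the root view
	fmt.Println(string(v))
}
```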

Key Store

The Key Store is a fundamental abstraction in any full Filecoin node, used to store the keypairs associated with a given miner’s address and its distinct workers (should the miner choose to run multiple workers).

Node security depends in large part on keeping these keys secure. To that end we recommend keeping keys separate from any given subsystem, using a separate key store to sign requests as required by subsystems, and keeping keys not used as part of mining in cold storage.

import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import address "github.com/filecoin-project/go-address"

type KeyStore struct {
    MinerAddress  address.Address
    OwnerKey      filcrypto.VRFKeyPair
    WorkerKey     filcrypto.VRFKeyPair
}

Filecoin storage miners rely on three main components:

  • The miner address is uniquely assigned to a given storage miner actor upon calling registerMiner() in the Storage Power Consensus Subsystem. It is a unique identifier for a given storage miner to which its power and other keys will be associated.
  • The owner keypair is provided by the miner ahead of registration, and its public key is associated with the miner address. Block rewards and other payments are made to the ownerAddress.
  • The worker keypair can be chosen and changed by the miner; its public key is associated with the miner address. It is used to sign transactions and other messages. It must be a BLS keypair, given its use as part of the Verifiable Random Function.

While miner addresses are unique, multiple storage miner actors can share an owner public key or likewise a worker public key.

The process for changing the worker keypair on-chain (i.e. the workerKey associated with a storage miner) is specified in Storage Miner Actor. Note that this is a two-step process. First, a miner stages a change by sending a message to the chain. When received, the key change is staged to take effect after twice the randomness lookback parameter (in epochs), to prevent adaptive key selection attacks. Every time a worker key is queried, any pending change is lazily checked, and state is updated as needed.
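The two-step, lazily-applied key change can be sketched as follows. The RandomnessLookback value and all field names here are placeholder assumptions; see Storage Miner Actor for the canonical logic:

```go
package main

import "fmt"

// Placeholder value; the real randomness lookback is a protocol parameter.
const RandomnessLookback = 10 // epochs

type WorkerKeyChange struct {
	NewKey      string
	EffectiveAt int64 // epoch at which the change takes effect
}

type MinerState struct {
	WorkerKey     string
	PendingChange *WorkerKeyChange
}

// StageWorkerKeyChange records a change to occur at twice the randomness
// lookback, preventing adaptive key selection attacks.
func (m *MinerState) StageWorkerKeyChange(newKey string, currentEpoch int64) {
	m.PendingChange = &WorkerKeyChange{
		NewKey:      newKey,
		EffectiveAt: currentEpoch + 2*RandomnessLookback,
	}
}

// GetWorkerKey lazily applies a pending change when the key is queried.
func (m *MinerState) GetWorkerKey(currentEpoch int64) string {
	if m.PendingChange != nil && currentEpoch >= m.PendingChange.EffectiveAt {
		m.WorkerKey = m.PendingChange.NewKey
		m.PendingChange = nil
	}
	return m.WorkerKey
}

func main() {
	m := &MinerState{WorkerKey: "oldKey"}
	m.StageWorkerKeyChange("newKey", 100)
	fmt.Println(m.GetWorkerKey(110)) // before epoch 120: still oldKey
	fmt.Println(m.GetWorkerKey(120)) // at epoch 120: newKey applied
}
```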

TODO:

  • potential recommendations or clear disclaimers with regard to the consequences of failed key security

IpldStore - Local Storage for hash-linked data

// imported as ipld.Object

import cid "github.com/ipfs/go-cid"

type Object interface {
    CID() cid.Cid

    // Populate(v interface{}) error
}

type GraphStore struct {
    // Retrieves a serialized value from the store by CID. Returns the value and whether it was found.
    Get(c cid.Cid) (util.Bytes, bool)

    // Puts a serialized value in the store, returning the CID.
    Put(value util.Bytes) (c cid.Cid)
}

Filecoin data structures are stored in IPLD format, a data format akin to JSON but built for the storage, retrieval, and traversal of hash-linked data DAGs.

The Filecoin network relies primarily on two distinct IPLD GraphStores:

  • One ChainStore which stores the blockchain, including block headers, associated messages, etc.
  • One StateStore which stores the payload state from a given blockchain, or the stateTree resulting from all block messages in a given chain being applied to the genesis state by the Filecoin VM.

The ChainStore is downloaded by nodes from their peers during the bootstrapping phase of ChainSync - synchronizing the Blockchain and stored by the node thereafter. It is updated on every new block reception, or if the node syncs to a new best chain.

The StateStore is computed through the execution of all block messages in a given ChainStore and stored by the node thereafter. It is updated with every new incoming block’s processing by the VM Interpreter - Message Invocation (Outside VM), and referenced accordingly by new blocks produced atop it in the block header’s ParentState field.
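For illustration, here is a minimal in-memory sketch of the GraphStore interface above, using a hex-encoded SHA-256 digest as a stand-in for a real CID:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// In-memory sketch of the GraphStore interface. A real implementation
// would use proper CIDs and persistent, content-addressed storage.
type GraphStore struct {
	blocks map[string][]byte
}

func NewGraphStore() *GraphStore {
	return &GraphStore{blocks: map[string][]byte{}}
}

// Put stores a serialized value, returning its content address.
func (g *GraphStore) Put(value []byte) string {
	sum := sha256.Sum256(value)
	c := hex.EncodeToString(sum[:])
	g.blocks[c] = value
	return c
}

// Get retrieves a serialized value by its content address,
// returning the value and whether it was found.
func (g *GraphStore) Get(c string) ([]byte, bool) {
	v, ok := g.blocks[c]
	return v, ok
}

func main() {
	gs := NewGraphStore()
	c := gs.Put([]byte("block header bytes"))
	v, found := gs.Get(c)
	fmt.Println(found, string(v))
}
```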

TODO:

  • What is IPLD
    • hash linked data
    • from IPFS
  • Why is it relevant to filecoin
    • all network datastructures are definitively IPLD
    • all local datastructures can be IPLD
  • What is an IpldStore
    • local storage of dags
  • How to use IpldStores in filecoin
    • pass it around
  • One ipldstore or many
    • temporary caches
    • intermediately computed state
  • Garbage Collection

Network Interface

import libp2p "github.com/filecoin-project/specs/libraries/libp2p"

type Node libp2p.Node

Filecoin nodes use the libp2p protocol stack for peer discovery, peer routing, message multicast, and so on. Libp2p is a set of modular protocols common to the peer-to-peer networking stack. Nodes open connections with one another and mount different protocols or streams over the same connection. In the initial handshake, nodes exchange the protocols each of them supports, and all Filecoin-related protocols will be mounted under /fil/... protocol identifiers.

Here is the list of libp2p protocols used by Filecoin.

  • Graphsync:
    • Graphsync is used to transfer blockchain and user data
    • Draft spec
    • No filecoin specific modifications to the protocol id
  • Gossipsub:
    • Block headers and messages are broadcast through a Gossip PubSub protocol, where nodes can subscribe to topics for blockchain data and receive messages on those topics. When receiving a message on a topic, nodes process the message and forward it to their peers subscribed to the same topic.
    • Spec is here
    • No Filecoin-specific modifications to the protocol id. However, the topic identifiers MUST be of the form fil/blocks/<network-name> and fil/msgs/<network-name>
  • KademliaDHT:
    • Kademlia DHT is a distributed hash table with a logarithmic bound on the maximum number of lookups for a particular node. Kad DHT is used primarily for peer routing as well as peer discovery in the Filecoin protocol.
    • Spec TODO reference implementation
    • The protocol id must be of the form fil/<network-name>/kad/1.0.0
  • Bootstrap List:
    • Bootstrap is a list of nodes that a new node attempts to connect to upon joining the network. The list of bootstrap nodes and their addresses is defined by the users.
  • Peer Exchange:
    • Peer Exchange is a discovery protocol enabling peers to create and issue queries for desired peers against their existing peers
    • spec TODO
    • No Filecoin specific modifications to the protocol id.
  • DNSDiscovery: Design and spec needed before implementing
  • HTTPDiscovery: Design and spec needed before implementing
  • Hello:
    • The Hello protocol handles new connections to Filecoin nodes to facilitate discovery
    • the protocol string is fil/hello/1.0.0.

Hello Spec

Protocol Flow

fil/hello is a Filecoin-specific protocol built on the libp2p stack. It consists of two conceptual procedures: hello_connect and hello_listen.

hello_listen: on new stream -> read peer hello msg from stream -> write latency message to stream -> close stream

hello_connect: on connected -> open stream -> write own hello msg to stream -> read peer latency msg from stream -> close stream

where stream and connection operations are all standard libp2p operations. Nodes running the Hello Protocol should consume the incoming Hello Message and use it to help manage peers and sync the chain.

Messages
import cid "github.com/ipfs/go-cid"

// HelloMessage shares information about a peer's chain head
type HelloMessage struct {
    HeaviestTipSet        [cid.Cid]
    HeaviestTipSetWeight  BigInt
    HeaviestTipSetHeight  Int
    GenesisHash           cid.Cid
}

// LatencyMessage shares information about a peer's network latency
type LatencyMessage struct {
    // Measured in unix nanoseconds 
    TArrival  Int
    // Measured in unix nanoseconds
    TSent     Int
}

When writing the HelloMessage to the stream the peer must inspect its current head to provide accurate information. When writing the LatencyMessage to the stream the peer should set TArrival immediately upon receipt and TSent immediately before writing the message to the stream.
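The spec does not prescribe how a node uses the latency message, but an NTP-style estimate is one natural use: subtracting the peer's processing time (TSent - TArrival) from the locally observed round trip. A minimal sketch, where the EstimateRTT helper and plain int64 timestamps are illustrative, not part of the protocol:

```go
package main

import "fmt"

// LatencyMessage mirrors the spec struct: times in unix nanoseconds.
type LatencyMessage struct {
	TArrival int64 // when the peer received our HelloMessage
	TSent    int64 // when the peer sent this LatencyMessage back
}

// EstimateRTT estimates round-trip latency, excluding the peer's
// processing time between receiving our hello and replying.
// t0: when we sent the hello; t3: when we received the reply.
func EstimateRTT(t0, t3 int64, lm LatencyMessage) int64 {
	processing := lm.TSent - lm.TArrival
	return (t3 - t0) - processing
}

func main() {
	lm := LatencyMessage{TArrival: 1_000_050, TSent: 1_000_070}
	rtt := EstimateRTT(1_000_000, 1_000_130, lm)
	fmt.Println(rtt) // (130) - (70 - 50) = 110 ns
}
```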

Clock

type UnixTime int64  // unix timestamp

// UTCClock is a normal, system clock reporting UTC time.
// It should be kept in sync, with drift less than 1 second.
type UTCClock struct {
    NowUTCUnix() UnixTime
}

// ChainEpoch represents a round of a blockchain protocol.
type ChainEpoch int64

// ChainEpochClock is a clock that represents epochs of the protocol.
type ChainEpochClock struct {
    // GenesisTime is the time of the first block. EpochClock counts
    // up from there.
    GenesisTime              UnixTime

    EpochAtTime(t UnixTime)  ChainEpoch
}
package clock

import "time"

// UTCSyncPeriod notes how often to sync the UTC clock with an authoritative
// source, such as NTP, or a very precise hardware clock.
var UTCSyncPeriod = time.Hour

// EpochDuration is a constant that represents the duration in seconds
// of a blockchain epoch.
var EpochDuration = UnixTime(15)

func (_ *UTCClock_I) NowUTCUnix() UnixTime {
	return UnixTime(time.Now().Unix())
}

// EpochAtTime returns the ChainEpoch corresponding to time `t`.
// It first subtracts GenesisTime, then divides by EpochDuration
// and returns the resulting number of epochs.
func (c *ChainEpochClock_I) EpochAtTime(t UnixTime) ChainEpoch {
	difference := t - c.GenesisTime()
	epochs := difference / EpochDuration
	return ChainEpoch(epochs)
}

Filecoin assumes weak clock synchrony amongst participants in the system. That is, the system relies on participants having access to a globally synchronized clock (tolerating some bounded drift).

Filecoin relies on this system clock in order to secure consensus. Specifically, the clock is necessary to support validation rules that prevent block producers from mining blocks with a future timestamp, and from running leader elections more frequently than the protocol allows.

Clock uses

The Filecoin system clock is used:

  • by syncing nodes to validate that incoming blocks were mined in the appropriate epoch given their timestamp (see Block Validation). This is possible because the system clock maps all times to a unique epoch number totally determined by the start time in the genesis block.
  • by syncing nodes to drop blocks coming from a future epoch
  • by mining nodes to maintain protocol liveness by allowing participants to try leader election in the next round if no one has produced a block in the current round (see Storage Power Consensus).

In order to allow miners to do the above, the system clock must:

  1. Have low enough clock drift (sub 1s) relative to other nodes so that blocks are not mined in epochs considered future epochs from the perspective of other nodes (those blocks should not be validated until the proper epoch/time as per validation rules).
  2. Set epoch number on node initialization equal to epoch = Floor[(current_time - genesis_time) / epoch_time]

It is expected that other subsystems will register to a NewRound() event from the clock subsystem.
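The epoch-at-initialization formula above can be checked with a small worked example. EpochAt is an illustrative helper; the EpochDuration of 15 seconds matches the spec constant, and integer division floors for the non-negative values used here:

```go
package main

import "fmt"

const EpochDuration = 15 // seconds per epoch, per the spec constant

// EpochAt implements epoch = Floor[(currentTime - genesisTime) / EpochDuration].
func EpochAt(currentTime, genesisTime int64) int64 {
	return (currentTime - genesisTime) / EpochDuration
}

func main() {
	genesis := int64(1_600_000_000)
	// 46 seconds after genesis falls in epoch 3 (epochs 0..2 cover 0-44s).
	fmt.Println(EpochAt(genesis+46, genesis)) // 3
}
```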

Clock Requirements

Clocks used as part of the Filecoin protocol should be kept in sync, with drift less than 1 second so as to enable appropriate validation.

Computer-grade clock crystals can be expected to have drift rates on the order of 1ppm (i.e. 1 microsecond every second, or about 0.6 seconds a week); therefore, in order to respect the above requirement,

  • clients SHOULD query an NTP server (pool.ntp.org is recommended) on an hourly basis to adjust clock skew.
  • clients MAY consider using cesium clocks instead for accurate synchrony within larger mining operations
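To see why hourly syncing suffices, a quick sketch of the drift arithmetic. SkewOver is a hypothetical helper, assuming a constant drift rate:

```go
package main

import "fmt"

// SkewOver returns the worst-case clock skew, in seconds, accumulated
// over the given interval at a constant drift rate in parts per million.
func SkewOver(intervalSec, driftPPM float64) float64 {
	return driftPPM * 1e-6 * intervalSec
}

func main() {
	// One hour between NTP syncs at 1 ppm: a few milliseconds of skew,
	// comfortably below the sub-1s requirement.
	fmt.Println(SkewOver(3600, 1))
	// A week without syncing: roughly 0.6 seconds.
	fmt.Println(SkewOver(7*24*3600, 1))
}
```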

Mining operations have a strong incentive to prevent their clock from drifting ahead more than one epoch to keep their block submissions from being rejected. Likewise they have an incentive to prevent their clocks from drifting behind more than one epoch to avoid partitioning themselves off from the synchronized nodes in the network.

Future work

If either of the above metrics show significant network skew over time, future versions of Filecoin may include potential timestamp/epoch correction periods at regular intervals.

When recovering from exceptional chain-halting outages (for example, all implementations panicking on a given block), the network can potentially opt for per-outage “dead zone” rules banning the authoring of blocks during the outage epochs, to prevent attack vectors related to unmined epochs during chain restart.

Future versions of the Filecoin protocol may use Verifiable Delay Functions (VDFs) to strongly enforce block time and fulfill this leader election requirement; we choose to explicitly assume clock synchrony until hardware VDF security has been proven more extensively.

Files & Data

Filecoin’s primary aim is to store clients’ Files and Data. This section details data structures and tooling related to working with files, chunking, encoding, graph representations, Pieces, storage abstractions, and more.

File

// Path is an opaque locator for a file (e.g. in a unix-style filesystem).
type Path string

// File is a variable length data container.
// The File interface is modeled after a unix-style file, but abstracts the
// underlying storage system.
type File interface {
    Path()   Path
    Size()   int
    Close()  error

    // Read reads from File into buf, starting at offset, and for size bytes.
    Read(offset int, size int, buf Bytes) struct {size int, e error}

    // Write writes from buf into File, starting at offset, and for size bytes.
    Write(offset int, size int, buf Bytes) struct {size int, e error}
}
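For illustration, a minimal in-memory implementation of the File interface might look like the following. MemFile is hypothetical; a real node would back files with a FileStore:

```go
package main

import (
	"errors"
	"fmt"
)

// MemFile is a minimal in-memory implementation of the spec's File
// interface, for illustration only.
type MemFile struct {
	path string
	data []byte
}

func (f *MemFile) Path() string { return f.path }
func (f *MemFile) Size() int    { return len(f.data) }
func (f *MemFile) Close() error { return nil }

// Read copies up to size bytes starting at offset into buf.
func (f *MemFile) Read(offset, size int, buf []byte) (int, error) {
	if offset < 0 || offset > len(f.data) {
		return 0, errors.New("offset out of range")
	}
	n := copy(buf[:size], f.data[offset:])
	return n, nil
}

// Write copies size bytes from buf into the file at offset, growing the
// file as needed.
func (f *MemFile) Write(offset, size int, buf []byte) (int, error) {
	if offset < 0 {
		return 0, errors.New("negative offset")
	}
	if need := offset + size; need > len(f.data) {
		grown := make([]byte, need)
		copy(grown, f.data)
		f.data = grown
	}
	n := copy(f.data[offset:offset+size], buf[:size])
	return n, nil
}

func main() {
	f := &MemFile{path: "/tmp/example"}
	f.Write(0, 5, []byte("hello"))
	buf := make([]byte, 5)
	n, _ := f.Read(0, 5, buf)
	fmt.Println(n, string(buf)) // 5 hello
}
```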

FileStore - Local Storage for Files

The FileStore is an abstraction used to refer to any underlying system or device that Filecoin will store its data to. It is based on Unix filesystem semantics, and includes the notion of Paths. This abstraction is here in order to make sure Filecoin implementations make it easy for end-users to replace the underlying storage system with whatever suits their needs. The simplest version of FileStore is just the host operating system’s file system.

// FileStore is an object that can store and retrieve files by path.
type FileStore struct {
    Open(p Path)           union {f File, e error}
    Create(p Path)         union {f File, e error}
    Store(p Path, f File)  error
    Delete(p Path)         error

    // maybe add:
    // Copy(SrcPath, DstPath)
}
Varying user needs

Filecoin user needs vary significantly, and many users – especially miners – will implement complex storage architectures underneath and around Filecoin. The FileStore abstraction is here to make these varying needs easy to satisfy. All file and sector local data storage in the Filecoin Protocol is defined in terms of this FileStore interface, which makes implementations easy to swap, and lets end-users swap in their system of choice.

Implementation examples

The FileStore interface may be implemented by many kinds of backing data storage systems. For example:

  • The host Operating System file system
  • Any Unix/Posix file system
  • RAID-backed file systems
  • Networked or distributed file systems (NFS, HDFS, etc)
  • IPFS
  • Databases
  • NAS systems
  • Raw serial or block devices
  • Raw hard drives (hdd sectors, etc)

Implementations SHOULD implement support for the host OS file system. Implementations MAY implement support for other storage systems.

Piece - a part of a file

A Piece is an object that represents a whole or part of a File, and is used by Clients and Miners in Deals. Clients hire Miners to store Pieces.

The piece data structure is designed for proving storage of arbitrary IPLD graphs and client data. This diagram shows the detailed composition of a piece and its proving tree, including both full and bandwidth-optimized piece data structures.

Pieces, Proving Trees, and Piece Data Structures
import abi "github.com/filecoin-project/specs-actors/actors/abi"

// PieceInfo is an object that describes details about a piece, and allows
// decoupling storage of this information from the piece itself.
type PieceInfo struct {
    ID    PieceID
    Size  abi.PieceSize
    // TODO: store which algorithms were used to construct this piece.
}

// Piece represents the basic unit of tradeable data in Filecoin. Clients
// break files and data up into Pieces, maybe apply some transformations,
// and then hire Miners to store the Pieces.
//
// The kinds of transformations that may occur include erasure coding,
// encryption, and more.
//
// Note: pieces are well formed.
type Piece struct {
    Info       PieceInfo

    // tree is the internal representation of Piece. It is a tree
    // formed according to a sequence of algorithms, which make the
    // piece able to be verified.
    tree       PieceTree

    // Payload is the user's data.
    Payload()  Bytes

    // Data returns the serialized representation of the Piece.
    // It includes the payload data, and intermediate tree objects,
    // formed according to relevant storage algorithms.
    Data()     Bytes
}

// // LocalPieceRef is an object used to refer to pieces in local storage.
// // This is used by subsystems to store and locate pieces.
// type LocalPieceRef struct {
//   ID   PieceID
//   Path file.Path
// }

// PieceTree is a data structure used to form pieces. The algorithms involved
// in the storage proofs determine the shape of PieceTree and how it must be
// constructed.
//
// Usually, a node in PieceTree will include either Children or Data, but not
// both.
//
// TODO: move this into filproofs -- use a tree from there, as that's where
// the algorithms are defined. Or keep this as an interface, met by others.
type PieceTree struct {
    Children  [PieceTree]
    Data      Bytes
}
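As a sketch of how leaf Data relates to the user's Payload, the following walks a PieceTree depth-first and concatenates leaf data. The actual tree shape and traversal order are determined by the proof algorithms, so this is only an illustration:

```go
package main

import "fmt"

// PieceTree mirrors the spec structure: an interior node has Children,
// a leaf carries Data.
type PieceTree struct {
	Children []*PieceTree
	Data     []byte
}

// Payload reassembles the user's data by concatenating leaf Data in
// depth-first, left-to-right order. The real ordering depends on the
// proof algorithm that shaped the tree.
func (t *PieceTree) Payload() []byte {
	if len(t.Children) == 0 {
		return t.Data
	}
	var out []byte
	for _, c := range t.Children {
		out = append(out, c.Payload()...)
	}
	return out
}

func main() {
	tree := &PieceTree{Children: []*PieceTree{
		{Data: []byte("hello ")},
		{Children: []*PieceTree{{Data: []byte("wor")}, {Data: []byte("ld")}}},
	}}
	fmt.Println(string(tree.Payload())) // hello world
}
```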

PieceStore - storing and indexing pieces

A PieceStore is an object that can store and retrieve pieces from some local storage. The PieceStore additionally keeps an index of pieces.

import ipld "github.com/filecoin-project/specs/libraries/ipld"

type PieceID UVarint

// PieceStore is an object that stores pieces into some local storage.
// it is internally backed by an IpldStore.
type PieceStore struct {
    Store              ipld.GraphStore
    Index              {PieceID: Piece}

    Get(i PieceID)     struct {p Piece, e error}
    Put(p Piece)       error
    Delete(i PieceID)  error
}
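A map-backed sketch of the PieceStore interface. MemPieceStore and its simplified Piece type are hypothetical; the spec's version is backed by an IpldStore and indexed by PieceID:

```go
package main

import (
	"errors"
	"fmt"
)

type PieceID uint64

// Piece is a simplified stand-in for the spec's Piece type.
type Piece struct {
	ID      PieceID
	Payload []byte
}

// MemPieceStore keeps an in-memory index of pieces by PieceID.
type MemPieceStore struct {
	index map[PieceID]Piece
}

func NewMemPieceStore() *MemPieceStore {
	return &MemPieceStore{index: make(map[PieceID]Piece)}
}

func (s *MemPieceStore) Put(p Piece) error {
	s.index[p.ID] = p
	return nil
}

func (s *MemPieceStore) Get(i PieceID) (Piece, error) {
	p, ok := s.index[i]
	if !ok {
		return Piece{}, errors.New("piece not found")
	}
	return p, nil
}

func (s *MemPieceStore) Delete(i PieceID) error {
	delete(s.index, i)
	return nil
}

func main() {
	store := NewMemPieceStore()
	store.Put(Piece{ID: 1, Payload: []byte("data")})
	p, err := store.Get(1)
	fmt.Println(string(p.Payload), err) // data <nil>
}
```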

Data Transfer in Filecoin

Data Transfer is a system for transferring all or part of a Piece across the network when a deal is made.

Modules

This diagram shows how Data Transfer and its modules fit into the picture with the Storage and Retrieval Markets. In particular, note how the Data Transfer Request Validators from the markets are plugged into the Data Transfer module, but their code belongs in the Markets system.

Data Transfer - Push Flow

Terminology

  • Push Request: A request to send data to the other party
  • Pull Request: A request to have the other party send data
  • Requestor: The party that initiates the data transfer request (whether Push or Pull)
  • Responder: The party that receives the data transfer request
  • Data Transfer Voucher: A wrapper around storage or retrieval data that can identify and validate the transfer request to the other party
  • Request Validator: The data transfer module only initiates a transfer when the responder can validate that the request is tied directly to either an existing storage deal or retrieval deal. Validation is not performed by the data transfer module itself. Instead, a request validator inspects the data transfer voucher to determine whether to respond to the request.
  • Scheduler: Once a request is negotiated and validated, actual transfer is managed by a scheduler on both sides. The scheduler is part of the data transfer module but is isolated from the negotiation process. It has access to an underlying verifiable transport protocol and uses it to send data and track progress.
  • Subscriber: An external component that monitors progress of a data transfer by subscribing to data transfer events, such as progress or completion.
  • GraphSync: The default underlying transfer protocol used by the Scheduler. The full graphsync specification can be found at https://github.com/ipld/specs/blob/master/block-layer/graphsync/graphsync.md

Request Phases

There are two basic phases to any data transfer:

  1. Negotiation - the requestor and responder agree to the transfer by validating with the data transfer voucher
  2. Transfer - Once the negotiation phase is complete, the data is actually transferred. The default protocol used to do the transfer is Graphsync.

Note that the Negotiation and Transfer stages can occur in separate round trips, or potentially the same round trip, where the requesting party implicitly agrees by sending the request, and the responding party can agree and immediately send or receive data.

Example Flows

Push Flow
Data Transfer - Push Flow
  1. A requestor initiates a Push transfer when it wants to send data to another party.
  2. The requestor’s data transfer module will send a push request to the responder along with the data transfer voucher. It also puts the data transfer in the scheduler queue, meaning it expects the responder to initiate a transfer once the request is verified.
  3. The responder’s data transfer module validates the data transfer request via the Validator provided as a dependency by the responder
  4. The responder’s data transfer module schedules the transfer
  5. The responder makes a GraphSync request for the data
  6. The requestor receives the graphsync request, verifies it’s in the scheduler and begins sending data
  7. The responder receives data and can produce an indication of progress
  8. The responder completes receiving data, and notifies any listeners

The push flow is ideal for storage deals, where the client initiates the push once it verifies that the deal is signed and on chain.

Pull Flow
Data Transfer - Pull Flow
  1. A requestor initiates a Pull transfer when it wants to receive data from another party.
  2. The requestor’s data transfer module will send a pull request to the responder along with the data transfer voucher.
  3. The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder
  4. The responder’s data transfer module schedules the transfer (meaning it is expecting the requestor to initiate the actual transfer)
  5. The responder’s data transfer module sends a response to the requestor saying it has accepted the transfer and is waiting for the requestor to initiate the transfer
  6. The requestor schedules the data transfer
  7. The requestor makes a GraphSync request for the data
  8. The responder receives the graphsync request, verifies it’s in the scheduler and begins sending data
  9. The requestor receives data and can produce an indication of progress
  10. The requestor completes receiving data, and notifies any listeners

The pull flow is ideal for retrieval deals, where the client initiates the pull when the deal is agreed upon.

Alternate Pull Flow - Single Round Trip

Data Transfer - Single Round Trip Pull Flow
  1. A requestor initiates a Pull transfer when it wants to receive data from another party.
  2. The requestor’s data transfer module schedules the data transfer
  3. The requestor makes a Graphsync request to the responder with a data transfer request
  4. The responder receives the graphsync request, and forwards the data transfer request to the data transfer module
  5. The requestor’s data transfer module will send a pull request to the responder along with the data transfer voucher.
  6. The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder
  7. The responder’s data transfer module schedules the transfer
  8. The responder sends a graphsync response along with a data transfer accepted response piggybacked
  9. The requestor receives data and can produce an indication of progress
  10. The requestor completes receiving data, and notifies any listeners

Protocol

A data transfer CAN be negotiated over the network via the Data Transfer Protocol, a Libp2p protocol type

A Pull request expects a response. The requestor does not initiate the transfer until they know the request is accepted.

The responder should send a response to a push request as well, so the requestor can release resources if the request is not accepted. However, if the responder accepts the request, they can immediately initiate the transfer.

Using the Data Transfer Protocol as an independent libp2p communication mechanism is not a hard requirement – as long as both parties have an implementation of the Data Transfer Subsystem that can talk to the other, any transport mechanism (including offline mechanisms) is acceptable.

Data Structures

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import cid "github.com/ipfs/go-cid"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import peer "github.com/libp2p/go-libp2p-core/peer"

type StorageDeal struct {}
type RetrievalDeal struct {}

// A DataTransferVoucher is used to validate
// a data transfer request against the underlying storage or retrieval deal
// that precipitated it
type DataTransferVoucher union {
    StorageDealVoucher
    RetrievalDealVoucher
}

type StorageDealVoucher struct {
    deal StorageDeal
}

type RetrievalDealVoucher struct {
    deal RetrievalDeal
}

type Ongoing struct {}
type Paused struct {}
type Completed struct {}
type Failed struct {}
type ChannelNotFoundError struct {}

type DataTransferStatus union {
    Ongoing
    Paused
    Completed
    Failed
    ChannelNotFoundError
}

type TransferID UInt

type ChannelID struct {
    to peer.ID
    id TransferID
}

// All immutable data for a channel
type DataTransferChannel struct {
    // an identifier for this channel shared by request and responder, set by requestor through protocol
    transferID  TransferID
    // base CID for the piece being transferred
    PieceRef    cid.Cid
    // portion of Piece to return, specified by an IPLD selector
    Selector    ipld.Selector
    // used to verify this channel
    voucher     DataTransferVoucher
    // the party that is sending the data (not who initiated the request)
    sender      peer.ID
    // the party that is receiving the data (not who initiated the request)
    recipient   peer.ID
    // expected amount of data to be transferred
    totalSize   UVarint
}

// DataTransferState is immutable channel data plus mutable state
type DataTransferState struct @(mutable) {
    DataTransferChannel
    // total bytes sent from this node (0 if receiver)
    sent                 UVarint
    // total bytes received by this node (0 if sender)
    received             UVarint
}

type Open struct {
    Initiator peer.ID
}

type SendData struct {
    BytesToSend UInt
}

type Progress struct {
    BytesSent UInt
}

type Pause struct {
    Initiator peer.ID
}

type Error struct {
    ErrorMsg string
}

type Complete struct {}

type DataTransferEvent union {
    Open
    SendData
    Progress
    Pause
    Error
    Complete
}

type DataTransferSubscriber struct {
    OnEvent(event DataTransferEvent, channelState DataTransferState)
}

// RequestValidator is an interface implemented by the client of the data transfer module to validate requests
type RequestValidator struct {
    ValidatePush(
        sender    peer.ID
        voucher   DataTransferVoucher
        PieceRef  cid.Cid
        Selector  ipld.Selector
    )
    ValidatePull(
        receiver  peer.ID
        voucher   DataTransferVoucher
        PieceRef  cid.Cid
        Selector  ipld.Selector
    )
    ValidateIntermediate(
        otherPeer  peer.ID
        voucher    DataTransferVoucher
        PieceRef   cid.Cid
        Selector   ipld.Selector
    )
}

type DataTransferSubsystem struct @(mutable) {
    host              libp2p.Node
    dataTransfers     {ChannelID: DataTransferState}
    requestValidator  RequestValidator
    pieceStore        piece.PieceStore

    // open a data transfer that will send data to the recipient peer and
    // transfer parts of the piece that match the selector
    OpenPushDataChannel(
        to        peer.ID
        voucher   DataTransferVoucher
        PieceRef  cid.Cid
        Selector  ipld.Selector
    ) ChannelID

    // open a data transfer that will request data from the sending peer and
    // transfer parts of the piece that match the selector
    OpenPullDataChannel(
        to        peer.ID
        voucher   DataTransferVoucher
        PieceRef  cid.Cid
        Selector  ipld.Selector
    ) ChannelID

    // close an open channel (effectively a cancel)
    CloseDataTransferChannel(x ChannelID)

    // get status of a transfer
    TransferChannelStatus(x ChannelID) DataTransferStatus

    // pause an ongoing channel
    PauseChannel(x ChannelID)

    // resume an ongoing channel
    ResumeChannel(x ChannelID)

    // send an additional voucher for an in progress request
    SendIntermediateVoucher(x ChannelID, voucher DataTransferVoucher)

    // get notified when certain types of events happen
    SubscribeToEvents(subscriber DataTransferSubscriber)

    // get all in progress transfers
    InProgressChannels() {ChannelID: DataTransferState}
}

Data Formats and Serialization

Filecoin seeks to use as few data formats as needed, with well-specified serialization rules, to improve protocol security through simplicity and to enable interoperability amongst implementations of the Filecoin protocol.

Read more on design considerations here for CBOR-usage and here for int types in Filecoin.

Data Formats

Filecoin in-memory data types are mostly straightforward. Implementations should support two integer types: Int (meaning native 64-bit integer) and BigInt (meaning arbitrary length), and should avoid dealing with floating-point numbers to minimize interoperability issues across programming languages and implementations.

You can also read more on data formats as part of randomness generation in the Filecoin protocol.

Serialization

Data Serialization in Filecoin ensures a consistent format for serializing in-memory data for transfer in-flight and in-storage. Serialization is critical to protocol security and interoperability across implementations of the Filecoin protocol, enabling consistent state updates across Filecoin nodes.

All data structures in Filecoin are CBOR-tuple encoded. That is, any data structures used in the Filecoin system (structs in this spec) should be serialized as CBOR-arrays with items corresponding to the data structure fields in their order of declaration.

You can find the encoding structure for major data types in CBOR here.

For illustration, an in-memory map would be represented as a CBOR-array of the keys and values listed in some pre-determined order. A near-term update to the serialization format will involve tagging fields appropriately to ensure appropriate serialization/deserialization as the protocol evolves.

VM - Virtual Machine

import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"

// VM is the object that controls execution.
// It is a stateless, pure function. It uses no local storage.
//
// TODO: make it just a function: VMExec(...) ?
type VM struct {
    // Execute computes and returns outTree, a new StateTree which is the
    // application of msgs to inTree.
    //
    // *Important:* Execute is intended to be a pure function, with no side-effects.
    // however, storage of the new parts of the computed outTree may exist in
    // local storage.
    //
    // *TODO:* define whether this should take 0, 1, or 2 IpldStores:
    // - (): storage of IPLD datastructures is assumed implicit
    // - (store): get and put to same IpldStore
    // - (inStore, outStore): get from inStore, put new structures into outStore
    //
    // This decision impacts callers, and potentially impacts how we reason about
    // local storage, and intermediate storage. It is definitely the case that
    // implementations may want to operate on this differently, depending on
    // how their IpldStores work.
    Execute(inTree st.StateTree, msgs [msg.UnsignedMessage]) union {outTree st.StateTree, err error}
}
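The "pure function" intent of Execute can be sketched with a toy state table and transfer messages. The Message type and balance map here are stand-ins for the real UnsignedMessage and StateTree; the point is that the input state is never mutated and a new state is returned:

```go
package main

import "fmt"

// Message is a toy stand-in for a value-transfer message.
type Message struct {
	From, To string
	Value    int64
}

// Execute applies msgs to a copy of the balance table `in` and returns
// the new table, leaving `in` untouched (no side effects).
func Execute(in map[string]int64, msgs []Message) (map[string]int64, error) {
	out := make(map[string]int64, len(in))
	for k, v := range in {
		out[k] = v // copy, so `in` is never mutated
	}
	for _, m := range msgs {
		if out[m.From] < m.Value {
			return nil, fmt.Errorf("insufficient balance for %s", m.From)
		}
		out[m.From] -= m.Value
		out[m.To] += m.Value
	}
	return out, nil
}

func main() {
	in := map[string]int64{"alice": 100, "bob": 0}
	out, _ := Execute(in, []Message{{From: "alice", To: "bob", Value: 40}})
	fmt.Println(in["alice"], out["alice"], out["bob"]) // 100 60 40
}
```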

VM Actor Interface

// This contains actor things that are _outside_ of VM execution.
// The VM uses this to execute actors.

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import actor "github.com/filecoin-project/specs-actors/actors"
import cid "github.com/ipfs/go-cid"

// CallSeqNum is an invocation (Call) sequence (Seq) number (Num).
// This is a value used for securing against replay attacks:
// each AccountActor (user) invocation must have a unique CallSeqNum
// value. The sequentiality of the numbers is used to make it
// easy to verify, and to order messages.
//
// Q&A
// - > Does it have to be sequential?
//   No, a random nonce could work against replay attacks, but
//   making it sequential makes it much easier to verify.
// - > Can it be used to order events?
//   Yes, a user may submit N separate messages with increasing
//   sequence number, causing them to execute in order.
//
type CallSeqNum int64

// Actor is a base computation object in the Filecoin VM. Similar
// to Actors in the Actor Model (programming), or Objects in Object-
// Oriented Programming, or Ethereum Contracts in the EVM.
//
// ActorState represents the on-chain storage all actors keep.
type ActorState struct {
    // Identifies the code this actor executes.
    CodeID      abi.ActorCodeID
    // CID of the root of optional actor-specific sub-state.
    State       actor.ActorSubstateCID
    // Balance of tokens held by this actor.
    Balance     abi.TokenAmount
    // Expected sequence number of the next message sent by this actor.
    // Initially zero, incremented when an account actor originates a top-level message.
// Always zero for other actors.
    CallSeqNum
}

type ActorSystemStateCID cid.Cid

// ActorState represents the on-chain storage actors keep. This type is a
// union of concrete types, for each of the Actors:
// - InitActor
// - CronActor
// - AccountActor
// - PaymentChannelActor
// - StoragePowerActor
// - StorageMinerActor
// - StorageMarketActor
//
// TODO: move this into a directory inside the VM that patches in all
// the actors from across the system. this will be where we declare/mount
// all actors in the VM.
// type ActorState union {
//     Init struct {
//         AddressMap  {addr.Address: ActorID}
//         NextID      ActorID
//     }
// }
package actor

import (
	util "github.com/filecoin-project/specs/util"
	cid "github.com/ipfs/go-cid"
)

var IMPL_FINISH = util.IMPL_FINISH
var IMPL_TODO = util.IMPL_TODO
var TODO = util.TODO

type Serialization = util.Serialization

func (st *ActorState_I) CID() cid.Cid {
	panic("TODO")
}
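The replay protection described for CallSeqNum above can be sketched as follows. AccountState and ApplyMessage are illustrative helpers; real validation happens during message application in the VM:

```go
package main

import (
	"errors"
	"fmt"
)

// AccountState holds the next expected call sequence number, as in the
// spec's ActorState.
type AccountState struct {
	CallSeqNum int64
}

// ApplyMessage accepts a message only when its sequence number equals
// the next expected value, which is then incremented. A replayed or
// out-of-order message is rejected.
func (a *AccountState) ApplyMessage(seq int64) error {
	if seq != a.CallSeqNum {
		return errors.New("invalid call sequence number (replay or gap)")
	}
	a.CallSeqNum++
	return nil
}

func main() {
	acct := &AccountState{}
	fmt.Println(acct.ApplyMessage(0)) // accepted
	fmt.Println(acct.ApplyMessage(0)) // replayed message rejected
	fmt.Println(acct.ApplyMessage(1)) // next in sequence accepted
}
```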

State Tree

The State Tree is the output of applying operations on the Filecoin Blockchain.

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import addr "github.com/filecoin-project/go-address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import cid "github.com/ipfs/go-cid"

// The on-chain state data structure is a map (HAMT) of addresses to actor states.
// Only ID addresses are expected as keys.
type StateTree struct {
    ActorStates  {addr.Address: actor.ActorState}  // HAMT

    // Returns the CID of the root node of the HAMT.
    RootCID()    cid.Cid

    // Looks up an actor state by address.
    GetActor(a addr.Address) (state actor.ActorState, ok bool)

    // Looks up an abi.ActorCodeID by address.
    GetActorCodeID_Assert(a addr.Address) abi.ActorCodeID
}

TODO

  • Add ConvenienceAPI state to provide more user-friendly views.

Macroeconomic Indices

Indices are a set of global economic indicators computed from the State Tree, together with a collection of pure functions that compute policy outputs based on user state/actions. Indices are used to compute and implement economic mechanisms and policies for the system. There is no persistent state in Indices, nor can Indices introduce any state mutation. Note that where indices should live is a design decision: it is possible to break Indices into multiple files, or to place indices in different actors, once all economic mechanisms have been decided on. Temporarily, Indices is a holding file for all potential macroeconomic indicators that the system needs to be aware of.

package indices

import (
	"math/big"

	abi "github.com/filecoin-project/specs-actors/actors/abi"
	actor_util "github.com/filecoin-project/specs-actors/actors/util"
)

var PARAM_FINISH = actor_util.PARAM_FINISH

// Data in Indices are populated at instantiation with data from the state tree
// Indices itself has no state tree or access to the runtime
// it is a passive data structure that allows for convenience access to network indices
// and pure functions in implementing economic policies given states
type Indices interface {
	Epoch() abi.ChainEpoch
	NetworkKPI() big.Int
	TotalNetworkSectorWeight() abi.SectorWeight
	TotalPledgeCollateral() abi.TokenAmount
	TotalNetworkEffectivePower() abi.StoragePower // power above minimum miner size
	TotalNetworkPower() abi.StoragePower          // total network power irrespective of meeting minimum miner size

	TotalMinedFIL() abi.TokenAmount
	TotalUnminedFIL() abi.TokenAmount
	TotalBurnedFIL() abi.TokenAmount
	LastEpochReward() abi.TokenAmount

	StorageDeal_DurationBounds(
		pieceSize abi.PieceSize,
		startEpoch abi.ChainEpoch,
	) (minDuration abi.ChainEpoch, maxDuration abi.ChainEpoch)
	StorageDeal_StoragePricePerEpochBounds(
		pieceSize abi.PieceSize,
		startEpoch abi.ChainEpoch,
		endEpoch abi.ChainEpoch,
	) (minPrice abi.TokenAmount, maxPrice abi.TokenAmount)
	StorageDeal_ProviderCollateralBounds(
		pieceSize abi.PieceSize,
		startEpoch abi.ChainEpoch,
		endEpoch abi.ChainEpoch,
	) (minProviderCollateral abi.TokenAmount, maxProviderCollateral abi.TokenAmount)
	StorageDeal_ClientCollateralBounds(
		pieceSize abi.PieceSize,
		startEpoch abi.ChainEpoch,
		endEpoch abi.ChainEpoch,
	) (minClientCollateral abi.TokenAmount, maxClientCollateral abi.TokenAmount)
	SectorWeight(
		sectorSize abi.SectorSize,
		startEpoch abi.ChainEpoch,
		endEpoch abi.ChainEpoch,
		dealWeight abi.DealWeight,
	) abi.SectorWeight
	PledgeCollateralReq(minerNominalPower abi.StoragePower) abi.TokenAmount
	SectorWeightProportion(minerActiveSectorWeight abi.SectorWeight) big.Int
	PledgeCollateralProportion(minerPledgeCollateral abi.TokenAmount) big.Int
	StoragePower(
		minerActiveSectorWeight abi.SectorWeight,
		minerInactiveSectorWeight abi.SectorWeight,
		minerPledgeCollateral abi.TokenAmount,
	) abi.StoragePower
	StoragePowerProportion(
		minerStoragePower abi.StoragePower,
	) big.Int
	CurrEpochBlockReward() abi.TokenAmount
	GetCurrBlockRewardRewardForMiner(
		minerStoragePower abi.StoragePower,
		minerPledgeCollateral abi.TokenAmount,
		// TODO extend or eliminate
	) abi.TokenAmount
	StoragePower_PledgeSlashForSectorTermination(
		storageWeightDesc actor_util.SectorStorageWeightDesc,
		terminationType actor_util.SectorTermination,
	) abi.TokenAmount
	StoragePower_PledgeSlashForSurprisePoStFailure(
		minerClaimedPower abi.StoragePower,
		numConsecutiveFailures int64,
	) abi.TokenAmount
	StorageMining_PreCommitDeposit(
		sectorSize abi.SectorSize,
		expirationEpoch abi.ChainEpoch,
	) abi.TokenAmount
	StorageMining_TemporaryFaultFee(
		storageWeightDescs []actor_util.SectorStorageWeightDesc,
		duration abi.ChainEpoch,
	) abi.TokenAmount
	NetworkTransactionFee(
		toActorCodeID abi.ActorCodeID,
		methodNum abi.MethodNum,
	) abi.TokenAmount
	GetCurrBlockRewardForMiner(
		minerStoragePower abi.StoragePower,
		minerPledgeCollateral abi.TokenAmount,
	) abi.TokenAmount
}

type IndicesImpl struct {
	// these fields are computed from StateTree upon construction
	// they are treated as globally available states
	Epoch                      abi.ChainEpoch
	NetworkKPI                 big.Int
	TotalNetworkSectorWeight   abi.SectorWeight
	TotalPledgeCollateral      abi.TokenAmount
	TotalNetworkEffectivePower abi.StoragePower // power above minimum miner size
	TotalNetworkPower          abi.StoragePower // total network power irrespective of meeting minimum miner size

	TotalMinedFIL   abi.TokenAmount
	TotalUnminedFIL abi.TokenAmount
	TotalBurnedFIL  abi.TokenAmount
	LastEpochReward abi.TokenAmount
}

func (inds *IndicesImpl) StorageDeal_DurationBounds(
	pieceSize abi.PieceSize,
	startEpoch abi.ChainEpoch,
) (minDuration abi.ChainEpoch, maxDuration abi.ChainEpoch) {

	// placeholder
	PARAM_FINISH()
	minDuration = abi.ChainEpoch(0)
	maxDuration = abi.ChainEpoch(1 << 20)
	return
}

func (inds *IndicesImpl) StorageDeal_StoragePricePerEpochBounds(
	pieceSize abi.PieceSize,
	startEpoch abi.ChainEpoch,
	endEpoch abi.ChainEpoch,
) (minPrice abi.TokenAmount, maxPrice abi.TokenAmount) {

	// placeholder
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) StorageDeal_ProviderCollateralBounds(
	pieceSize abi.PieceSize,
	startEpoch abi.ChainEpoch,
	endEpoch abi.ChainEpoch,
) (minProviderCollateral abi.TokenAmount, maxProviderCollateral abi.TokenAmount) {

	// placeholder
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) StorageDeal_ClientCollateralBounds(
	pieceSize abi.PieceSize,
	startEpoch abi.ChainEpoch,
	endEpoch abi.ChainEpoch,
) (minClientCollateral abi.TokenAmount, maxClientCollateral abi.TokenAmount) {

	// placeholder
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) SectorWeight(
	sectorSize abi.SectorSize,
	startEpoch abi.ChainEpoch,
	endEpoch abi.ChainEpoch,
	dealWeight abi.DealWeight,
) abi.SectorWeight {
	// for every sector, given its size, start, end, and deals within the sector
	// assign sector power for the duration of its lifetime
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) PledgeCollateralReq(minerNominalPower abi.StoragePower) abi.TokenAmount {
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) SectorWeightProportion(minerActiveSectorWeight abi.SectorWeight) big.Int {
	// return proportion of SectorWeight for miner
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) PledgeCollateralProportion(minerPledgeCollateral abi.TokenAmount) big.Int {
	// return proportion of Pledge Collateral for miner
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) StoragePower(
	minerActiveSectorWeight abi.SectorWeight,
	minerInactiveSectorWeight abi.SectorWeight,
	minerPledgeCollateral abi.TokenAmount,
) abi.StoragePower {
	// return StoragePower based on inputs
	// StoragePower for miner = func(ActiveSectorWeight for miner, PledgeCollateral for miner, global indices)
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) StoragePowerProportion(
	minerStoragePower abi.StoragePower,
) big.Int {
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) CurrEpochBlockReward() abi.TokenAmount {
	// total block reward allocated for CurrEpoch
	// each expected winner gets an equal share of this reward
	// computed as a function of NetworkKPI, LastEpochReward, TotalUnminedFIL, etc.
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) GetCurrBlockRewardRewardForMiner(
	minerStoragePower abi.StoragePower,
	minerPledgeCollateral abi.TokenAmount,
	// TODO extend or eliminate
) abi.TokenAmount {
	PARAM_FINISH()
	panic("")
}

// TerminationFault
func (inds *IndicesImpl) StoragePower_PledgeSlashForSectorTermination(
	storageWeightDesc actor_util.SectorStorageWeightDesc,
	terminationType actor_util.SectorTermination,
) abi.TokenAmount {
	PARAM_FINISH()
	panic("")
}

// DetectedFault
func (inds *IndicesImpl) StoragePower_PledgeSlashForSurprisePoStFailure(
	minerClaimedPower abi.StoragePower,
	numConsecutiveFailures int64,
) abi.TokenAmount {
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) StorageMining_PreCommitDeposit(
	sectorSize abi.SectorSize,
	expirationEpoch abi.ChainEpoch,
) abi.TokenAmount {
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) StorageMining_TemporaryFaultFee(
	storageWeightDescs []actor_util.SectorStorageWeightDesc,
	duration abi.ChainEpoch,
) abi.TokenAmount {
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) NetworkTransactionFee(
	toActorCodeID abi.ActorCodeID,
	methodNum abi.MethodNum,
) abi.TokenAmount {
	PARAM_FINISH()
	panic("")
}

func (inds *IndicesImpl) GetCurrBlockRewardForMiner(
	minerStoragePower abi.StoragePower,
	minerPledgeCollateral abi.TokenAmount,
) abi.TokenAmount {
	PARAM_FINISH()
	panic("")
}

func ConsensusPowerForStorageWeight(
	storageWeightDesc actor_util.SectorStorageWeightDesc,
) abi.StoragePower {
	PARAM_FINISH()
	panic("")
}

func StorageDeal_ProviderInitTimedOutSlashAmount(providerCollateral abi.TokenAmount) abi.TokenAmount {
	// placeholder
	PARAM_FINISH()
	return providerCollateral
}

func StoragePower_ConsensusMinMinerPower() abi.StoragePower {
	PARAM_FINISH()
	panic("")
}

func StorageMining_PoStNoChallengePeriod() abi.ChainEpoch {
	PARAM_FINISH()
	panic("")
}

func StorageMining_SurprisePoStProvingPeriod() abi.ChainEpoch {
	PARAM_FINISH()
	panic("")
}

func StoragePower_SurprisePoStMaxConsecutiveFailures() int64 {
	PARAM_FINISH()
	panic("")
}

func StorageMining_DeclaredFaultEffectiveDelay() abi.ChainEpoch {
	PARAM_FINISH()
	panic("")
}

VM Message - Actor Method Invocation

A message is the unit of communication between two actors, and thus the primitive cause of changes in state. A message combines:

  • a token amount to be transferred from the sender to the receiver, and
  • a method with parameters to be invoked on the receiver (optional).

Actor code may send additional messages to other actors while processing a received message. Messages are processed synchronously: an actor waits for a sent message to complete before resuming control.

The processing of a message consumes units of computation and storage denominated in gas. A message’s gas limit provides an upper bound on its computation. The sender of a message pays for the gas units consumed by a message’s execution (including all nested messages) at a gas price they determine. A block producer chooses which messages to include in a block and is rewarded according to each message’s gas price and consumption, forming a market.

Message syntax validation

A syntactically invalid message must not be transmitted, retained in a message pool, or included in a block.

A syntactically valid UnsignedMessage:

  • has a well-formed, non-empty To address,
  • has a well-formed, non-empty From address,
  • has a non-negative CallSeqNum,
  • has Value no less than zero and no greater than the total token supply (2e9 * 1e18), and
  • has a non-negative MethodNum,
  • has non-empty Params only if MethodNum is non-zero,
  • has non-negative GasPrice,
  • has GasLimit that is at least equal to the gas consumption associated with the message’s serialized bytes,
  • has GasLimit that is no greater than the block gas limit network parameter.

When transmitted individually (before inclusion in a block), a message is packaged as SignedMessage, regardless of signature scheme used. A valid signed message:

  • has a total serialized size no greater than message.MessageMaxSize.

Message semantic validation

Semantic validation refers to validation requiring information outside of the message itself.

A semantically valid SignedMessage must carry a signature that verifies the payload as having been signed with the public key of the account actor identified by the From address. Note that when the From address is an ID-address, the public key must be looked up in the state of the sending account actor in the parent state identified by the block.

Note: the sending actor must exist in the parent state identified by the block that includes the message. This means that it is not valid for a single block to include a message that creates a new account actor and a message from that same actor. The first message from that actor must wait until a subsequent epoch. Message pools may exclude messages from an actor that is not yet present in the chain state.

There is no further semantic validation of a message that can cause a block including the message to be invalid. Every syntactically valid and correctly signed message can be included in a block and will produce a receipt from execution. However, a message may fail to execute to completion, in which case it will not effect the desired state change.

The reason for this “no message semantic validation” policy is that the state that a message will be applied to cannot be known before the message is executed as part of a tipset. A block producer does not know whether another block will precede it in the tipset, thus altering the state to which the block’s messages will apply from the declared parent state.

import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import addr "github.com/filecoin-project/go-address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import abi "github.com/filecoin-project/specs-actors/actors/abi"

// GasAmount is a quantity of gas.
type GasAmount struct {
    value                BigInt

    Add(GasAmount)       GasAmount
    Subtract(GasAmount)  GasAmount
    SubtractIfNonnegative(GasAmount) (ret GasAmount, ok bool)
    LessThan(GasAmount) bool
    Equals(GasAmount) bool
    Scale(int) GasAmount
}

type UnsignedMessage struct {
    // Address of the receiving actor.
    To          addr.Address
    // Address of the sending actor.
    From        addr.Address
    // Expected CallSeqNum of the sending actor (only for top-level messages).
    CallSeqNum  actor.CallSeqNum

    // Amount of value to transfer from sender's to receiver's balance.
    Value       abi.TokenAmount

    // GasPrice is the price, in FIL, paid per unit of gas consumed.
    GasPrice    abi.TokenAmount
    GasLimit    GasAmount

    // Optional method to invoke on receiver, zero for a plain value send.
    Method      abi.MethodNum
    // Serialized parameters to the method (if method is non-zero).
    Params      abi.MethodParams
}  // representation tuple

type SignedMessage struct {
    Message    UnsignedMessage
    Signature  filcrypto.Signature
}  // representation tuple
package message

import (
	filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
	util "github.com/filecoin-project/specs/util"
)

var IMPL_FINISH = util.IMPL_FINISH

type Serialization = util.Serialization

// The maximum serialized size of a SignedMessage.
const MessageMaxSize = 32 * 1024

func SignedMessage_Make(message UnsignedMessage, signature filcrypto.Signature) SignedMessage {
	return &SignedMessage_I{
		Message_:   message,
		Signature_: signature,
	}
}

func Sign(message UnsignedMessage, keyPair filcrypto.SigKeyPair) (SignedMessage, error) {
	sig, err := filcrypto.Sign(keyPair, util.Bytes(Serialize_UnsignedMessage(message)))
	if err != nil {
		return nil, err
	}
	return SignedMessage_Make(message, sig), nil
}

func SignatureVerificationError() error {
	IMPL_FINISH()
	panic("")
}

func Verify(message SignedMessage, publicKey filcrypto.PublicKey) (UnsignedMessage, error) {
	m := util.Bytes(Serialize_UnsignedMessage(message.Message()))
	sigValid, err := filcrypto.Verify(publicKey, message.Signature(), m)
	if err != nil {
		return nil, err
	}
	if !sigValid {
		return nil, SignatureVerificationError()
	}
	return message.Message(), nil
}

func (x *GasAmount_I) Add(y GasAmount) GasAmount {
	IMPL_FINISH()
	panic("")
}

func (x *GasAmount_I) Subtract(y GasAmount) GasAmount {
	IMPL_FINISH()
	panic("")
}

func (x *GasAmount_I) SubtractIfNonnegative(y GasAmount) (ret GasAmount, ok bool) {
	ret = x.Subtract(y)
	ok = true
	if ret.LessThan(GasAmount_Zero()) {
		ret = x
		ok = false
	}
	return
}

func (x *GasAmount_I) LessThan(y GasAmount) bool {
	IMPL_FINISH()
	panic("")
}

func (x *GasAmount_I) Equals(y GasAmount) bool {
	IMPL_FINISH()
	panic("")
}

func (x *GasAmount_I) Scale(count int) GasAmount {
	IMPL_FINISH()
	panic("")
}

func GasAmount_Affine(b GasAmount, x int, m GasAmount) GasAmount {
	return b.Add(m.Scale(x))
}

func GasAmount_Zero() GasAmount {
	return GasAmount_FromInt(0)
}

func GasAmount_FromInt(x int) GasAmount {
	IMPL_FINISH()
	panic("")
}

func GasAmount_SentinelUnlimited() GasAmount {
	// Amount of gas larger than any feasible execution; meant to indicate unlimited gas
	// (e.g., for builtin system method invocations).
	return GasAmount_FromInt(1).Scale(1e9).Scale(1e9) // 10^18
}

VM Runtime Environment (Inside the VM)

Receipts

A MessageReceipt contains the result of a top-level message execution.

A syntactically valid receipt has:

  • a non-negative ExitCode,
  • a non-empty ReturnValue only if the exit code is zero,
  • a non-negative GasUsed.
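These three rules can be sketched as a pure check over a simplified receipt (hypothetical types; the spec's MessageReceipt fields are not plain integers):

```go
package main

import (
	"errors"
	"fmt"
)

// messageReceipt is a simplified stand-in for MessageReceipt.
type messageReceipt struct {
	ExitCode    int64
	ReturnValue []byte
	GasUsed     int64
}

// checkReceiptSyntax returns nil iff the receipt satisfies the rules above.
func checkReceiptSyntax(r *messageReceipt) error {
	switch {
	case r.ExitCode < 0:
		return errors.New("negative ExitCode")
	case r.ExitCode != 0 && len(r.ReturnValue) > 0:
		return errors.New("ReturnValue only permitted when ExitCode is zero")
	case r.GasUsed < 0:
		return errors.New("negative GasUsed")
	}
	return nil
}

func main() {
	fmt.Println(checkReceiptSyntax(&messageReceipt{ExitCode: 0, ReturnValue: []byte{1}, GasUsed: 42})) // <nil>
}
```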

vm/runtime interface

package runtime

import actor "github.com/filecoin-project/specs-actors/actors"
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import crypto "github.com/filecoin-project/specs-actors/actors/crypto"
import exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
import addr "github.com/filecoin-project/go-address"
import indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
import cid "github.com/ipfs/go-cid"

// Runtime is the VM's internal runtime object.
// It comprises everything that is accessible to actors, beyond parameters.
type Runtime interface {
	CurrEpoch() abi.ChainEpoch

	// GetRandomness returns a (pseudo)random string for the given epoch.
	GetRandomness(epoch abi.ChainEpoch) abi.RandomnessSeed

	// The address of the immediate calling actor.
	// Not necessarily the actor in the From field of the initial on-chain Message.
	// Always an ID-address.
	ImmediateCaller() addr.Address
	ValidateImmediateCallerIs(caller addr.Address)
	ValidateImmediateCallerInSet(callers []addr.Address)
	ValidateImmediateCallerAcceptAnyOfType(type_ abi.ActorCodeID)
	ValidateImmediateCallerAcceptAnyOfTypes(types []abi.ActorCodeID)
	ValidateImmediateCallerAcceptAny()
	ValidateImmediateCallerMatches(CallerPattern)

	// The address of the actor receiving the message. Always an ID-address.
	CurrReceiver() addr.Address

	// The actor who mined the block in which the initial on-chain message appears.
	// Always an ID-address.
	ToplevelBlockWinner() addr.Address

	AcquireState() ActorStateHandle

	SuccessReturn() InvocOutput
	ValueReturn([]byte) InvocOutput

	// Throw an error indicating a failure condition has occurred, from which the given actor
	// code is unable to recover.
	Abort(errExitCode exitcode.ExitCode, msg string)

	// Calls Abort with InvalidArguments_User.
	AbortArgMsg(msg string)
	AbortArg()

	// Calls Abort with InconsistentState_User.
	AbortStateMsg(msg string)
	AbortState()

	// Calls Abort with InsufficientFunds_User.
	AbortFundsMsg(msg string)
	AbortFunds()

	// Calls Abort with RuntimeAPIError.
	// For internal use only (not in actor code).
	AbortAPI(msg string)

	// Check that the given condition is true (and call Abort if not).
	Assert(bool)

	CurrentBalance() abi.TokenAmount
	ValueReceived() abi.TokenAmount

	// Look up the current values of several system-wide economic indices.
	CurrIndices() indices.Indices

	// Look up the code ID of a given actor address.
	GetActorCodeID(addr addr.Address) (ret abi.ActorCodeID, ok bool)

	// Run a (pure function) computation, consuming the gas cost associated with that function.
	// This mechanism is intended to capture the notion of an ABI between the VM and native
	// functions, and should be used for any function whose computation is expensive.
	Compute(ComputeFunctionID, args []interface{}) interface{}

	// Sends a message to another actor.
	// If the invoked method does not return successfully, this caller will be aborted too.
	SendPropagatingErrors(input InvocInput) InvocOutput
	Send(
		toAddr addr.Address,
		methodNum abi.MethodNum,
		params abi.MethodParams,
		value abi.TokenAmount,
	) InvocOutput
	SendQuery(
		toAddr addr.Address,
		methodNum abi.MethodNum,
		params abi.MethodParams,
	) []byte
	SendFunds(toAddr addr.Address, value abi.TokenAmount)

	// Sends a message to another actor, trapping an unsuccessful execution.
	// This may only be invoked by the singleton Cron actor.
	SendCatchingErrors(input InvocInput) (output InvocOutput, exitCode exitcode.ExitCode)

	// Computes an address for a new actor. The returned address is intended to uniquely refer to
	// the actor even in the event of a chain re-org (whereas an ID-address might refer to a
	// different actor after messages are re-ordered).
	// Always an ActorExec address.
	NewActorAddress() addr.Address

	// Creates an actor in the state tree, with empty state. May only be called by InitActor.
	CreateActor(
		// The new actor's code identifier.
		codeId abi.ActorCodeID,
		// Address under which the new actor's state will be stored. Must be an ID-address.
		address addr.Address,
	)

	// Deletes an actor in the state tree. May only be called by the actor itself,
	// or by StoragePowerActor in the case of StorageMinerActors.
	DeleteActor(address addr.Address)

	// Retrieves and deserializes an object from the store into o. Returns whether successful.
	IpldGet(c cid.Cid, o interface{}) bool
	// Serializes and stores an object, returning its CID.
	IpldPut(x interface{}) cid.Cid

	// Provides the system call interface.
	Syscalls() Syscalls
}

type Syscalls interface {
	// Verifies that a signature is valid for an address and plaintext.
	VerifySignature(
		signature crypto.Signature,
		signer addr.Address,
		plaintext []byte,
	) bool
	// Computes an unsealed sector CID (CommD) from its constituent piece CIDs (CommPs) and sizes.
	ComputeUnsealedSectorCID(sectorSize abi.SectorSize, pieces []abi.PieceInfo) (abi.UnsealedSectorCID, error)
	// Verifies a sector seal proof.
	VerifySeal(sectorSize abi.SectorSize, vi abi.SealVerifyInfo) bool
	// Verifies a proof of spacetime.
	VerifyPoSt(sectorSize abi.SectorSize, vi abi.PoStVerifyInfo) bool
}

type InvocInput struct {
	To     addr.Address
	Method abi.MethodNum
	Params abi.MethodParams
	Value  abi.TokenAmount
}

type InvocOutput struct {
	ReturnValue []byte
}

type ActorStateHandle interface {
	UpdateRelease(newStateCID actor.ActorSubstateCID)
	Release(checkStateCID actor.ActorSubstateCID)
	Take() actor.ActorSubstateCID
}

type ComputeFunctionID int64

const (
	Compute_VerifySignature = ComputeFunctionID(1)
)

vm/runtime implementation

package impl

import (
	"bytes"
	"encoding/binary"
	"fmt"

	addr "github.com/filecoin-project/go-address"
	actor "github.com/filecoin-project/specs-actors/actors"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	acctact "github.com/filecoin-project/specs-actors/actors/builtin/account"
	initact "github.com/filecoin-project/specs-actors/actors/builtin/init"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
	exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
	indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
	actstate "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
	gascost "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/gascost"
	st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	util "github.com/filecoin-project/specs/util"
	cid "github.com/ipfs/go-cid"
	cbornode "github.com/ipfs/go-ipld-cbor"
	mh "github.com/multiformats/go-multihash"
)

type ActorSubstateCID = actor.ActorSubstateCID
type ExitCode = exitcode.ExitCode
type CallerPattern = vmr.CallerPattern
type Runtime = vmr.Runtime
type InvocInput = vmr.InvocInput
type InvocOutput = vmr.InvocOutput
type ActorStateHandle = vmr.ActorStateHandle

var EnsureErrorCode = exitcode.EnsureErrorCode

type Bytes = util.Bytes

var Assert = util.Assert
var IMPL_FINISH = util.IMPL_FINISH
var IMPL_TODO = util.IMPL_TODO
var TODO = util.TODO

var EmptyCBOR cid.Cid

type RuntimeError struct {
	ExitCode ExitCode
	ErrMsg   string
}

func init() {
	n, err := cbornode.WrapObject(map[string]struct{}{}, mh.SHA2_256, -1)
	Assert(err == nil)
	EmptyCBOR = n.Cid()
}

func (x *RuntimeError) String() string {
	ret := fmt.Sprintf("Runtime error: %v", x.ExitCode)
	if x.ErrMsg != "" {
		ret += fmt.Sprintf(" (\"%v\")", x.ErrMsg)
	}
	return ret
}

func RuntimeError_Make(exitCode ExitCode, errMsg string) *RuntimeError {
	exitCode = EnsureErrorCode(exitCode)
	return &RuntimeError{
		ExitCode: exitCode,
		ErrMsg:   errMsg,
	}
}

func ActorSubstateCID_Equals(x, y ActorSubstateCID) bool {
	IMPL_FINISH()
	panic("")
}

type ActorStateHandle_I struct {
	_initValue *ActorSubstateCID
	_rt        *VMContext
}

func (h *ActorStateHandle_I) UpdateRelease(newStateCID ActorSubstateCID) {
	h._rt._updateReleaseActorSubstate(newStateCID)
}

func (h *ActorStateHandle_I) Release(checkStateCID ActorSubstateCID) {
	h._rt._releaseActorSubstate(checkStateCID)
}

func (h *ActorStateHandle_I) Take() ActorSubstateCID {
	if h._initValue == nil {
		h._rt._apiError("Must call Take() only once on actor substate object")
	}
	ret := *h._initValue
	h._initValue = nil
	return ret
}

// Concrete instantiation of the Runtime interface. This should be instantiated by the
// interpreter once per actor method invocation, and responds to that method's Runtime
// API calls.
type VMContext struct {
	_store              ipld.GraphStore
	_globalStateInit    st.StateTree
	_globalStatePending st.StateTree
	_running            bool
	_chain              chain.Chain
	_actorAddress       addr.Address
	_actorStateAcquired bool
	// Tracks whether actor substate has changed in order to charge gas just once
	// regardless of how many times it's written.
	_actorSubstateUpdated bool

	_immediateCaller addr.Address
	// Note: This is the actor in the From field of the initial on-chain message.
	// Not necessarily the immediate caller.
	_toplevelSender      addr.Address
	_toplevelBlockWinner addr.Address
	// call sequence number of the top level message that began this execution sequence
	_toplevelMsgCallSeqNum actstate.CallSeqNum
	// Sequence number representing the total number of calls (to any actor, any method)
	// during the current top-level message execution.
	// Note: resets with every top-level message, and therefore not necessarily monotonic.
	_internalCallSeqNum actstate.CallSeqNum
	_valueReceived      abi.TokenAmount
	_gasRemaining       msg.GasAmount
	_numValidateCalls   int
	_output             vmr.InvocOutput
}

func VMContext_Make(
	store ipld.GraphStore,
	chain chain.Chain,
	toplevelSender addr.Address,
	toplevelBlockWinner addr.Address,
	toplevelMsgCallSeqNum actstate.CallSeqNum,
	internalCallSeqNum actstate.CallSeqNum,
	globalState st.StateTree,
	actorAddress addr.Address,
	valueReceived abi.TokenAmount,
	gasRemaining msg.GasAmount) *VMContext {

	return &VMContext{
		_store:                store,
		_chain:                chain,
		_globalStateInit:      globalState,
		_globalStatePending:   globalState,
		_running:              false,
		_actorAddress:         actorAddress,
		_actorStateAcquired:   false,
		_actorSubstateUpdated: false,

		_toplevelSender:        toplevelSender,
		_toplevelBlockWinner:   toplevelBlockWinner,
		_toplevelMsgCallSeqNum: toplevelMsgCallSeqNum,
		_internalCallSeqNum:    internalCallSeqNum,
		_valueReceived:         valueReceived,
		_gasRemaining:          gasRemaining,
		_numValidateCalls:      0,
		_output:                vmr.InvocOutput{},
	}
}

func (rt *VMContext) AbortArgMsg(msg string) {
	rt.Abort(exitcode.InvalidArguments_User, msg)
}

func (rt *VMContext) AbortArg() {
	rt.AbortArgMsg("Invalid arguments")
}

func (rt *VMContext) AbortStateMsg(msg string) {
	rt.Abort(exitcode.InconsistentState_User, msg)
}

func (rt *VMContext) AbortState() {
	rt.AbortStateMsg("Inconsistent state")
}

func (rt *VMContext) AbortFundsMsg(msg string) {
	rt.Abort(exitcode.InsufficientFunds_User, msg)
}

func (rt *VMContext) AbortFunds() {
	rt.AbortFundsMsg("Insufficient funds")
}

func (rt *VMContext) AbortAPI(msg string) {
	rt.Abort(exitcode.RuntimeAPIError, msg)
}

func (rt *VMContext) CreateActor(codeID abi.ActorCodeID, address addr.Address) {
	if rt._actorAddress != builtin.InitActorAddr {
		rt.AbortAPI("Only InitActor may call rt.CreateActor")
	}
	if address.Protocol() != addr.ID {
		rt.AbortAPI("New actor address must be an ID-address")
	}

	rt._createActor(codeID, address)
}

func (rt *VMContext) _createActor(codeID abi.ActorCodeID, address addr.Address) {
	// Create empty actor state.
	actorState := &actstate.ActorState_I{
		CodeID_:     codeID,
		State_:      actor.ActorSubstateCID(EmptyCBOR),
		Balance_:    abi.TokenAmount(0),
		CallSeqNum_: 0,
	}

	// Put it in the state tree.
	actorStateCID := actstate.ActorSystemStateCID(rt.IpldPut(actorState))
	rt._updateActorSystemStateInternal(address, actorStateCID)

	rt._rtAllocGas(gascost.ExecNewActor)
}

func (rt *VMContext) DeleteActor(address addr.Address) {
	// Only the actor itself may request deletion here.
	if rt._actorAddress != address {
		rt.AbortAPI("Invalid actor deletion request")
	}

	rt._deleteActor(address)
}

func (rt *VMContext) _deleteActor(address addr.Address) {
	rt._globalStatePending = rt._globalStatePending.Impl().WithDeleteActorSystemState(address)
	rt._rtAllocGas(gascost.DeleteActor)
}

func (rt *VMContext) _updateActorSystemStateInternal(actorAddress addr.Address, newStateCID actstate.ActorSystemStateCID) {
	newGlobalStatePending, err := rt._globalStatePending.Impl().WithActorSystemState(rt._actorAddress, newStateCID)
	if err != nil {
		panic("Error in runtime implementation: failed to update actor system state")
	}
	rt._globalStatePending = newGlobalStatePending
}

func (rt *VMContext) _updateActorSubstateInternal(actorAddress addr.Address, newStateCID actor.ActorSubstateCID) {
	newGlobalStatePending, err := rt._globalStatePending.Impl().WithActorSubstate(rt._actorAddress, newStateCID)
	if err != nil {
		panic("Error in runtime implementation: failed to update actor substate")
	}
	rt._globalStatePending = newGlobalStatePending
}

func (rt *VMContext) _updateReleaseActorSubstate(newStateCID ActorSubstateCID) {
	rt._checkRunning()
	rt._checkActorStateAcquired()
	rt._updateActorSubstateInternal(rt._actorAddress, newStateCID)
	rt._actorSubstateUpdated = true
	rt._actorStateAcquired = false
}

func (rt *VMContext) _releaseActorSubstate(checkStateCID ActorSubstateCID) {
	rt._checkRunning()
	rt._checkActorStateAcquired()

	prevState, ok := rt._globalStatePending.GetActor(rt._actorAddress)
	util.Assert(ok)
	prevStateCID := prevState.State()
	if !ActorSubstateCID_Equals(prevStateCID, checkStateCID) {
		rt.AbortAPI("State CID differs upon release call")
	}

	rt._actorStateAcquired = false
}

func (rt *VMContext) Assert(cond bool) {
	if !cond {
		rt.Abort(exitcode.RuntimeAssertFailure, "Runtime assertion failed")
	}
}

func (rt *VMContext) _checkActorStateAcquiredFlag(expected bool) {
	rt._checkRunning()
	if rt._actorStateAcquired != expected {
		rt._apiError("State updates and message sends must be disjoint")
	}
}

func (rt *VMContext) _checkActorStateAcquired() {
	rt._checkActorStateAcquiredFlag(true)
}

func (rt *VMContext) _checkActorStateNotAcquired() {
	rt._checkActorStateAcquiredFlag(false)
}

func (rt *VMContext) Abort(errExitCode exitcode.ExitCode, errMsg string) {
	errExitCode = exitcode.EnsureErrorCode(errExitCode)
	rt._throwErrorFull(errExitCode, errMsg)
}

func (rt *VMContext) ImmediateCaller() addr.Address {
	return rt._immediateCaller
}

func (rt *VMContext) CurrReceiver() addr.Address {
	return rt._actorAddress
}

func (rt *VMContext) ToplevelBlockWinner() addr.Address {
	return rt._toplevelBlockWinner
}

func (rt *VMContext) ValidateImmediateCallerMatches(
	callerExpectedPattern CallerPattern) {

	rt._checkRunning()
	rt._checkNumValidateCalls(0)
	caller := rt.ImmediateCaller()
	if !callerExpectedPattern.Matches(caller) {
		rt.AbortAPI("Method invoked by incorrect caller")
	}
	rt._numValidateCalls += 1
}

func CallerPattern_MakeAcceptAnyOfTypes(rt *VMContext, types []abi.ActorCodeID) CallerPattern {
	return CallerPattern{
		Matches: func(y addr.Address) bool {
			codeID, ok := rt.GetActorCodeID(y)
			if !ok {
				panic("Internal runtime error: actor not found")
			}

			for _, type_ := range types {
				if codeID == type_ {
					return true
				}
			}
			return false
		},
	}
}

func (rt *VMContext) ValidateImmediateCallerIs(callerExpected addr.Address) {
	rt.ValidateImmediateCallerMatches(vmr.CallerPattern_MakeSingleton(callerExpected))
}

func (rt *VMContext) ValidateImmediateCallerInSet(callersExpected []addr.Address) {
	rt.ValidateImmediateCallerMatches(vmr.CallerPattern_MakeSet(callersExpected))
}

func (rt *VMContext) ValidateImmediateCallerAcceptAnyOfType(type_ abi.ActorCodeID) {
	rt.ValidateImmediateCallerAcceptAnyOfTypes([]abi.ActorCodeID{type_})
}

func (rt *VMContext) ValidateImmediateCallerAcceptAnyOfTypes(types []abi.ActorCodeID) {
	rt.ValidateImmediateCallerMatches(CallerPattern_MakeAcceptAnyOfTypes(rt, types))
}

func (rt *VMContext) ValidateImmediateCallerAcceptAny() {
	rt.ValidateImmediateCallerMatches(vmr.CallerPattern_MakeAcceptAny())
}

func (rt *VMContext) _checkNumValidateCalls(x int) {
	if rt._numValidateCalls != x {
		rt.AbortAPI("Method must validate caller identity exactly once")
	}
}

func (rt *VMContext) _checkRunning() {
	if !rt._running {
		panic("Internal runtime error: actor API called with no actor code running")
	}
}

func (rt *VMContext) SuccessReturn() InvocOutput {
	return vmr.InvocOutput_Make(nil)
}

func (rt *VMContext) ValueReturn(value util.Bytes) InvocOutput {
	return vmr.InvocOutput_Make(value)
}

func (rt *VMContext) _throwError(exitCode ExitCode) {
	rt._throwErrorFull(exitCode, "")
}

func (rt *VMContext) _throwErrorFull(exitCode ExitCode, errMsg string) {
	panic(RuntimeError_Make(exitCode, errMsg))
}

func (rt *VMContext) _apiError(errMsg string) {
	rt._throwErrorFull(exitcode.RuntimeAPIError, errMsg)
}

func _gasAmountAssertValid(x msg.GasAmount) {
	if x.LessThan(msg.GasAmount_Zero()) {
		panic("Interpreter error: negative gas amount")
	}
}

// Deduct an amount of gas corresponding to cost about to be incurred, but not necessarily
// incurred yet.
func (rt *VMContext) _rtAllocGas(x msg.GasAmount) {
	_gasAmountAssertValid(x)
	var ok bool
	rt._gasRemaining, ok = rt._gasRemaining.SubtractIfNonnegative(x)
	if !ok {
		rt._throwError(exitcode.OutOfGas)
	}
}

func (rt *VMContext) _transferFunds(from addr.Address, to addr.Address, amount abi.TokenAmount) error {
	rt._checkRunning()
	rt._checkActorStateNotAcquired()

	newGlobalStatePending, err := rt._globalStatePending.Impl().WithFundsTransfer(from, to, amount)
	if err != nil {
		return err
	}

	rt._globalStatePending = newGlobalStatePending
	return nil
}

func (rt *VMContext) GetActorCodeID(actorAddr addr.Address) (ret abi.ActorCodeID, ok bool) {
	IMPL_FINISH()
	panic("")
}

type ErrorHandlingSpec int

const (
	PropagateErrors ErrorHandlingSpec = 1 + iota
	CatchErrors
)

// TODO: This function should be private (not intended to be exposed to actors).
// (merging runtime and interpreter packages should solve this)
// TODO: this should not use the MessageReceipt return type, even though it needs the same triple
// of values. This method cannot compute the total gas cost and the returned receipt will never
// go on chain.
func (rt *VMContext) SendToplevelFromInterpreter(input InvocInput) (MessageReceipt, st.StateTree) {

	rt._running = true
	ret := rt._sendInternal(input, CatchErrors)
	rt._running = false
	return ret, rt._globalStatePending
}

func _catchRuntimeErrors(f func() InvocOutput) (output InvocOutput, exitCode exitcode.ExitCode) {
	defer func() {
		if r := recover(); r != nil {
			switch e := r.(type) {
			case *RuntimeError:
				output = vmr.InvocOutput_Make(nil)
				exitCode = e.ExitCode
			default:
				panic(r)
			}
		}
	}()

	output = f()
	exitCode = exitcode.OK()
	return
}

func _invokeMethodInternal(
	rt *VMContext,
	actorCode vmr.ActorCode,
	method abi.MethodNum,
	params abi.MethodParams) (
	ret InvocOutput, exitCode exitcode.ExitCode, internalCallSeqNumFinal actstate.CallSeqNum) {

	if method == builtin.MethodSend {
		ret = vmr.InvocOutput_Make(nil)
		return
	}

	rt._running = true
	ret, exitCode = _catchRuntimeErrors(func() InvocOutput {
		IMPL_TODO("dispatch to actor code")
		var methodOutput vmr.InvocOutput // actorCode.InvokeMethod(rt, method, params)
		if rt._actorSubstateUpdated {
			rt._rtAllocGas(gascost.UpdateActorSubstate)
		}
		rt._checkActorStateNotAcquired()
		rt._checkNumValidateCalls(1)
		return methodOutput
	})
	rt._running = false

	internalCallSeqNumFinal = rt._internalCallSeqNum

	return
}

func (rtOuter *VMContext) _sendInternal(input InvocInput, errSpec ErrorHandlingSpec) MessageReceipt {
	rtOuter._checkRunning()
	rtOuter._checkActorStateNotAcquired()

	initGasRemaining := rtOuter._gasRemaining

	rtOuter._rtAllocGas(gascost.InvokeMethod(input.Value, input.Method))

	receiver, receiverAddr := rtOuter._resolveReceiver(input.To)
	receiverCode, err := loadActorCode(receiver.CodeID())
	if err != nil {
		rtOuter._throwError(exitcode.ActorCodeNotFound)
	}

	err = rtOuter._transferFunds(rtOuter._actorAddress, receiverAddr, input.Value)
	if err != nil {
		rtOuter._throwError(exitcode.InsufficientFunds_System)
	}

	rtInner := VMContext_Make(
		rtOuter._store,
		rtOuter._chain,
		rtOuter._toplevelSender,
		rtOuter._toplevelBlockWinner,
		rtOuter._toplevelMsgCallSeqNum,
		rtOuter._internalCallSeqNum+1,
		rtOuter._globalStatePending,
		receiverAddr,
		input.Value,
		rtOuter._gasRemaining,
	)

	invocOutput, exitCode, internalCallSeqNumFinal := _invokeMethodInternal(
		rtInner,
		receiverCode,
		input.Method,
		input.Params,
	)

	_gasAmountAssertValid(rtOuter._gasRemaining.Subtract(rtInner._gasRemaining))
	rtOuter._gasRemaining = rtInner._gasRemaining
	gasUsed := initGasRemaining.Subtract(rtOuter._gasRemaining)
	_gasAmountAssertValid(gasUsed)

	rtOuter._internalCallSeqNum = internalCallSeqNumFinal

	if exitCode == exitcode.OutOfGas {
		// OutOfGas error cannot be caught
		rtOuter._throwError(exitCode)
	}

	if errSpec == PropagateErrors && exitCode.IsError() {
		rtOuter._throwError(exitcode.MethodSubcallError)
	}

	if exitCode.AllowsStateUpdate() {
		rtOuter._globalStatePending = rtInner._globalStatePending
	}

	return MessageReceipt_Make(invocOutput, exitCode, gasUsed)
}

// Loads a receiving actor state from the state tree, resolving non-ID addresses through the InitActor state.
// If it doesn't exist, and the message is a simple value send to a pubkey-style address,
// creates the receiver as an account actor in the returned state.
// Aborts otherwise.
func (rt *VMContext) _resolveReceiver(targetRaw addr.Address) (actstate.ActorState, addr.Address) {
	// Resolve the target address via the InitActor, and attempt to load state.
	initSubState := rt._loadInitActorState()
	targetIdAddr := initSubState.ResolveAddress(targetRaw)
	act, found := rt._globalStatePending.GetActor(targetIdAddr)
	if found {
		return act, targetIdAddr
	}

	if targetRaw.Protocol() != addr.SECP256K1 && targetRaw.Protocol() != addr.BLS {
		// Don't implicitly create an account actor for an address without an associated key.
		rt._throwError(exitcode.ActorNotFound)
	}

	// Allocate an ID address from the init actor and map the pubkey To address to it.
	newIdAddr := initSubState.MapAddressToNewID(targetRaw)
	rt._saveInitActorState(initSubState)

	// Create new account actor (charges gas).
	rt._createActor(builtin.AccountActorCodeID, newIdAddr)

	// Initialize account actor substate with its pubkey address.
	substate := &acctact.AccountActorState{
		Address: targetRaw,
	}
	rt._saveAccountActorState(newIdAddr, *substate)
	act, _ = rt._globalStatePending.GetActor(newIdAddr)
	return act, newIdAddr
}

func (rt *VMContext) _loadInitActorState() initact.InitActorState {
	initState, ok := rt._globalStatePending.GetActor(builtin.InitActorAddr)
	util.Assert(ok)
	var initSubState initact.InitActorState
	ok = rt.IpldGet(cid.Cid(initState.State()), &initSubState)
	util.Assert(ok)
	return initSubState
}

func (rt *VMContext) _saveInitActorState(state initact.InitActorState) {
	// Gas is charged here separately from _actorSubstateUpdated because this is a different actor
	// than the receiver.
	rt._rtAllocGas(gascost.UpdateActorSubstate)
	rt._updateActorSubstateInternal(builtin.InitActorAddr, actor.ActorSubstateCID(rt.IpldPut(&state)))
}

func (rt *VMContext) _saveAccountActorState(address addr.Address, state acctact.AccountActorState) {
	// Gas is charged here separately from _actorSubstateUpdated because this is a different actor
	// than the receiver.
	rt._rtAllocGas(gascost.UpdateActorSubstate)
	rt._updateActorSubstateInternal(address, actor.ActorSubstateCID(rt.IpldPut(state)))
}

func (rt *VMContext) _sendInternalOutputs(input InvocInput, errSpec ErrorHandlingSpec) (InvocOutput, exitcode.ExitCode) {
	ret := rt._sendInternal(input, errSpec)
	return vmr.InvocOutput_Make(ret.ReturnValue), ret.ExitCode
}

func (rt *VMContext) Send(
	toAddr addr.Address, methodNum abi.MethodNum, params abi.MethodParams, value abi.TokenAmount) InvocOutput {

	return rt.SendPropagatingErrors(vmr.InvocInput_Make(toAddr, methodNum, params, value))
}

func (rt *VMContext) SendQuery(toAddr addr.Address, methodNum abi.MethodNum, params abi.MethodParams) util.Serialization {
	invocOutput := rt.Send(toAddr, methodNum, params, abi.TokenAmount(0))
	ret := invocOutput.ReturnValue
	Assert(ret != nil)
	return ret
}

func (rt *VMContext) SendFunds(toAddr addr.Address, value abi.TokenAmount) {
	rt.Send(toAddr, builtin.MethodSend, nil, value)
}

func (rt *VMContext) SendPropagatingErrors(input InvocInput) InvocOutput {
	ret, _ := rt._sendInternalOutputs(input, PropagateErrors)
	return ret
}

func (rt *VMContext) SendCatchingErrors(input InvocInput) (InvocOutput, exitcode.ExitCode) {
	rt.ValidateImmediateCallerIs(builtin.CronActorAddr)
	return rt._sendInternalOutputs(input, CatchErrors)
}

func (rt *VMContext) CurrentBalance() abi.TokenAmount {
	IMPL_FINISH()
	panic("")
}

func (rt *VMContext) ValueReceived() abi.TokenAmount {
	return rt._valueReceived
}

func (rt *VMContext) GetRandomness(epoch abi.ChainEpoch) abi.RandomnessSeed {
	return rt._chain.RandomnessAtEpoch(epoch)
}

func (rt *VMContext) NewActorAddress() addr.Address {
	addrBuf := new(bytes.Buffer)

	senderState, ok := rt._globalStatePending.GetActor(rt._toplevelSender)
	util.Assert(ok)
	var aast acctact.AccountActorState
	ok = rt.IpldGet(cid.Cid(senderState.State()), &aast)
	util.Assert(ok)
	err := aast.Address.MarshalCBOR(addrBuf)
	util.Assert(err == nil)
	err = binary.Write(addrBuf, binary.BigEndian, rt._toplevelMsgCallSeqNum)
	util.Assert(err == nil)
	err = binary.Write(addrBuf, binary.BigEndian, rt._internalCallSeqNum)
	util.Assert(err == nil)

	newAddr, err := addr.NewActorAddress(addrBuf.Bytes())
	util.Assert(err == nil)
	return newAddr
}

func (rt *VMContext) IpldPut(x ipld.Object) cid.Cid {
	IMPL_FINISH() // Serialization
	serialized := []byte{}
	cid := rt._store.Put(serialized)
	rt._rtAllocGas(gascost.IpldPut(len(serialized)))
	return cid
}

func (rt *VMContext) IpldGet(c cid.Cid, o ipld.Object) bool {
	serialized, ok := rt._store.Get(c)
	if ok {
		rt._rtAllocGas(gascost.IpldGet(len(serialized)))
	}
	IMPL_FINISH() // Deserialization into o
	return ok
}

func (rt *VMContext) CurrEpoch() abi.ChainEpoch {
	IMPL_FINISH()
	panic("")
}

func (rt *VMContext) CurrIndices() indices.Indices {
	// TODO: compute from state tree (rt._globalStatePending), using individual actor
	// state helper functions when possible
	TODO()
	panic("")
}

func (rt *VMContext) AcquireState() ActorStateHandle {
	rt._checkRunning()
	rt._checkActorStateNotAcquired()
	rt._actorStateAcquired = true

	state, ok := rt._globalStatePending.GetActor(rt._actorAddress)
	util.Assert(ok)

	stateRef := state.State().Ref()
	return &ActorStateHandle_I{
		_initValue: &stateRef,
		_rt:        rt,
	}
}

func (rt *VMContext) Compute(f ComputeFunctionID, args []util.Any) Any {
	def, found := _computeFunctionDefs[f]
	if !found {
		rt.AbortAPI("Function definition in rt.Compute() not found")
	}
	gasCost := def.GasCostFn(args)
	rt._rtAllocGas(gasCost)
	return def.Body(args)
}

Code Loading

package impl

import (
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
)

func loadActorCode(codeID abi.ActorCodeID) (vmr.ActorCode, error) {

	panic("TODO")
	// TODO: resolve circular dependency

	// // load the code from StateTree.
	// // TODO: this is going to be enabled in the future.
	// // code, err := loadCodeFromStateTree(input.InTree, codeCID)
	// return staticActorCodeRegistry.LoadActor(codeCID)
}

Exit codes

package exitcode

type ExitCode int64

const (
	// TODO: remove once canonical error codes are finalized.
	// The two placeholder bases must be distinct so that system and
	// user-defined codes cannot collide.
	SystemErrorCode_Placeholder      = ExitCode(-(1 << 30))
	UserDefinedErrorCode_Placeholder = ExitCode(-(1 << 31))
)

const Ok = ExitCode(0)

// TODO: assign all of these.
const (
	// ActorNotFound represents a failure to find an actor.
	ActorNotFound = SystemErrorCode_Placeholder + iota

	// ActorCodeNotFound represents a failure to find the code for a
	// particular actor in the VM registry.
	ActorCodeNotFound

	// InvalidMethod represents a failure to find a method in
	// an actor.
	InvalidMethod

	// InvalidArguments_System indicates that a method was called with the
	// incorrect number of arguments, or that its arguments did not satisfy
	// its preconditions.
	InvalidArguments_System

	// InsufficientFunds represents a failure to apply a message, as
	// it did not carry sufficient funds for its application.
	InsufficientFunds_System

	// InvalidCallSeqNum represents a message invocation out of sequence.
	// This happens when message.CallSeqNum is not exactly actor.CallSeqNum + 1
	InvalidCallSeqNum

	// OutOfGas is returned when the execution of an actor method
	// (including its subcalls) uses more gas than initially allocated.
	OutOfGas

	// RuntimeAPIError is returned when an actor method invocation makes a call
	// to the runtime that does not satisfy its preconditions.
	RuntimeAPIError

	// RuntimeAssertFailure is returned when an actor method invocation calls
	// rt.Assert with a false condition.
	RuntimeAssertFailure

	// MethodSubcallError is returned when an actor method's Send call has
	// returned with a failure error code (and the Send call did not specify
	// to ignore errors).
	MethodSubcallError
)

const (
	InsufficientFunds_User = UserDefinedErrorCode_Placeholder + iota
	InvalidArguments_User
	InconsistentState_User

	InvalidSectorPacking
	SealVerificationFailed
	PoStVerificationFailed
	DeadlineExceeded
	InsufficientPledgeCollateral
)

func (x ExitCode) IsSuccess() bool {
	return x == Ok
}

func (x ExitCode) IsError() bool {
	return !x.IsSuccess()
}

func (x ExitCode) AllowsStateUpdate() bool {
	return x.IsSuccess()
}

func OK() ExitCode {
	return Ok
}

func EnsureErrorCode(x ExitCode) ExitCode {
	if !x.IsError() {
		// Throwing an error with a non-error exit code is itself an error
		x = (RuntimeAPIError)
	}
	return x
}

VM Gas Cost Constants

package runtime

import (
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	actor "github.com/filecoin-project/specs-actors/actors/builtin"
	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
	util "github.com/filecoin-project/specs/util"
)

type Bytes = util.Bytes

var TODO = util.TODO

var (
	// TODO: assign all of these.
	GasAmountPlaceholder                 = msg.GasAmount_FromInt(1)
	GasAmountPlaceholder_UpdateStateTree = GasAmountPlaceholder
)

var (
	///////////////////////////////////////////////////////////////////////////
	// System operations
	///////////////////////////////////////////////////////////////////////////

	// Gas cost charged to the originator of an on-chain message (regardless of
	// whether it succeeds or fails in application) is given by:
	//   OnChainMessageBase + len(serialized message)*OnChainMessagePerByte
	// Together, these account for the cost of message propagation and validation,
	// up to but excluding any actual processing by the VM.
	// This is the cost a block producer burns when including an invalid message.
	OnChainMessageBase    = GasAmountPlaceholder
	OnChainMessagePerByte = GasAmountPlaceholder

	// Gas cost charged to the originator of a non-nil return value produced
	// by an on-chain message is given by:
	//   len(return value)*OnChainReturnValuePerByte
	OnChainReturnValuePerByte = GasAmountPlaceholder

	// Gas cost for any message send execution (including the top-level one
	// initiated by an on-chain message).
	// This accounts for the cost of loading sender and receiver actors and
	// (for top-level messages) incrementing the sender's sequence number.
	// Load and store of actor sub-state is charged separately.
	SendBase = GasAmountPlaceholder

	// Gas cost charged, in addition to SendBase, if a message send
	// is accompanied by any nonzero currency amount.
	// Accounts for writing receiver's new balance (the sender's state is
	// already accounted for).
	SendTransferFunds = GasAmountPlaceholder

	// Gas cost charged, in addition to SendBase, if a message invokes
	// a method on the receiver.
	// Accounts for the cost of loading receiver code and method dispatch.
	SendInvokeMethod = GasAmountPlaceholder

	// Gas cost (Base + len*PerByte) for any Get operation to the IPLD store
	// in the runtime VM context.
	IpldGetBase    = GasAmountPlaceholder
	IpldGetPerByte = GasAmountPlaceholder

	// Gas cost (Base + len*PerByte) for any Put operation to the IPLD store
	// in the runtime VM context.
	//
	// Note: these costs should be significantly higher than the costs for Get
	// operations, since they reflect not only serialization/deserialization
	// but also persistent storage of chain data.
	IpldPutBase    = GasAmountPlaceholder
	IpldPutPerByte = GasAmountPlaceholder

	// Gas cost for updating an actor's substate (i.e., UpdateRelease).
	// This is in addition to a per-byte fee for the state as for IPLD Get/Put.
	UpdateActorSubstate = GasAmountPlaceholder_UpdateStateTree

	// Gas cost for creating a new actor (via InitActor's Exec method).
	// Actor sub-state is charged separately.
	ExecNewActor = GasAmountPlaceholder

	// Gas cost for deleting an actor.
	DeleteActor = GasAmountPlaceholder

	///////////////////////////////////////////////////////////////////////////
	// Pure functions (VM ABI)
	///////////////////////////////////////////////////////////////////////////

	// Gas cost charged per public-key cryptography operation (e.g., signature
	// verification).
	PublicKeyCryptoOp = GasAmountPlaceholder
)

func OnChainMessage(onChainMessageLen int) msg.GasAmount {
	return msg.GasAmount_Affine(OnChainMessageBase, onChainMessageLen, OnChainMessagePerByte)
}

func OnChainReturnValue(returnValue Bytes) msg.GasAmount {
	retLen := 0
	if returnValue != nil {
		retLen = len(returnValue)
	}

	return msg.GasAmount_Affine(msg.GasAmount_Zero(), retLen, OnChainReturnValuePerByte)
}

func IpldGet(dataSize int) msg.GasAmount {
	return msg.GasAmount_Affine(IpldGetBase, dataSize, IpldGetPerByte)
}

func IpldPut(dataSize int) msg.GasAmount {
	return msg.GasAmount_Affine(IpldPutBase, dataSize, IpldPutPerByte)
}

func InvokeMethod(value abi.TokenAmount, method abi.MethodNum) msg.GasAmount {
	ret := SendBase
	if value != abi.TokenAmount(0) {
		ret = ret.Add(SendTransferFunds)
	}
	if method != actor.MethodSend {
		ret = ret.Add(SendInvokeMethod)
	}
	return ret
}

System Actors

  • There are two system actors required for VM processing:
    • InitActor - initializes new actors, records the network name
    • CronActor - runs critical functions at every epoch
  • There are two more VM-level actors:
    • AccountActor - handles user accounts (non-singleton)
    • RewardActor - holds unminted FIL and disburses block rewards

InitActor

package init

import (
	"bytes"

	addr "github.com/filecoin-project/go-address"
	actor "github.com/filecoin-project/specs-actors/actors"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
	autil "github.com/filecoin-project/specs-actors/actors/util"
	cid "github.com/ipfs/go-cid"
)

type InvocOutput = vmr.InvocOutput
type Runtime = vmr.Runtime
type Bytes = abi.Bytes

var AssertMsg = autil.AssertMsg

type InitActorState struct {
	// responsible for creating new actors
	AddressMap  map[addr.Address]abi.ActorID
	NextID      abi.ActorID
	NetworkName string
}

func (s *InitActorState) ResolveAddress(address addr.Address) addr.Address {
	actorID, ok := s.AddressMap[address]
	if ok {
		idAddr, err := addr.NewIDAddress(uint64(actorID))
		autil.Assert(err == nil)
		return idAddr
	}
	return address
}

func (s *InitActorState) MapAddressToNewID(address addr.Address) addr.Address {
	actorID := s.NextID
	s.NextID++
	s.AddressMap[address] = actorID
	idAddr, err := addr.NewIDAddress(uint64(actorID))
	autil.Assert(err == nil)
	return idAddr
}

func (st *InitActorState) CID() cid.Cid {
	panic("TODO")
}

type InitActor struct{}

func (a *InitActor) Constructor(rt Runtime) InvocOutput {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	h := rt.AcquireState()
	st := InitActorState{
		AddressMap:  map[addr.Address]abi.ActorID{}, // TODO: HAMT
		NextID:      abi.ActorID(builtin.FirstNonSingletonActorId),
		NetworkName: vmr.NetworkName(),
	}
	UpdateRelease(rt, h, st)
	return rt.ValueReturn(nil)
}

func (a *InitActor) Exec(rt Runtime, execCodeID abi.ActorCodeID, constructorParams abi.MethodParams) InvocOutput {
	rt.ValidateImmediateCallerAcceptAny()
	callerCodeID, ok := rt.GetActorCodeID(rt.ImmediateCaller())
	AssertMsg(ok, "no code for actor at %s", rt.ImmediateCaller())
	if !_codeIDSupportsExec(callerCodeID, execCodeID) {
		rt.AbortArgMsg("Caller type cannot create an actor of requested type")
	}

	// Compute a re-org-stable address.
	// This address exists for use by messages coming from outside the system, in order to
	// stably address the newly created actor even if a chain re-org causes it to end up with
	// a different ID.
	newAddr := rt.NewActorAddress()

	// Allocate an ID for this actor.
	// Store mapping of pubkey or actor address to actor ID
	h, st := _loadState(rt)
	idAddr := st.MapAddressToNewID(newAddr)
	UpdateRelease(rt, h, st)

	// Create an empty actor.
	rt.CreateActor(execCodeID, idAddr)

	// Invoke constructor. If construction fails, the error should propagate and cause
	// Exec to fail too.
	rt.SendPropagatingErrors(vmr.InvocInput{
		To:     idAddr,
		Method: builtin.MethodConstructor,
		Params: constructorParams,
		Value:  rt.ValueReceived(),
	})

	var addrBuf bytes.Buffer
	err := idAddr.MarshalCBOR(&addrBuf)
	autil.Assert(err == nil)

	return rt.ValueReturn(addrBuf.Bytes())
}

// This method is disabled until proven necessary.
//func (a *InitActorCode_I) GetActorIDForAddress(rt Runtime, address addr.Address) InvocOutput {
//	h, st := _loadState(rt)
//	actorID := st.AddressMap[address]
//	Release(rt, h, st)
//	return rt.ValueReturn(Bytes(addr.Serialize_ActorID(actorID)))
//}

func _codeIDSupportsExec(callerCodeID abi.ActorCodeID, execCodeID abi.ActorCodeID) bool {
	if execCodeID == builtin.AccountActorCodeID {
		// Special case: account actors must be created implicitly by sending value;
		// cannot be created via exec.
		return false
	}

	if execCodeID == builtin.PaymentChannelActorCodeID {
		return true
	}

	if execCodeID == builtin.StorageMinerActorCodeID {
		if callerCodeID == builtin.StoragePowerActorCodeID {
			return true
		}
	}

	return false
}

///// Boilerplate /////

func _loadState(rt Runtime) (vmr.ActorStateHandle, InitActorState) {
	h := rt.AcquireState()
	stateCID := cid.Cid(h.Take())
	var state InitActorState
	if !rt.IpldGet(stateCID, &state) {
		rt.AbortAPI("state not found")
	}
	return h, state
}

func Release(rt Runtime, h vmr.ActorStateHandle, st InitActorState) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(&st))
	h.Release(checkCID)
}

func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st InitActorState) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(&st))
	h.UpdateRelease(newCID)
}

CronActor

package cron

import (
	addr "github.com/filecoin-project/go-address"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
)

type CronActorState struct{}

type CronActor struct {
	// TODO move Entries into the CronActorState struct
	Entries []CronTableEntry
}

type CronTableEntry struct {
	ToAddr    addr.Address
	MethodNum abi.MethodNum
}

func (a *CronActor) Constructor(rt vmr.Runtime) vmr.InvocOutput {
	// Nothing; intentionally left blank.
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	return rt.SuccessReturn()
}

func (a *CronActor) EpochTick(rt vmr.Runtime) vmr.InvocOutput {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)

	// a.Entries is effectively a static registry for now,
	// populated by the interpreter at initialization.
	for _, entry := range a.Entries {
		rt.SendCatchingErrors(vmr.InvocInput{
			To:     entry.ToAddr,
			Method: entry.MethodNum,
			Params: nil,
			Value:  abi.TokenAmount(0),
		})
	}

	return rt.SuccessReturn()
}

AccountActor

package account

import (
	addr "github.com/filecoin-project/go-address"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
	cid "github.com/ipfs/go-cid"
)

type InvocOutput = vmr.InvocOutput

type AccountActor struct{}

func (a *AccountActor) Constructor(rt vmr.Runtime) InvocOutput {
	// Nothing; intentionally left blank.
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	return rt.SuccessReturn()
}

type AccountActorState struct {
	Address addr.Address
}

func (AccountActorState) CID() cid.Cid {
	panic("TODO")
}

RewardActor

RewardActor is where unminted Filecoin tokens are kept. RewardActor contains a RewardMap, which is a mapping from owner addresses to Reward structs.

The Reward struct exists to preserve the flexibility of introducing block reward vesting into the protocol. MintReward creates a new Reward struct and adds it to the RewardMap.

A Reward struct contains a StartEpoch that records when the Reward was created, a Value that represents the total number of tokens awarded, and an EndEpoch at which the reward becomes fully vested. VestingFunction is currently an enum representing the choice of vesting schedule. AmountWithdrawn records how many tokens have been withdrawn from the Reward struct so far. An owner address can call WithdrawReward, which withdraws all of that address's currently vested tokens from the RewardMap. When AmountWithdrawn equals Value in a Reward struct, the struct is removed from the RewardMap.
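The linear vesting arithmetic described above can be illustrated with a minimal, self-contained sketch. The `epoch` and `tokens` types here are hypothetical stand-ins for the spec's `abi.ChainEpoch` and `abi.TokenAmount`, and integer division stands in for the big-integer math the spec code marks as TODO:

```go
package main

import "fmt"

// Hypothetical stand-ins for abi.ChainEpoch and abi.TokenAmount.
type epoch int64
type tokens int64

// amountVested returns how much of a linearly vesting reward of total value,
// vesting over [start, end], is unlocked once elapsed epochs have passed
// since start. The vested proportion is capped at 1.
func amountVested(value tokens, start, end, elapsed epoch) tokens {
	period := end - start
	if period <= 0 || elapsed >= period {
		return value // fully vested
	}
	if elapsed <= 0 {
		return 0 // nothing vested yet
	}
	// Integer arithmetic stands in for BigInt math.
	return tokens(int64(value) * int64(elapsed) / int64(period))
}

func main() {
	// A reward of 100 tokens vesting linearly over 10 epochs.
	fmt.Println(amountVested(100, 0, 10, 5))  // halfway through
	fmt.Println(amountVested(100, 0, 10, 12)) // past the end
}
```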

package reward

import (
	"math"

	addr "github.com/filecoin-project/go-address"
	actor "github.com/filecoin-project/specs-actors/actors"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
	serde "github.com/filecoin-project/specs-actors/actors/serde"
	autil "github.com/filecoin-project/specs-actors/actors/util"
	cid "github.com/ipfs/go-cid"
)

type InvocOutput = vmr.InvocOutput
type Runtime = vmr.Runtime

var IMPL_FINISH = autil.IMPL_FINISH
var IMPL_TODO = autil.IMPL_TODO
var TODO = autil.TODO

type VestingFunction int64

const (
	None VestingFunction = iota
	Linear
	// TODO: potential options
	// PieceWise
	// Quadratic
	// Exponential
)

type Reward struct {
	VestingFunction
	StartEpoch      abi.ChainEpoch
	EndEpoch        abi.ChainEpoch
	Value           abi.TokenAmount
	AmountWithdrawn abi.TokenAmount
}

func (r *Reward) AmountVested(elapsedEpoch abi.ChainEpoch) abi.TokenAmount {
	switch r.VestingFunction {
	case None:
		return r.Value
	case Linear:
		TODO() // BigInt
		vestedProportion := math.Min(1.0, float64(elapsedEpoch)/float64(r.EndEpoch-r.StartEpoch))
		return abi.TokenAmount(uint64(float64(r.Value) * vestedProportion))
	default:
		return abi.TokenAmount(0)
	}
}

// ownerAddr to a collection of Reward
// TODO: AMT
type RewardBalanceAMT map[addr.Address][]Reward

type RewardActorState struct {
	RewardMap RewardBalanceAMT
}

func (st *RewardActorState) CID() cid.Cid {
	panic("TODO")
}

func (st *RewardActorState) _withdrawReward(rt vmr.Runtime, ownerAddr addr.Address) abi.TokenAmount {
	rewards, found := st.RewardMap[ownerAddr]
	if !found {
		rt.AbortStateMsg("ra._withdrawReward: ownerAddr not found in RewardMap.")
	}

	rewardToWithdrawTotal := abi.TokenAmount(0)
	indicesToRemove := make([]int, 0, len(rewards))

	for i, r := range rewards {
		elapsedEpoch := rt.CurrEpoch() - r.StartEpoch
		unlockedReward := r.AmountVested(elapsedEpoch)
		withdrawableReward := unlockedReward - r.AmountWithdrawn

		if withdrawableReward < 0 {
			rt.AbortStateMsg("ra._withdrawReward: negative withdrawableReward.")
		}

		rewards[i].AmountWithdrawn = unlockedReward // modify rewards in place
		rewardToWithdrawTotal += withdrawableReward

		if unlockedReward == r.Value {
			indicesToRemove = append(indicesToRemove, i)
		}
	}

	updatedRewards := removeIndices(rewards, indicesToRemove)
	st.RewardMap[ownerAddr] = updatedRewards

	return rewardToWithdrawTotal
}

type RewardActor struct{}

func (a *RewardActor) Constructor(rt vmr.Runtime) InvocOutput {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	// initialize Reward Map with investor accounts
	panic("TODO")
}

func (a *RewardActor) State(rt Runtime) (vmr.ActorStateHandle, RewardActorState) {
	h := rt.AcquireState()
	stateCID := cid.Cid(h.Take())
	var state RewardActorState
	if !rt.IpldGet(stateCID, &state) {
		rt.AbortAPI("state not found")
	}
	return h, state
}

func (a *RewardActor) WithdrawReward(rt vmr.Runtime) {
	vmr.RT_ValidateImmediateCallerIsSignable(rt)
	ownerAddr := rt.ImmediateCaller()

	h, st := a.State(rt)

	// withdraw available funds from RewardMap
	withdrawableReward := st._withdrawReward(rt, ownerAddr)
	UpdateReleaseRewardActorState(rt, h, st)

	rt.SendFunds(ownerAddr, withdrawableReward)
}

func (a *RewardActor) AwardBlockReward(
	rt vmr.Runtime,
	miner addr.Address,
	penalty abi.TokenAmount,
	minerNominalPower abi.StoragePower,
	currPledge abi.TokenAmount,
) {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)

	inds := rt.CurrIndices()
	pledgeReq := inds.PledgeCollateralReq(minerNominalPower)
	currReward := inds.GetCurrBlockRewardForMiner(minerNominalPower, currPledge)
	TODO()                                                                              // BigInt
	underPledge := math.Max(float64(abi.TokenAmount(0)), float64(pledgeReq-currPledge)) // 0 if over collateralized
	rewardToGarnish := math.Min(float64(currReward), float64(underPledge))

	TODO()
	// handle penalty here
	// also handle penalty greater than reward
	actualReward := currReward - abi.TokenAmount(rewardToGarnish)
	if rewardToGarnish > 0 {
		// Send fund to SPA for collateral
		rt.Send(
			builtin.StoragePowerActorAddr,
			builtin.Method_StoragePowerActor_AddBalance,
			serde.MustSerializeParams(miner),
			abi.TokenAmount(rewardToGarnish),
		)
	}

	h, st := a.State(rt)
	if actualReward > 0 {
		// put Reward into RewardMap
		newReward := &Reward{
			StartEpoch:      rt.CurrEpoch(),
			EndEpoch:        rt.CurrEpoch(),
			Value:           actualReward,
			AmountWithdrawn: abi.TokenAmount(0),
			VestingFunction: None,
		}
		rewards, found := st.RewardMap[miner]
		if !found {
			rewards = make([]Reward, 0)
		}
		rewards = append(rewards, *newReward)
		st.RewardMap[miner] = rewards
	}
	UpdateReleaseRewardActorState(rt, h, st)
}

func UpdateReleaseRewardActorState(rt Runtime, h vmr.ActorStateHandle, st RewardActorState) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(&st))
	h.UpdateRelease(newCID)
}

func removeIndices(rewards []Reward, indices []int) []Reward {
	// remove fully paid out Rewards by indices
	panic("TODO")
}

VM Interpreter - Message Invocation (Outside VM)

The VM interpreter orchestrates the execution of messages from a tipset on that tipset’s parent state, producing a new state and a sequence of message receipts. The CIDs of this new state and of the receipt collection are included in blocks from the subsequent epoch, which must agree about those CIDs in order to form a new tipset.

Every state change is driven by the execution of a message. The messages from all the blocks in a tipset must be executed in order to produce a next state. All messages from the first block are executed before those of second and subsequent blocks in the tipset. For each block, BLS-aggregated messages are executed first, then SECP signed messages.

Implicit messages

In addition to the messages explicitly included in each block, a few state changes at each epoch are made by implicit messages. Implicit messages are not transmitted between nodes, but constructed by the interpreter at evaluation time.

For each block in a tipset, an implicit message:

  • invokes the block producer’s miner actor to process the (already-validated) election PoSt submission, as the first message in the block;
  • invokes the reward actor to pay the block reward to the miner’s owner account, as the final message in the block;

For each tipset, an implicit message:

  • invokes the cron actor to process automated checks and payments, as the final message in the tipset.

All implicit messages are constructed with a From address being the distinguished system account actor. They specify a gas price of zero, but must be included in the computation. They must succeed (have an exit code of zero) in order for the new state to be computed. Receipts for implicit messages are not included in the receipt list; only explicit messages have an explicit receipt.

Gas payments

In most cases, the sender of a message pays the miner which produced the block including that message a gas fee for its execution.

The gas payments for each message execution are paid to the miner owner account immediately after that message is executed. There are no encumbrances to either the block reward or gas fees earned: both may be spent immediately.

Duplicate messages

Since different miners produce blocks in the same epoch, multiple blocks in a single tipset may include the same message (identified by the same CID). When this happens, the message is processed only the first time it is encountered in the tipset’s canonical order. Subsequent instances of the message are ignored: they do not mutate state, do not produce a receipt, and do not pay gas to the block producer.

The sequence of executions for a tipset is thus summarised:

  • process election post for first block
  • messages for first block (BLS before SECP)
  • pay reward for first block
  • process election post for second block
  • messages for second block (BLS before SECP, skipping any already encountered)
  • pay reward for second block
  • [… subsequent blocks …]
  • cron tick

Message validity and failure

Every message in a valid block can be processed and produce a receipt (note that block validity implies all messages are syntactically valid – see Message Syntax – and correctly signed). However, execution may or may not succeed, depending on the state to which the message is applied. If the execution of a message fails, the corresponding receipt will carry a non-zero exit code.

If a message fails due to a reason that can reasonably be attributed to the miner including a message that could never have succeeded in the parent state, or because the sender lacks funds to cover the maximum message cost, then the miner pays a penalty by burning the gas fee (rather than the sender paying fees to the block miner).

The only state changes resulting from a message failure are either:

  • incrementing of the sending actor’s CallSeqNum, and payment of gas fees from the sender to the owner of the miner of the block including the message; or
  • a penalty equivalent to the gas fee for the failed message, burnt by the miner (sender’s CallSeqNum unchanged).

A message execution will fail if, in the immediately preceding state:

  • the From actor does not exist in the state (miner penalized),
  • the From actor is not an account actor (miner penalized),
  • the CallSeqNum of the message does not match the CallSeqNum of the From actor (miner penalized),
  • the From actor does not have sufficient balance to cover the sum of the message Value plus the maximum gas cost, GasLimit * GasPrice (miner penalized),
  • the To actor does not exist in state and the To address is not a pubkey-style address,
  • the To actor exists (or is implicitly created as an account) but does not have a method corresponding to the non-zero MethodNum,
  • deserialized Params is not an array of length matching the arity of the To actor’s MethodNum method,
  • deserialized Params are not valid for the types specified by the To actor’s MethodNum method,
  • the invoked method consumes more gas than the GasLimit allows,
  • the invoked method exits with a non-zero code (via Runtime.Abort()), or
  • any inner message sent by the receiver fails for any of the above reasons.

Note that if the To actor does not exist in state and the address is a valid H(pubkey) address, it will be created as an account actor.
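The receiver-resolution rule (including implicit account creation) can be sketched as follows; the types, protocol constants, and the "account" code label are illustrative stand-ins for the real go-address and actor definitions:

```go
package main

import "fmt"

// Illustrative address protocols (stand-ins for go-address constants).
const (
	ID = iota
	SECP256K1
	BLS
)

type Address struct{ Protocol int }

type Actor struct{ Code string }

// resolveToActor sketches the rule above: a missing actor at a pubkey-style
// (SECP256K1 or BLS) address is implicitly created as an account actor,
// while a missing actor at any other address is an execution error.
func resolveToActor(state map[Address]*Actor, to Address) (*Actor, error) {
	if a, ok := state[to]; ok {
		return a, nil
	}
	if to.Protocol == SECP256K1 || to.Protocol == BLS {
		a := &Actor{Code: "account"}
		state[to] = a // implicit account creation
		return a, nil
	}
	return nil, fmt.Errorf("actor not found")
}

func main() {
	state := map[Address]*Actor{}
	a, err := resolveToActor(state, Address{Protocol: BLS})
	fmt.Println(a.Code, err) // account <nil>
	_, err = resolveToActor(state, Address{Protocol: ID})
	fmt.Println(err) // actor not found
}
```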


vm/interpreter interface

import addr "github.com/filecoin-project/go-address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import vmri "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/impl"
import node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
import chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
import abi "github.com/filecoin-project/specs-actors/actors/abi"

type UInt64 UInt

// The messages from one block in a tipset.
type BlockMessages struct {
    BLSMessages   [msg.UnsignedMessage]
    SECPMessages  [msg.SignedMessage]
    Miner         addr.Address  // The block miner's actor address
    PoStProof     Bytes  // The miner's Election PoSt proof output
}

// The messages from a tipset, grouped by block.
type TipSetMessages struct {
    Blocks  [BlockMessages]
    Epoch   UInt64  // The chain epoch of the blocks
}

type VMInterpreter struct {
    Node node_base.FilecoinNode
    ApplyTipSetMessages(
        inTree  st.StateTree
        tipset  chain.Tipset
        msgs    TipSetMessages
    ) struct {outTree st.StateTree, ret [vmri.MessageReceipt]}

    ApplyMessage(
        inTree          st.StateTree
        chain           chain.Chain
        msg             msg.UnsignedMessage
        onChainMsgSize  int
        minerAddr       addr.Address
    ) struct {
        outTree          st.StateTree
        ret              vmri.MessageReceipt
        retMinerPenalty  abi.TokenAmount
    }
}

vm/interpreter implementation

package interpreter

import (
	addr "github.com/filecoin-project/go-address"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	initact "github.com/filecoin-project/specs-actors/actors/builtin/init"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
	exitcode "github.com/filecoin-project/specs-actors/actors/runtime/exitcode"
	indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
	serde "github.com/filecoin-project/specs-actors/actors/serde"
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
	actstate "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
	gascost "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/gascost"
	vmri "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/impl"
	st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	util "github.com/filecoin-project/specs/util"
	cid "github.com/ipfs/go-cid"
)

type Bytes = util.Bytes

var Assert = util.Assert
var TODO = util.TODO
var IMPL_FINISH = util.IMPL_FINISH

type SenderResolveSpec int

const (
	SenderResolveSpec_OK SenderResolveSpec = 1 + iota
	SenderResolveSpec_Invalid
)

// Applies all the messages in a tipset, along with implicit block- and tipset-specific state
// transitions.
func (vmi *VMInterpreter_I) ApplyTipSetMessages(inTree st.StateTree, tipset chain.Tipset, msgs TipSetMessages) (outTree st.StateTree, receipts []vmri.MessageReceipt) {
	outTree = inTree
	seenMsgs := make(map[cid.Cid]struct{}) // CIDs of messages already seen once.
	var receipt vmri.MessageReceipt
	store := vmi.Node().Repository().StateStore()
	// get chain from Tipset
	chainRand := &chain.Chain_I{
		HeadTipset_: tipset,
	}

	for _, blk := range msgs.Blocks() {
		minerAddr := blk.Miner()
		util.Assert(minerAddr.Protocol() == addr.ID) // Block syntactic validation requires this.

		// Process block miner's Election PoSt.
		epostMessage := _makeElectionPoStMessage(outTree, minerAddr)
		outTree = _applyMessageBuiltinAssert(store, outTree, chainRand, epostMessage, minerAddr)

		minerPenaltyTotal := abi.TokenAmount(0)
		var minerPenaltyCurr abi.TokenAmount

		minerGasRewardTotal := abi.TokenAmount(0)
		var minerGasRewardCurr abi.TokenAmount

		// Process BLS messages from the block.
		for _, m := range blk.BLSMessages() {
			_, found := seenMsgs[_msgCID(m)]
			if found {
				continue
			}
			onChainMessageLen := len(msg.Serialize_UnsignedMessage(m))
			outTree, receipt, minerPenaltyCurr, minerGasRewardCurr = vmi.ApplyMessage(outTree, chainRand, m, onChainMessageLen, minerAddr)
			minerPenaltyTotal += minerPenaltyCurr
			minerGasRewardTotal += minerGasRewardCurr

			receipts = append(receipts, receipt)
			seenMsgs[_msgCID(m)] = struct{}{}
		}

		// Process SECP messages from the block.
		for _, sm := range blk.SECPMessages() {
			m := sm.Message()
			_, found := seenMsgs[_msgCID(m)]
			if found {
				continue
			}
			onChainMessageLen := len(msg.Serialize_SignedMessage(sm))
			outTree, receipt, minerPenaltyCurr, minerGasRewardCurr = vmi.ApplyMessage(outTree, chainRand, m, onChainMessageLen, minerAddr)
			minerPenaltyTotal += minerPenaltyCurr
			minerGasRewardTotal += minerGasRewardCurr

			receipts = append(receipts, receipt)
			seenMsgs[_msgCID(m)] = struct{}{}
		}

		// transfer gas reward from BurntFundsActor to RewardActor
		_withTransferFundsAssert(outTree, builtin.BurntFundsActorAddr, builtin.RewardActorAddr, minerGasRewardTotal)

		// Pay block reward.
		rewardMessage := _makeBlockRewardMessage(outTree, minerAddr, minerPenaltyTotal, minerGasRewardTotal)
		outTree = _applyMessageBuiltinAssert(store, outTree, chainRand, rewardMessage, minerAddr)
	}

	// Invoke cron tick.
	// Since this is outside any block, the top level block winner is declared as the system actor.
	cronMessage := _makeCronTickMessage(outTree)
	outTree = _applyMessageBuiltinAssert(store, outTree, chainRand, cronMessage, builtin.SystemActorAddr)

	return
}

func (vmi *VMInterpreter_I) ApplyMessage(inTree st.StateTree, chain chain.Chain, message msg.UnsignedMessage, onChainMessageSize int, minerAddr addr.Address) (
	retTree st.StateTree, retReceipt vmri.MessageReceipt, retMinerPenalty abi.TokenAmount, retMinerGasReward abi.TokenAmount) {

	store := vmi.Node().Repository().StateStore()
	senderAddr := _resolveSender(store, inTree, message.From())

	vmiGasRemaining := message.GasLimit()
	vmiGasUsed := msg.GasAmount_Zero()

	_applyReturn := func(
		tree st.StateTree, invocOutput vmr.InvocOutput, exitCode exitcode.ExitCode,
		senderResolveSpec SenderResolveSpec) {

		vmiGasRemainingFIL := _gasToFIL(vmiGasRemaining, message.GasPrice())
		vmiGasUsedFIL := _gasToFIL(vmiGasUsed, message.GasPrice())

		switch senderResolveSpec {
		case SenderResolveSpec_OK:
			// In this case, the sender is valid and has already transferred funds to the burnt funds actor
			// sufficient for the gas limit. Thus, we may refund the unused gas funds to the sender here.
			Assert(!message.GasLimit().LessThan(vmiGasUsed))
			Assert(message.GasLimit().Equals(vmiGasUsed.Add(vmiGasRemaining)))
			tree = _withTransferFundsAssert(tree, builtin.BurntFundsActorAddr, senderAddr, vmiGasRemainingFIL)
			retMinerGasReward = vmiGasUsedFIL
			retMinerPenalty = abi.TokenAmount(0)

		case SenderResolveSpec_Invalid:
			retMinerPenalty = vmiGasUsedFIL
			retMinerGasReward = abi.TokenAmount(0)

		default:
			Assert(false)
		}

		retTree = tree
		retReceipt = vmri.MessageReceipt_Make(invocOutput, exitCode, vmiGasUsed)
	}

	// TODO move this to a package with a less redundant name
	_applyError := func(tree st.StateTree, errExitCode exitcode.ExitCode, senderResolveSpec SenderResolveSpec) {
		_applyReturn(tree, vmr.InvocOutput_Make(nil), errExitCode, senderResolveSpec)
	}

	// Deduct an amount of gas corresponding to cost about to be incurred, but not necessarily
	// incurred yet.
	_vmiAllocGas := func(amount msg.GasAmount) (vmiAllocGasOK bool) {
		vmiGasRemaining, vmiAllocGasOK = vmiGasRemaining.SubtractIfNonnegative(amount)
		vmiGasUsed = message.GasLimit().Subtract(vmiGasRemaining)
		Assert(!vmiGasRemaining.LessThan(msg.GasAmount_Zero()))
		Assert(!vmiGasUsed.LessThan(msg.GasAmount_Zero()))
		return
	}

	// Deduct an amount of gas corresponding to costs already incurred, and for which the
	// gas cost must be paid even if it would cause the gas used to exceed the limit.
	_vmiBurnGas := func(amount msg.GasAmount) (vmiBurnGasOK bool) {
		vmiGasUsedPre := vmiGasUsed
		vmiBurnGasOK = _vmiAllocGas(amount)
		if !vmiBurnGasOK {
			vmiGasRemaining = msg.GasAmount_Zero()
			vmiGasUsed = vmiGasUsedPre.Add(amount)
		}
		return
	}

	ok := _vmiBurnGas(gascost.OnChainMessage(onChainMessageSize))
	if !ok {
		// Invalid message; insufficient gas limit to pay for the on-chain message size.
		_applyError(inTree, exitcode.OutOfGas, SenderResolveSpec_Invalid)
		return
	}

	fromActor, ok := inTree.GetActor(senderAddr)
	if !ok {
		// Execution error; sender does not exist at time of message execution.
		_applyError(inTree, exitcode.ActorNotFound, SenderResolveSpec_Invalid)
		return
	}

	// make sure this is the right message order for fromActor
	if message.CallSeqNum() != fromActor.CallSeqNum() {
		_applyError(inTree, exitcode.InvalidCallSeqNum, SenderResolveSpec_Invalid)
		return
	}

	// Check sender balance.
	gasLimitCost := _gasToFIL(message.GasLimit(), message.GasPrice())
	tidx := indicesFromStateTree(inTree)
	networkTxnFee := tidx.NetworkTransactionFee(
		inTree.GetActorCodeID_Assert(message.To()), message.Method())
	totalCost := message.Value() + gasLimitCost + networkTxnFee
	if fromActor.Balance() < totalCost {
		// Execution error; sender does not have sufficient funds to pay for the gas limit.
		_applyError(inTree, exitcode.InsufficientFunds_System, SenderResolveSpec_Invalid)
		return
	}

	// At this point, construct compTreePreSend as a state snapshot which includes
	// the sender paying gas, and the sender's CallSeqNum being incremented;
	// at least that much state change will be persisted even if the
	// method invocation subsequently fails.
	compTreePreSend := _withTransferFundsAssert(inTree, senderAddr, builtin.BurntFundsActorAddr, gasLimitCost+networkTxnFee)
	compTreePreSend = compTreePreSend.Impl().WithIncrementedCallSeqNum_Assert(senderAddr)

	invoc := _makeInvocInput(message)
	sendRet, compTreePostSend := _applyMessageInternal(store, compTreePreSend, chain, message.CallSeqNum(), senderAddr, invoc, vmiGasRemaining, minerAddr)

	ok = _vmiBurnGas(sendRet.GasUsed)
	if !ok {
		panic("Interpreter error: runtime execution used more gas than provided")
	}

	ok = _vmiAllocGas(gascost.OnChainReturnValue(sendRet.ReturnValue))
	if !ok {
		// Insufficient gas remaining to cover the on-chain return value; proceed as in the case
		// of method execution failure.
		_applyError(compTreePreSend, exitcode.OutOfGas, SenderResolveSpec_OK)
		return
	}

	compTreeRet := compTreePreSend
	if sendRet.ExitCode.AllowsStateUpdate() {
		compTreeRet = compTreePostSend
	}

	_applyReturn(
		compTreeRet, vmr.InvocOutput_Make(sendRet.ReturnValue), sendRet.ExitCode, SenderResolveSpec_OK)
	return
}

// Resolves an address through the InitActor's map.
// Returns the resolved address (which will be an ID address) if found, else the original address.
func _resolveSender(store ipld.GraphStore, tree st.StateTree, address addr.Address) addr.Address {
	initState, ok := tree.GetActor(builtin.InitActorAddr)
	util.Assert(ok)
	serialized, ok := store.Get(cid.Cid(initState.State()))
	util.Assert(ok)
	var initSubState initact.InitActorState
	serde.MustDeserialize(serialized, &initSubState)
	return initSubState.ResolveAddress(address)
}

func _applyMessageBuiltinAssert(store ipld.GraphStore, tree st.StateTree, chain chain.Chain, message msg.UnsignedMessage, minerAddr addr.Address) st.StateTree {
	senderAddr := message.From()
	Assert(senderAddr == builtin.SystemActorAddr)
	Assert(senderAddr.Protocol() == addr.ID)
	// Note: this message CallSeqNum is never checked (b/c it's created in this file), but probably should be.
	// Since it changes state, we should be sure about the state transition.
	// Alternatively we could special-case the system actor and declare that its CallSeqNumber
	// never changes (saving us the state-change overhead).
	tree = tree.Impl().WithIncrementedCallSeqNum_Assert(senderAddr)

	invoc := _makeInvocInput(message)
	retReceipt, retTree := _applyMessageInternal(store, tree, chain, message.CallSeqNum(), senderAddr, invoc, message.GasLimit(), minerAddr)
	if retReceipt.ExitCode != exitcode.OK() {
		panic("internal message application failed")
	}

	return retTree
}

func _applyMessageInternal(store ipld.GraphStore, tree st.StateTree, chain chain.Chain, messageCallSequenceNumber actstate.CallSeqNum, senderAddr addr.Address, invoc vmr.InvocInput,
	gasRemainingInit msg.GasAmount, topLevelBlockWinner addr.Address) (vmri.MessageReceipt, st.StateTree) {

	rt := vmri.VMContext_Make(
		store,
		chain,
		senderAddr,
		topLevelBlockWinner,
		messageCallSequenceNumber,
		actstate.CallSeqNum(0),
		tree,
		senderAddr,
		abi.TokenAmount(0),
		gasRemainingInit,
	)

	return rt.SendToplevelFromInterpreter(invoc)
}

func _withTransferFundsAssert(tree st.StateTree, from addr.Address, to addr.Address, amount abi.TokenAmount) st.StateTree {
	// TODO: assert amount nonnegative
	retTree, err := tree.Impl().WithFundsTransfer(from, to, amount)
	if err != nil {
		panic("Interpreter error: insufficient funds (or transfer error) despite checks")
	} else {
		return retTree
	}
}

func indicesFromStateTree(st st.StateTree) indices.Indices {
	TODO()
	panic("")
}

func _gasToFIL(gas msg.GasAmount, price abi.TokenAmount) abi.TokenAmount {
	IMPL_FINISH()
	panic("") // BigInt arithmetic
	// return abi.TokenAmount(util.UVarint(gas) * util.UVarint(price))
}

func _makeInvocInput(message msg.UnsignedMessage) vmr.InvocInput {
	return vmr.InvocInput{
		To:     message.To(), // Receiver address is resolved during execution.
		Method: message.Method(),
		Params: message.Params(),
		Value:  message.Value(),
	}
}

// Builds a message for paying block reward to a miner's owner.
func _makeBlockRewardMessage(state st.StateTree, minerAddr addr.Address, penalty abi.TokenAmount, gasReward abi.TokenAmount) msg.UnsignedMessage {
	params := serde.MustSerializeParams(minerAddr, penalty)
	TODO() // serialize other inputs to BlockRewardMessage or get this from query in RewardActor

	sysActor, ok := state.GetActor(builtin.SystemActorAddr)
	Assert(ok)

	return &msg.UnsignedMessage_I{
		From_:       builtin.SystemActorAddr,
		To_:         builtin.RewardActorAddr,
		Method_:     builtin.Method_RewardActor_AwardBlockReward,
		Params_:     params,
		CallSeqNum_: sysActor.CallSeqNum(),
		Value_:      0,
		GasPrice_:   0,
		GasLimit_:   msg.GasAmount_SentinelUnlimited(),
	}
}

// Builds a message for submitting ElectionPost on behalf of a miner actor.
func _makeElectionPoStMessage(state st.StateTree, minerActorAddr addr.Address) msg.UnsignedMessage {
	sysActor, ok := state.GetActor(builtin.SystemActorAddr)
	Assert(ok)
	return &msg.UnsignedMessage_I{
		From_:       builtin.SystemActorAddr,
		To_:         minerActorAddr,
		Method_:     builtin.Method_StorageMinerActor_OnVerifiedElectionPoSt,
		Params_:     nil,
		CallSeqNum_: sysActor.CallSeqNum(),
		Value_:      0,
		GasPrice_:   0,
		GasLimit_:   msg.GasAmount_SentinelUnlimited(),
	}
}

// Builds a message for invoking the cron actor tick.
func _makeCronTickMessage(state st.StateTree) msg.UnsignedMessage {
	sysActor, ok := state.GetActor(builtin.SystemActorAddr)
	Assert(ok)
	return &msg.UnsignedMessage_I{
		From_:       builtin.SystemActorAddr,
		To_:         builtin.CronActorAddr,
		Method_:     builtin.Method_CronActor_EpochTick,
		Params_:     nil,
		CallSeqNum_: sysActor.CallSeqNum(),
		Value_:      0,
		GasPrice_:   0,
		GasLimit_:   msg.GasAmount_SentinelUnlimited(),
	}
}

func _msgCID(msg msg.UnsignedMessage) cid.Cid {
	panic("TODO")
}

vm/interpreter/registry

package interpreter

import (
	"errors"

	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	accact "github.com/filecoin-project/specs-actors/actors/builtin/account"
	cronact "github.com/filecoin-project/specs-actors/actors/builtin/cron"
	initact "github.com/filecoin-project/specs-actors/actors/builtin/init"
	smarkact "github.com/filecoin-project/specs-actors/actors/builtin/storage_market"
	spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
)

var (
	ErrActorNotFound = errors.New("Actor Not Found")
)

var staticActorCodeRegistry = &actorCodeRegistry{code: make(map[abi.ActorCodeID]vmr.ActorCode)}

type actorCodeRegistry struct {
	code map[abi.ActorCodeID]vmr.ActorCode
}

func (r *actorCodeRegistry) _registerActor(id abi.ActorCodeID, actor vmr.ActorCode) {
	r.code[id] = actor
}

func (r *actorCodeRegistry) _loadActor(id abi.ActorCodeID) (vmr.ActorCode, error) {
	a, ok := r.code[id]
	if !ok {
		return nil, ErrActorNotFound
	}
	return a, nil
}

func RegisterActor(id abi.ActorCodeID, actor vmr.ActorCode) {
	staticActorCodeRegistry._registerActor(id, actor)
}

func LoadActor(id abi.ActorCodeID) (vmr.ActorCode, error) {
	return staticActorCodeRegistry._loadActor(id)
}

// init is called in Go during initialization of a program.
// this is an idiomatic way to do this. Implementations should approach this
// however they wish. The point is to initialize a static registry with
// built in pure types that have the code for each actor. Once we have
// a way to load code from the StateTree, use that instead.
func init() {
	_registerBuiltinActors()
}

func _registerBuiltinActors() {
	// TODO

	cron := &cronact.CronActor{}

	RegisterActor(builtin.InitActorCodeID, &initact.InitActor{})
	RegisterActor(builtin.CronActorCodeID, cron)
	RegisterActor(builtin.AccountActorCodeID, &accact.AccountActor{})
	RegisterActor(builtin.StoragePowerActorCodeID, &spowact.StoragePowerActor{})
	RegisterActor(builtin.StorageMarketActorCodeID, &smarkact.StorageMarketActor{})

	// wire in CRON actions.
	// TODO: move this to CronActor's constructor method
	cron.Entries = append(cron.Entries, cronact.CronTableEntry{
		ToAddr:    builtin.StoragePowerActorAddr,
		MethodNum: builtin.Method_StoragePowerActor_OnEpochTickEnd,
	})

	cron.Entries = append(cron.Entries, cronact.CronTableEntry{
		ToAddr:    builtin.StorageMarketActorAddr,
		MethodNum: builtin.Method_StorageMarketActor_OnEpochTickEnd,
	})
}

Blockchain

The Filecoin Blockchain is a distributed virtual machine that achieves consensus, processes messages, accounts for storage, and maintains security in the Filecoin Protocol. It is the main interface linking various actors in the Filecoin system.

It includes:

  • A Message Pool subsystem that nodes use to track and propagate messages related to the storage market throughout a gossip network.
  • A VM - Virtual Machine subsystem used to interpret and execute messages in order to update system state.
  • A subsystem which manages the creation and maintenance of state trees (the system state) deterministically generated by the VM from a given subchain.
  • A ChainSync subsystem that synchronizes the blockchain by tracking and propagating validated message blocks, maintaining sets of candidate chains on which the miner may mine, and running syntactic validation on incoming blocks.
  • A Storage Power Consensus subsystem which tracks storage state for a given chain and helps the blockchain system choose subchains to extend and blocks to include in them.

And also:

  • A Chain Manager – which maintains a given chain’s state, providing facilities to other blockchain subsystems which will query state about the latest chain in order to run, and ensuring incoming blocks are semantically validated before inclusion into the chain.
  • A Block Producer – which is called in the event of a successful leader election in order to produce a new block that will extend the current heaviest chain before forwarding it to the syncer for propagation.

At a high level, the Filecoin blockchain grows through successive rounds of leader election in which a number of miners are elected to generate a block, whose inclusion in the chain earns them block rewards. Filecoin’s blockchain runs on storage power: its consensus algorithm, by which miners agree on which subchain to mine, is predicated on the amount of storage backing that subchain. The Storage Power Consensus subsystem maintains a Power Table that tracks the amount of storage that storage miner actors have contributed to the network through Sector commitments and Proofs of Spacetime.

Most of the functions of the Filecoin blockchain system are detailed in the code below.

Blocks

Block

The Block is a unit of the Filecoin blockchain.

A block header contains information relevant to a particular point in time over which the network may achieve consensus.

import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import addr "github.com/filecoin-project/go-address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import util "github.com/filecoin-project/specs/util"

type ChainWeight UVarint
type MessageReceipt util.Bytes

// On-chain representation of a block header.
type BlockHeader struct {
    // Chain linking
    Parents                [&BlockHeader]
    ParentWeight           ChainWeight
    // State
    ParentState            &st.StateTree
    ParentMessageReceipts  &[&MessageReceipt]  // array-mapped trie ref

    // Consensus things
    Epoch                  abi.ChainEpoch
    Timestamp              clock.UnixTime
    Ticket                 Ticket

    Miner                  addr.Address
    ElectionPoStOutput     ElectionPoStVerifyInfo

    // Fork Signal bitfield with bits used to advertise support for
    // proposed forks and reset if fork is executed.
    ForkSignal             uint64

    // Proposed update
    Messages               &TxMeta
    BLSAggregate           filcrypto.Signature

    // Signatures
    Signature              filcrypto.Signature

    //	SerializeSigned()            []byte
    //	ComputeUnsignedFingerprint() []
}

type TxMeta struct {
    BLSMessages   &[&msg.UnsignedMessage]  // array-mapped trie
    SECPMessages  &[&msg.SignedMessage]  // array-mapped trie
}

// Internal representation of a full block, with all messages.
type Block struct {
    Header        BlockHeader
    BLSMessages   [msg.UnsignedMessage]
    SECPMessages  [msg.SignedMessage]
}

// HACK: All of the below was duplicated from posting.id
// in order to get spec to compile. Check the actual source for details
type ElectionPoStVerifyInfo struct {
    Candidates  [PoStCandidate]
    Proof       PoStProof
    Randomness  PoStRandomness
}

type ChallengeTicketsCommitment struct {}  // see sector
type PoStCandidate struct {}  // see sector
type PoStRandomness struct {}  // see sector
type PoStProof struct {}

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import addr "github.com/filecoin-project/go-address"

type Ticket struct {
    VRFResult  filcrypto.VRFResult

    Output     Bytes                @(cached)
    DrawRandomness(round abi.ChainEpoch) Bytes
    ValidateSyntax() bool
    Verify(
        input           Bytes
        pk              filcrypto.VRFPublicKey
        minerActorAddr  addr.Address
    ) bool
}

Block syntax validation

Syntax validation refers to validation that may be performed on a block and its messages without reference to outside information such as the parent state tree.

An invalid block must not be transmitted or referenced as a parent.

A syntactically valid block header must decode into fields matching the type definition below.

A syntactically valid header must have:

  • between 1 and 5*ec.ExpectedLeaders Parents CIDs if Epoch is greater than zero (else empty Parents),
  • a non-negative ParentWeight,
  • a Miner address which is an ID-address,
  • a non-negative Epoch,
  • a positive Timestamp,
  • a Ticket with non-empty VRFResult,
  • ElectionPoStOutput containing:
    • a Candidates array with between 1 and EC.ExpectedLeaders values (inclusive),
    • a non-empty PoStRandomness field,
    • a non-empty Proof field,
  • a non-empty ForkSignal field.

A syntactically valid full block must have:

  • all referenced messages syntactically valid,
  • all referenced parent receipts syntactically valid,
  • the sum of the serialized sizes of the block header and included messages is no greater than block.BlockMaxSize,
  • the sum of the gas limit of all explicit messages is no greater than block.BlockGasLimit.
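The two aggregate limits can be sketched as a simple check; the `Msg` shape and the two limit constants here are assumed placeholders for the real `block.BlockMaxSize` and `block.BlockGasLimit` values:

```go
package main

import "fmt"

// Illustrative message shape: only the fields the limits depend on.
type Msg struct {
	SerializedSize int
	GasLimit       int64
}

// Assumed placeholder limits, not normative protocol parameters.
const (
	blockMaxSize  = 1 << 20
	blockGasLimit = 10_000_000_000
)

// syntacticLimitsOK checks the two aggregate limits: total serialized size
// (header plus messages) and total gas limit of explicit messages.
func syntacticLimitsOK(headerSize int, msgs []Msg) bool {
	totalSize := headerSize
	var totalGas int64
	for _, m := range msgs {
		totalSize += m.SerializedSize
		totalGas += m.GasLimit
	}
	return totalSize <= blockMaxSize && totalGas <= blockGasLimit
}

func main() {
	fmt.Println(syntacticLimitsOK(100, []Msg{{200, 1000}, {300, 2000}})) // true
}
```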

Note that validation of the block signature requires access to the miner worker address and public key from the parent tipset state, so signature validation forms part of semantic validation. Similarly, message signature validation requires lookup of the public key associated with each message’s From account actor in the block’s parent state.

Block semantic validation

Semantic validation refers to validation that requires reference to information outside the block header and messages themselves, in particular the parent tipset and state on which the block is built.

A semantically valid block must have:

  • Parents listed in lexicographic order of their header’s Ticket,
  • Parents all reference valid blocks and form a valid tipset,
  • ParentState matching the state tree produced by executing the parent tipset’s messages (as defined by the VM interpreter) against that tipset’s parent state,
  • ParentMessageReceipts identifying the receipt list produced by parent tipset execution, with one receipt for each unique message from the parent tipset,
  • ParentWeight matching the weight of the chain up to and including the parent tipset,
  • Epoch greater than that of its parents, and
    • not in the future according to the node’s local clock reading of the current epoch,
      • blocks with future epochs should not be rejected, but should not be evaluated (validated or included in a tipset) until the appropriate epoch
    • not farther in the past than the soft finality as defined by SPC EC Finality,
      • this rule only applies when receiving new gossip blocks (i.e. from the current chain head), not when syncing to the chain for the first time,
  • Miner that is active in the storage power table in the parent tipset state,
  • a Ticket derived from the minimum ticket from the parent tipset’s block headers,
    • Ticket.VRFResult validly signed by the Miner actor’s worker account public key,
  • ElectionPoStOutput yielding winning partial tickets that were generated validly,
    • ElectionPoSt.Randomness is well formed and appropriately drawn from a past tipset according to the PoStLookback,
    • ElectionPoSt.Proof is a valid proof verifying the generation of the ElectionPoSt.Candidates from the Miner's eligible sectors,
    • ElectionPoSt.Candidates contains well formed PoStCandidates each of which has a PartialTicket yielding a winning ChallengeTicket in Expected Consensus.
  • a Timestamp in seconds lying within the quantized epoch window implied by the genesis block’s timestamp and the block’s Epoch,
  • all SECP messages correctly signed by their sending actor’s worker account key,
  • a BLSAggregate signature that signs the array of CIDs of the BLS messages referenced by the block with their sending actor’s key.
  • a valid Signature over the block header’s fields from the block’s Miner actor’s worker account public key.

There is no semantic validation of the messages included in a block beyond validation of their signatures. If all messages included in a block are syntactically valid then they may be executed and produce a receipt.

A chain sync system may perform syntactic and semantic validation in stages in order to minimize unnecessary resource expenditure.
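As a worked example of one rule above, the quantized timestamp can be computed from the genesis timestamp and the block’s epoch; the epoch duration used here is an assumed placeholder, not a normative protocol parameter:

```go
package main

import "fmt"

// Assumed epoch duration in seconds; the real value is a protocol parameter.
const epochDurationSeconds = 25

// expectedTimestamp sketches the quantized-epoch rule: a block at a given
// epoch carries the timestamp implied by the genesis timestamp and the
// (assumed) fixed epoch duration.
func expectedTimestamp(genesisTime uint64, epoch uint64) uint64 {
	return genesisTime + epoch*epochDurationSeconds
}

func main() {
	fmt.Println(expectedTimestamp(1000, 4)) // 1100
}
```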

Tipset

Expected Consensus probabilistically elects multiple leaders in each epoch, meaning a Filecoin chain may contain zero or multiple blocks at each epoch (one per elected miner). Blocks from the same epoch are assembled into tipsets. The VM interpreter modifies the Filecoin state tree by executing all messages in a tipset (after de-duplication of identical messages included in more than one block).

Each block references a parent tipset and validates that tipset’s state, while proposing messages to be included for the current epoch. The state to which a new block’s messages apply cannot be known until that block is incorporated into a tipset. It is thus not meaningful to execute the messages from a single block in isolation: a new state tree is only known once all messages in that block’s tipset are executed.

A valid tipset contains a non-empty collection of blocks that have distinct miners and all specify identical:

  • Epoch
  • Parents
  • ParentWeight
  • StateRoot
  • ReceiptsRoot

The blocks in a tipset are canonically ordered by the lexicographic ordering of the bytes in each block’s ticket, breaking ties with the bytes of the CID of the block itself.
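
A sketch of this canonical ordering (BlockRef is a hypothetical reduction of the block header to the two fields the ordering needs):

```go
package main

import (
	"bytes"
	"sort"
)

// BlockRef is a hypothetical reduction of a block header to the two
// fields canonical tipset ordering depends on.
type BlockRef struct {
	Ticket []byte // the block's ticket bytes
	CID    []byte // the bytes of the block's CID
}

// SortTipsetBlocks orders blocks lexicographically by ticket bytes,
// breaking ties with the bytes of each block's CID.
func SortTipsetBlocks(blocks []BlockRef) {
	sort.Slice(blocks, func(i, j int) bool {
		if c := bytes.Compare(blocks[i].Ticket, blocks[j].Ticket); c != 0 {
			return c < 0
		}
		return bytes.Compare(blocks[i].CID, blocks[j].CID) < 0
	})
}
```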

Due to network propagation delay, it is possible for a miner in epoch N+1 to omit valid blocks mined at epoch N from their parent tipset. This does not make the newly generated block invalid; it does, however, reduce its weight and its chances of being part of the canonical chain in the protocol as defined by EC’s Chain Selection function.

Block producers are expected to coordinate how they select messages for inclusion in blocks in order to avoid duplicates and thus maximize their expected earnings from transaction fees (see Message Pool).

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"

type Tipset struct {
    BlockCIDs           [&block.BlockHeader]
    Blocks              [block.BlockHeader]

    Has(b block.Block)  bool                  @(cached)
    Parents             Tipset                @(cached)
    StateTree           st.StateTree          @(cached)
    Weight              block.ChainWeight     @(cached)
    Epoch               abi.ChainEpoch        @(cached)

    // Returns the largest timestamp from the tipset's blocks.
    LatestTimestamp()   clock.UnixTime        @(cached)
    // Returns the lexicographically smallest ticket from the tipset's blocks.
    MinTicket()         block.Ticket          @(cached)
}

Chain

A Chain is a sequence of tipsets, linked together. It is a single history of execution in the Filecoin blockchain. Randomness is drawn from the chain, as mentioned in Ticket Chain.

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type Chain struct {
    HeadTipset Tipset

    TipsetAtEpoch(epoch abi.ChainEpoch) Tipset
    RandomnessAtEpoch(epoch abi.ChainEpoch) abi.RandomnessSeed

    // called by StorageMiningSubsystem during block production
    GetTicketProductionRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed

    // called by StorageMiningSubsystem when sealing a sector
    GetSealRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed

    // called by StorageMiningSubsystem after sealing
    GetPoStChallengeRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed
}

// Checkpoint represents a particular block to use as a trust anchor
// in Consensus and ChainSync
//
// Note: a Block uniquely identifies a tipset (its parents).
// From here, we may consider many tipsets that _include_ Block,
// but we must indeed include Block itself, and must not consider
// tipsets that fork from Block.Parents without including Block.
type Checkpoint &block.BlockHeader

// SoftCheckpoint is a checkpoint that Filecoin nodes may use as they
// gain confidence in the blockchain. It is a unilateral checkpoint,
// and derived algorithmically from notions of probabilistic consensus
// and finality.
type SoftCheckpoint Checkpoint

// TrustedCheckpoint is a Checkpoint that is trusted by the broader
// Filecoin Network. These TrustedCheckpoints are arrived at through
// the higher level economic consensus that surrounds Filecoin.
// TrustedCheckpoints:
// - MUST be at least 200,000 blocks old (>1mo)
// - MUST be at least
// - MUST be widely known and accepted
// - MAY ship with Filecoin software implementations
// - MAY be propagated through other side-channel systems
// For more, see the Checkpoints section.
// TODO: consider renaming as EconomicCheckpoint
type TrustedCheckpoint Checkpoint
package chain

import (
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
)

// Returns the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) TipsetAtEpoch(epoch abi.ChainEpoch) Tipset {
	current := chain.HeadTipset()
	for current.Epoch() > epoch {
		current = current.Parents()
	}

	return current
}

// Draws randomness from the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) RandomnessAtEpoch(epoch abi.ChainEpoch) abi.RandomnessSeed {
	ts := chain.TipsetAtEpoch(epoch)
	return ts.MinTicket().DrawRandomness(epoch)
}

func (chain *Chain_I) GetTicketProductionRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
	return chain.RandomnessAtEpoch(epoch - node_base.SPC_LOOKBACK_TICKET)
}

func (chain *Chain_I) GetSealRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
	return chain.RandomnessAtEpoch(epoch - builtin.SPC_LOOKBACK_SEAL)
}

func (chain *Chain_I) GetPoStChallengeRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
	return chain.RandomnessAtEpoch(epoch - builtin.SPC_LOOKBACK_POST)
}

Chain Manager

The Chain Manager is a central component in the blockchain system. It tracks and updates competing subchains received by a given node in order to select the appropriate blockchain head: the latest block of the heaviest subchain it is aware of in the system.

In so doing, the chain manager is the central subsystem that handles bookkeeping for numerous other systems in a Filecoin node and exposes convenience methods for use by those systems, enabling systems to sample randomness from the chain for instance, or to see which block has been finalized most recently.

The chain manager interfaces and functions are included here, but we expand on important details below for clarity.

Chain Expansion
Incoming block reception

Once a block has been received and passes syntactic and semantic validation, it must be added to the local datastore, regardless of whether it is understood as the best tip at this point. Future blocks from other miners may be mined on top of it, and in that case we will want to have it around to avoid refetching.

Chain selection is a crucial component of how the Filecoin blockchain works. Every chain has an associated weight accounting for the number of blocks mined on it and hence the power (storage) those blocks track. It is always preferable to mine atop a heavier Tipset rather than a lighter one. A miner persisting on a lighter chain may be forgoing block rewards: the lighter chain is likely to be abandoned as other miners converge on a final chain, forfeiting any block rewards earned on it. For more on this, see chain selection in the Expected Consensus spec.

However, ahead of finality, a given subchain may be abandoned in favor of another, heavier one mined in a given round. In order to rapidly adapt to this, the chain manager must maintain and update all subchains being considered up to finality.

That is, for every incoming block, even if the incoming block is not added to the current heaviest tipset, the chain manager should add it to the appropriate subchain it is tracking, or keep track of it independently until either:

  • it is able to do so, through the reception of another block in that subchain
  • it is able to discard it, as that block was mined before finality

We give an example of how this could work in the block reception algorithm.

ChainTipsManager

The Chain Tips Manager is a subcomponent of Filecoin consensus that is technically up to the implementer, but since the pseudocode in previous sections references it, it is documented here for clarity.

The Chain Tips Manager is responsible for tracking all live tips of the Filecoin blockchain, and tracking what the current ‘best’ tipset is.

// Returns the ticket that is at round 'r' in the chain behind 'head'
func TicketFromRound(head Tipset, r Round) Ticket {}

// Returns the tipset that contains round r (Note: multiple rounds' worth of tickets may exist within a single block due to losing tickets being added to the eventually successfully generated block)
func TipsetFromRound(head Tipset, r Round) Tipset {}

// GetBestTipset returns the best known tipset. If the 'best' tipset hasn't changed, then this
// will return the previous best tipset.
func GetBestTipset() Tipset

// Adds the losing ticket to the chaintips manager so that blocks can be mined on top of it
func AddLosingTicket(parent Tipset, t Ticket)

Block Producer

Mining Blocks

A miner registered with the storage power actor may begin generating and checking election tickets if it has proven storage meeting the Minimum Miner Size threshold requirement.

In order to do so, the miner must be running chain validation, and be keeping track of the most recent blocks received. A miner’s new block will be based on parents from the previous epoch.

Block Creation

Producing a block for epoch H requires computing a tipset for epoch H-1 (or possibly a prior epoch, if no blocks were received for that epoch). Using the state produced by this tipset, a miner can scratch winning ElectionPoSt ticket(s). Armed with the requisite ElectionPoStOutput, as well as a new randomness ticket generated in this epoch, a miner can produce a new block.

See VM Interpreter - Message Invocation (Outside VM) for details of parent tipset evaluation, and Block for constraints on valid block header values.

To create a block, the eligible miner must compute a few fields:

  • Parents - the CIDs of the parent tipset’s blocks.
  • ParentWeight - the parent chain’s weight (see Chain Selection).
  • ParentState - the CID of the state root from the parent tipset state evaluation (see the VM Interpreter - Message Invocation (Outside VM)).
  • ParentMessageReceipts - the CID of the root of an AMT containing receipts produced while computing ParentState.
  • Epoch - the block’s epoch, derived from the Parents epoch and the number of epochs it took to generate this block.
  • Timestamp - a Unix timestamp, in seconds, generated at block creation.
  • Ticket - a new ticket generated from that in the prior epoch (see Ticket Generation).
  • Miner - the block producer’s miner actor address.
  • ElectionPoStVerifyInfo - The byproduct of running an ElectionPoSt yielding requisite on-chain information (see Election PoSt), namely:
    • An array of PoStCandidate objects, all of which include a winning partial ticket used to run leader election.
    • PoStRandomness used to challenge the miner’s sectors and generate the partial tickets.
    • A PoStProof snark output to prove that the partial tickets were correctly generated.
  • Messages - The CID of a TxMeta object containing the messages proposed for inclusion in the new block:
    • Select a set of messages from the mempool to include in the block, satisfying block size and gas limits
    • Separate the messages into BLS signed messages and secpk signed messages
    • TxMeta.BLSMessages: The CID of the root of an AMT comprising the bare UnsignedMessages
    • TxMeta.SECPMessages: the CID of the root of an AMT comprising the SignedMessages
  • BLSAggregate - The aggregated signature of all messages in the block that used BLS signing.
  • Signature - A signature with the miner’s worker account private key (must also match the ticket signature) over the block header’s serialized representation (with empty signature).

Note that the messages to be included in a block need not be evaluated in order to produce a valid block. A miner may wish to speculatively evaluate the messages anyway in order to optimize for including messages which will succeed in execution and pay the most gas.
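
One naive selection strategy, greedily taking the highest-paying messages until the block gas limit is reached, can be sketched as follows. PoolMsg and SelectMessages are illustrative names, not the normative algorithm; real selection must also respect per-sender nonce ordering and block byte-size limits.

```go
package main

import "sort"

// PoolMsg is a simplified mempool entry; a real entry also carries
// sender, nonce, payload, and signature.
type PoolMsg struct {
	GasPrice uint64 // fee paid per unit of gas
	GasLimit uint64 // gas the message may consume
}

// SelectMessages greedily picks the highest-priced messages whose gas
// limits fit within the block gas limit. This is a heuristic sketch,
// not the spec's required selection algorithm.
func SelectMessages(pool []PoolMsg, blockGasLimit uint64) []PoolMsg {
	sort.Slice(pool, func(i, j int) bool {
		return pool[i].GasPrice > pool[j].GasPrice
	})
	var selected []PoolMsg
	var used uint64
	for _, m := range pool {
		if used+m.GasLimit > blockGasLimit {
			continue // would exceed the block's gas limit; skip this message
		}
		used += m.GasLimit
		selected = append(selected, m)
	}
	return selected
}
```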

The block reward is not evaluated when producing a block. It is paid when the block is included in a tipset in the following epoch.

The block’s signature ensures integrity of the block after propagation, since unlike many PoW blockchains, a winning ticket is found independently of block generation.

Block Broadcast

An eligible miner broadcasts the completed block to the network and, assuming everything was done correctly, the network will accept it and other miners will mine on top of it, earning the miner a block reward!

Miners should output their valid block as soon as it is produced, otherwise they risk other miners receiving the block after the EPOCH_CUTOFF and not including it.

Block Rewards

TODO: Rework this.

Over the entire lifetime of the protocol, 1,400,000,000 FIL (TotalIssuance) will be given out to miners. Each of the miners who produced a block in a tipset will receive a block reward.

Note: Due to jitter in EC, and the gregorian calendar, there may be some error in the issuance schedule over time. This is expected to be small enough that it’s not worth correcting for. Additionally, since the payout mechanism is transferring from the network account to the miner, there is no risk of minting too much FIL.

TODO: Ensure that if a miner earns a block reward while undercollateralized, then min(blockReward, requiredCollateral-availableBalance) is garnished (transfered to the miner actor instead of the owner).

Message Pool

The Message Pool is a subsystem in the Filecoin blockchain system. The message pool acts as the interface between Filecoin nodes and a peer-to-peer network used for off-chain message transmission. It is used by nodes to maintain a set of messages to transmit to the Filecoin VM (for “on-chain” execution).

import addr "github.com/filecoin-project/go-address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

type MessagePoolSubsystem struct {
    // needs access to:
    // - BlockchainSubsystem
    //   - needs access to StateTree
    //   - needs access to Messages mined into blocks (probably past finality)
    //     to remove from the MessagePool
    // - NetworkSubsystem
    //   - needs access to MessagePubsub
    //
    // Important remaining questions:
    // - how does BlockchainSubsystem.BlockReceiver handle asking for messages?
    // - how do we note messages are now part of the blockchain
    //   - how are they cleared from the mempool
    // - do we need to have some sort of purge?

    // Stats returns information about the MessagePool contents.
    Stats() MessagePoolStats

    // FindMessage receives a descriptor query q, and returns a set of
    // messages currently in the mempool that match the Query constraints.
    // q may have all, any, or no constraints specified.
    // FindMessage(q MessageQuery) union {
    //  [base.Message],
    //  Error
    // }

    // MostProfitableMessages returns messages that are most profitable
    // to mine for this miner.
    //
    // Note: This is where algorithms about choosing best messages given
    //       many leaders should go.
    GetMostProfitableMessages(miner addr.Address) [msg.SignedMessage]

    // MessageSyncer manages incoming and outgoing messages
    Syncer MessageSyncer
}

type MessagePoolStats struct {
    // Size is the amount of messages in the MessagePool
    Size UInt
}

// MessageQuery is a descriptor used to find messages matching one or more
// of the constraints specified.
type MessageQuery struct {
    /*
  From   base.Address
  To     base.Address
  Method ActorMethodId
  Params ActorMethodParams

  ValueMin    TokenAmount
  ValueMax    TokenAmount
  GasPriceMin TokenAmount
  GasPriceMax TokenAmount
  GasLimitMin TokenAmount
  GasLimitMax TokenAmount
  */
}

Clients that use a message pool include:

  • storage market provider and client nodes - for transmission of deals on chain
  • storage miner nodes - for transmission of PoSts, sector commitments, deals, and other operations tracked on chain
  • verifier nodes - for transmission of potential faults on chain
  • relayer nodes - for forwarding and discarding messages appropriately.

The message pool subsystem is made of two components:

TODOs:

  • discuss how messages are meant to propagate slowly/async
  • explain algorithms for choosing profitable txns

Message Syncer

TODO:

  • explain how the message syncer works
  • include the message syncer code

Message Propagation

Messages are propagated over the libp2p pubsub channel /fil/messages. On this channel, every serialised SignedMessage is announced (see Message).

Upon receiving the message, its validity must be checked: the signature must be valid, and the account in question must have enough funds to cover the actions specified. If the message is not valid it should be dropped and must not be forwarded.
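
A sketch of this validity check (Incoming, CheckIncoming and the balance lookup are hypothetical names; signature verification is stubbed as a boolean):

```go
package main

import "errors"

// Incoming is a simplified stand-in for a SignedMessage received on the
// /fil/messages channel.
type Incoming struct {
	From     string // sending account address
	Value    uint64 // funds the message would transfer
	SigValid bool   // result of signature verification, stubbed here
}

// ErrInvalid marks a message that must be dropped and not forwarded.
var ErrInvalid = errors.New("invalid message: drop, do not forward")

// CheckIncoming applies the two checks above: the signature must verify,
// and the sending account must have enough funds to cover the message.
func CheckIncoming(m Incoming, balanceOf func(addr string) uint64) error {
	if !m.SigValid {
		return ErrInvalid
	}
	if balanceOf(m.From) < m.Value {
		return ErrInvalid
	}
	return nil // valid: keep in the pool and forward to peers
}
```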

Message Storage

TODO:

  • give sample algorithm for miner message selection in block production (to avoid dups)
  • give sample algorithm for message storage caching/purging policies.

ChainSync - synchronizing the Blockchain

What is blockchain synchronization?

Blockchain synchronization (“sync”) is a key part of a blockchain system. It handles retrieval and propagation of blocks and transactions (messages), and is thus in charge of distributed state replication. This process is security critical – problems here can be catastrophic to the operation of a blockchain.

What is ChainSync?

ChainSync is the protocol Filecoin uses to synchronize its blockchain. It is specific to Filecoin’s choices in state representation and consensus rules, but is general enough that it can serve other blockchains. ChainSync is a group of smaller protocols, which handle different parts of the sync process.

Terms and Concepts

  • LastCheckpoint the last hard social-consensus oriented checkpoint that ChainSync is aware of. This consensus checkpoint defines the minimum finality, and a minimum of history to build on. ChainSync takes LastCheckpoint on faith, and builds on it, never switching away from its history.
  • TargetHeads a list of BlockCIDs that represent blocks at the fringe of block production. These are the newest and best blocks ChainSync knows about. They are “target” heads because ChainSync will try to sync to them. This list is sorted by “likelihood of being the best chain” (eg for now, simply ChainWeight)
  • BestTargetHead the single best chain head BlockCID to try to sync to. This is the first element of TargetHeads
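
Under the simple ChainWeight ordering above, selecting BestTargetHead can be sketched as follows (Head is an illustrative pairing of a BlockCID with the weight behind it):

```go
package main

import "sort"

// Head pairs a candidate head's BlockCID with the chain weight behind
// it; both field types are illustrative simplifications.
type Head struct {
	BlockCID string
	Weight   uint64
}

// BestTargetHead sorts TargetHeads by likelihood of being the best
// chain (here, simply ChainWeight, descending) and returns the first.
func BestTargetHead(targetHeads []Head) Head {
	sort.Slice(targetHeads, func(i, j int) bool {
		return targetHeads[i].Weight > targetHeads[j].Weight
	})
	return targetHeads[0]
}
```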

ChainSync Summary

At a high level, ChainSync does the following:

  • Part 1: Verify internal state (INIT state below)
    • SHOULD verify data structures and validate local chain
    • Resource expensive verification MAY be skipped at nodes’ own risk
  • Part 2: Bootstrap to the network (BOOTSTRAP)
    • Step 1. Bootstrap to the network, and acquire a “secure enough” set of peers (more details below)
    • Step 2. Bootstrap to the BlockPubsub channels
    • Step 3. Listen and serve on Graphsync
  • Part 3: Synchronize trusted checkpoint state (SYNC_CHECKPOINT)
    • Step 1. Start with a TrustedCheckpoint (defaults to GenesisCheckpoint).
    • Step 2. Get the block it points to, and that block’s parents
    • Step 3. Graphsync the StateTree
  • Part 4: Catch up to the chain (CHAIN_CATCHUP)
    • Step 1. Maintain a set of TargetHeads (BlockCIDs), and select the BestTargetHead from it
    • Step 2. Synchronize to the latest heads observed, validating blocks towards them (requesting intermediate points)
    • Step 3. As validation progresses, TargetHeads and BestTargetHead will likely change, as new blocks at the production fringe will arrive, and some target heads or paths to them may fail to validate.
    • Step 4. Finish when node has “caught up” with BestTargetHead (retrieved all the state, linked to local chain, validated all the blocks, etc).
  • Part 5: Stay in sync, and participate in block propagation (CHAIN_FOLLOW)
    • Step 1. If security conditions change, go back to Part 4 (CHAIN_CATCHUP)
    • Step 2. Receive, validate, and propagate received Blocks
    • Step 3. Now with greater certainty of having the best chain, finalize Tipsets, and advance chain state.

libp2p Network Protocols

As a networking-heavy protocol, ChainSync makes heavy use of libp2p. In particular, we use three sets of protocols:

  • libp2p.PubSub a family of publish/subscribe protocols to propagate recent Blocks. The concrete protocol choice impacts ChainSync's effectiveness, efficiency, and security dramatically. For Filecoin v1.0 we will use libp2p.Gossipsub, a recent libp2p protocol that combines features and learnings from many excellent PubSub systems. In the future, Filecoin may use other PubSub protocols. Important Note: it is entirely possible for Filecoin Nodes to run multiple versions simultaneously. That said, this specification requires that Filecoin nodes MUST connect and participate in the main channel, using libp2p.Gossipsub.
  • libp2p.PeerDiscovery a family of discovery protocols, to learn about peers in the network. This is especially important for security because network “Bootstrap” is a difficult problem in peer-to-peer networks. The set of peers we initially connect to may completely dominate our awareness of other peers, and therefore all state. We use a union of PeerDiscovery protocols as each by itself is not secure or appropriate for users’ threat models. The union of these provides a pragmatic and effective solution. Discovery protocols marked as required MUST be included in implementations and will be provided by implementation teams. Protocols marked as optional MAY be provided by implementation teams but can be built independently by third parties to augment bootstrap security.
  • libp2p.DataTransfer a family of protocols for transferring data. Filecoin Nodes must run libp2p.Graphsync.

More concretely, we use these protocols:

  • libp2p.PeerDiscovery
    • (required) libp2p.BootstrapList a protocol that uses a persistent and user-configurable list of semi-trusted bootstrap peers. The default list includes a set of peers semi-trusted by the Filecoin Community.
    • (optional) libp2p.KademliaDHT a dht protocol that enables random queries across the entire network
    • (required) libp2p.Gossipsub a pub/sub protocol that includes “prune peer exchange” by default, disseminating peer info as part of operation
    • (optional) libp2p.PersistentPeerstore a connectivity component that keeps persistent information about peers observed in the network throughout the lifetime of the node. This is useful because we resume and continually improve Bootstrap security.
    • (optional) libp2p.DNSDiscovery to learn about peers via DNS lookups to semi-trusted peer aggregators
    • (optional) libp2p.HTTPDiscovery to learn about peers via HTTP lookups to semi-trusted peer aggregators
    • (optional) libp2p.PEX a general use peer exchange protocol distinct from pubsub peer exchange for 1:1 adhoc peer exchange
  • libp2p.PubSub
    • (required) libp2p.Gossipsub the concrete libp2p.PubSub protocol ChainSync uses
  • libp2p.DataTransfer
    • (required) libp2p.Graphsync the data transfer protocol nodes must support for providing blockchain and user data
    • (optional) BlockSync a blockchain data transfer protocol that can be used by some nodes

Subcomponents

Aside from libp2p, ChainSync uses or relies on the following components:

  • Libraries:
    • ipld data structures, selectors, and protocols
      • ipld.GraphStore local persistent storage for chain datastructures
      • ipld.Selector a way to express requests for chain data structures
      • ipfs.GraphSync a general-purpose ipld datastructure syncing protocol
  • Data Structures:
    • Data structures in the chain package: Block, Tipset, Chain, Checkpoint ...
    • chainsync.BlockCache a temporary cache of blocks, to constrain resources expended
    • chainsync.AncestryGraph a datastructure to efficiently link Blocks, Tipsets, and PartialChains
    • chainsync.ValidationGraph a datastructure for efficient and secure validation of Blocks and Tipsets

Graphsync in ChainSync

ChainSync is written in terms of Graphsync. ChainSync adds blockchain and filecoin-specific synchronization functionality that is critical for Filecoin security.

Rate Limiting Graphsync responses (SHOULD)

When running Graphsync, Filecoin nodes must respond to graphsync queries. Filecoin requires nodes to provide critical data structures to others, otherwise the network will not function. During ChainSync, it is in operators’ interests to provide data structures critical to validating, following, and participating in the blockchain they are on. However, this has limitations, and some level of rate limiting is critical for maintaining security in the presence of attackers who might issue large Graphsync requests to cause DOS.

We recommend the following:

  • Set and enforce batch size rate limits. Force selectors to be shaped like: LimitedBlockIpldSelector(blockCID, BatchSize) for a single constant BatchSize = 1000. Nodes may push for this equilibrium by only providing BatchSize objects in responses, even for pulls much larger than BatchSize. This forces subsequent pulls to be run, re-rooted appropriately, and hints at other parties that they should be requesting with that BatchSize.
  • Force all Graphsync queries for blocks to be aligned along cacheable bounderies. In conjunction with a BatchSize, implementations should aim to cache the results of Graphsync queries, so that they may propagate them to others very efficiently. Aligning on certain boundaries (eg specific ChainEpoch limits) increases the likelihood many parties in the network will request the same batches of content. Another good cacheable boundary is the entire contents of a Block (BlockHeader, Messages, Signatures, etc).
  • Maintain per-peer rate-limits. Use bandwidth usage to decide whether to respond and how much on a per-peer basis. Libp2p already tracks bandwidth usage in each connection. This information can be used to impose rate limits in Graphsync and other Filecoin protocols.
  • Detect and react to DOS: restrict operation. The safest implementations will likely detect and react to DOS attacks. Reactions could include:
    • Smaller Graphsync.BatchSize limits
    • Fewer connections to other peers
    • Rate limit total Graphsync bandwidth
    • Assign Graphsync bandwidth based on a peer priority queue
    • Disconnect from and do not accept connections from unknown peers
    • Introspect Graphsync requests and filter/deny/rate limit suspicious ones
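
The per-peer rate limits above can be sketched with a minimal token-bucket budget per peer. All names here are illustrative; a production limiter would refill budgets on a wall-clock schedule and feed in libp2p's per-connection bandwidth metrics.

```go
package main

// PeerLimiter is a minimal token-bucket sketch of per-peer rate
// limiting for Graphsync responses.
type PeerLimiter struct {
	tokens map[string]uint64 // remaining response budget per peer, in bytes
}

func NewPeerLimiter() *PeerLimiter {
	return &PeerLimiter{tokens: map[string]uint64{}}
}

// Refill grants a peer a fresh budget for the next accounting window.
func (l *PeerLimiter) Refill(peer string, budget uint64) {
	l.tokens[peer] = budget
}

// Allow reports whether a response of `size` bytes may be sent to peer,
// deducting from the peer's budget if so.
func (l *PeerLimiter) Allow(peer string, size uint64) bool {
	if l.tokens[peer] < size {
		return false // over budget: defer or deny this response
	}
	l.tokens[peer] -= size
	return true
}
```
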

Previous BlockSync protocol

Prior versions of this spec recommended a BlockSync protocol. This protocol definition is available here. Filecoin nodes are libp2p nodes, and therefore may run a variety of other protocols, including this BlockSync protocol. As with anything else in Filecoin, nodes MAY opt to use additional protocols to achieve the results. That said, Nodes MUST implement the version of ChainSync as described in this spec in order to be considered implementations of Filecoin. Test suites will assume this protocol.

ChainSync State Machine

ChainSync uses the following conceptual state machine. Since this is a conceptual state machine, implementations MAY deviate from implementing precisely these states, or dividing them strictly. Implementations MAY blur the lines between the states. If so, implementations MUST ensure security of the altered protocol.

State Machine:

ChainSync State Machine
ChainSync FSM: INIT
  • beginning state. no network connections, not synchronizing.
  • local state is loaded: internal data structures (eg chain, cache) are loaded
  • LastTrustedCheckpoint is set to the latest network-wide accepted TrustedCheckpoint
  • FinalityTipset is set to finality achieved in a prior protocol run.
    • Default: If no later FinalityTipset has been achieved, set FinalityTipset to LastTrustedCheckpoint
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is whatever was loaded from prior executions (worst case is LastTrustedCheckpoint)
  • security conditions to transition out:
    • local state and data structures SHOULD be verified to be correct
      • this means validating any parts of the chain or StateTree the node has, from LastTrustedCheckpoint on.
    • LastTrustedCheckpoint is well-known across the Filecoin Network to be a true TrustedCheckpoint
      • this SHOULD NOT be verified in software, it SHOULD be verified by operators
      • Note: we ALWAYS have at least one TrustedCheckpoint, the GenesisCheckpoint.
  • transitions out:
    • once done verifying things: move to BOOTSTRAP
ChainSync FSM: BOOTSTRAP
  • network.Bootstrap(): establish connections to peers until we satisfy security requirement
    • for better security, use many different libp2p.PeerDiscovery protocols
  • BlockPubsub.Bootstrap(): establish connections to BlockPubsub peers
    • The subscription is for both peer discovery and to start selecting best heads. Listening on pubsub from the start keeps the node informed about potential head changes.
  • Graphsync.Serve(): set up a Graphsync service, that responds to others’ queries
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is whatever was loaded from prior executions (worst case is LastTrustedCheckpoint).
  • security conditions to transition out:
    • Network connectivity MUST have reached the security level acceptable for ChainSync
    • BlockPubsub connectivity MUST have reached the security level acceptable for ChainSync
    • “on time” blocks MUST be arriving through BlockPubsub
  • transitions out:
    • once bootstrap is deemed secure enough:
      • if node does not have the Blocks or StateTree corresponding to LastTrustedCheckpoint: move to SYNC_CHECKPOINT
      • otherwise: move to CHAIN_CATCHUP
ChainSync FSM: SYNC_CHECKPOINT
  • While in this state:
    • ChainSync is well-bootstrapped, but does not yet have the Blocks or StateTree for LastTrustedCheckpoint
    • ChainSync issues Graphsync requests to its peers randomly for the Blocks and StateTree for LastTrustedCheckpoint:
      • ChainSync's counterparts in other peers MUST provide the state tree.
      • It is only semi-rational to do so, so ChainSync may have to try many peers.
      • Some of these requests MAY fail.
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is the available Blocks and StateTree for LastTrustedCheckpoint.
  • Important Notes:
    • ChainSync needs to fetch several blocks: the Block pointed at by LastTrustedCheckpoint, and its direct Block.Parents.
    • Nodes only need hashing to validate these Blocks and StateTrees – no block validation or state machine computation is needed.
    • The initial value of LastTrustedCheckpoint is GenesisCheckpoint, but it MAY be a value later in Chain history.
    • LastTrustedCheckpoint enables efficient syncing by making the implicit economic consensus of chain history explicit.
    • By allowing fetching of the StateTree of LastTrustedCheckpoint via Graphsync, ChainSync can yield much more efficient syncing than comparable blockchain synchronization protocols, as syncing and validation can start there.
    • Nodes DO NOT need to validate the chain from GenesisCheckpoint. LastTrustedCheckpoint MAY be a value later in Chain history.
    • Nodes DO NOT need to but MAY sync earlier StateTrees than LastTrustedCheckpoint as well.
  • Pseudocode 1: a basic version of SYNC_CHECKPOINT:
    func (c *ChainSync) SyncCheckpoint() {
        for !c.HasCompleteStateTreeFor(c.LastTrustedCheckpoint) {
            selector := ipldselector.SelectAll(c.LastTrustedCheckpoint)
            c.Graphsync.Pull(c.Peers, selector, c.IpldStore)
            // Pull SHOULD NOT pull what c.IpldStore already has (check first)
            // Pull SHOULD pull from different peers simultaneously
            // Pull SHOULD be efficient (try different parts of the tree from many peers)
            // Graphsync implementations may not offer these features. These features
            // can be implemented on top of a graphsync that only pulls from a single
            // peer and does not check local store first.
        }
        c.ChainCatchup() // on to CHAIN_CATCHUP
    }
    
  • security conditions to transition out:
    • StateTree for LastTrustedCheckpoint MUST be stored locally and verified (hashing is enough)
  • transitions out:
    • once node receives and verifies complete StateTree for LastTrustedCheckpoint: move to CHAIN_CATCHUP
ChainSync FSM: CHAIN_CATCHUP
  • While in this state:
    • ChainSync is well-bootstrapped, and has an initial trusted StateTree to start from.
    • ChainSync is receiving latest Blocks from BlockPubsub
    • ChainSync starts fetching and validating blocks
    • ChainSync has unvalidated blocks between ChainSync.FinalityTipset and ChainSync.TargetHeads
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has:
      • FinalityTipset does not change.
      • No new blocks are reported to consumers/users of ChainSync yet.
      • The chain state provided is the available Blocks and StateTree for all available epochs, especially the FinalityTipset.
      • Finality must not move forward here: there are serious attack vectors where a node can be forced onto the wrong fork if finality advances before validation is complete up to the block production fringe.
    • Validation must advance, all the way to the block production fringe:
      • Validate the whole chain, from FinalityTipset to BestTargetHead.
      • The node may reach BestTargetHead only to find out it was invalid; it then has to update BestTargetHead to the next best head and sync to it (without having advanced FinalityTipset, as otherwise it may end up on the wrong fork).
  • security conditions to transition out:
    • Gaps between ChainSync.FinalityTipset ... ChainSync.BestTargetHead have been closed:
      • All Blocks and their content MUST be fetched, stored, linked, and validated locally. This includes BlockHeaders, Messages, etc.
      • Bad heads have been expunged from ChainSync.TargetHeads. Bad heads include heads that initially seemed good but turned out invalid, or heads that ChainSync has failed to connect (ie. cannot fetch ancestors connecting back to ChainSync.FinalityTipset within a reasonable amount of time).
      • All blocks between ChainSync.FinalityTipset ... ChainSync.TargetHeads have been validated. This means all blocks before the best heads.
    • Not under a temporary network partition
  • transitions out:
    • once gaps between ChainSync.FinalityTipset ... ChainSync.TargetHeads are closed: move to CHAIN_FOLLOW
    • (Moving to CHAIN_FOLLOW while still 1-2 blocks behind in validation may be acceptable.
      • We do not know we have the right head until we validate it, so if other heads of similar height are right/better, we will not know until then.)
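The catch-up loop above (validate toward the best head, expunge it if it proves invalid, and retry the next best without advancing finality) can be sketched as follows. `Head`, `validateChainTo`, and `catchUp` are illustrative names, not part of the spec's code:

```go
package main

import "fmt"

// Head is a placeholder for a candidate chain head from TargetHeads.
type Head string

// validateChainTo stands in for fetching and fully validating the chain
// from FinalityTipset up to head h.
func validateChainTo(h Head, valid map[Head]bool) bool { return valid[h] }

// catchUp walks TargetHeads best-first, returning the first head whose
// chain validates fully, or "" if none does. Bad heads are simply
// skipped (expunged); FinalityTipset is never advanced in this loop.
func catchUp(targetHeads []Head, valid map[Head]bool) Head {
	for _, h := range targetHeads {
		if validateChainTo(h, valid) {
			return h // gaps closed: safe to move to CHAIN_FOLLOW
		}
		// head turned out invalid: try the next best one
	}
	return ""
}

func main() {
	valid := map[Head]bool{"B": true}
	// "A" is the best candidate by weight but proves invalid, so the
	// node falls back to "B" before advancing finality.
	fmt.Println(catchUp([]Head{"A", "B"}, valid)) // B
}
```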
ChainSync FSM: CHAIN_FOLLOW
  • While in this state:
    • ChainSync is well-bootstrapped, and has an initial trusted StateTree to start from.
    • ChainSync fetches and validates blocks.
    • ChainSync is receiving and validating latest Blocks from BlockPubsub
    • ChainSync DOES NOT have unvalidated blocks between ChainSync.FinalityTipset and ChainSync.TargetHeads
    • ChainSync MUST drop back to another state if security conditions change.
    • Keep a set of gap measures:
      • BlockGap is the number of remaining blocks to validate between the Validated blocks and BestTargetHead.
        • (i.e. how many blocks we still need to validate to have validated BestTargetHead; does not include null blocks)
      • EpochGap is the number of epochs between the latest validated block, and BestTargetHead (includes null blocks).
      • MaxBlockGap = 2: the number of blocks ChainSync may fall behind before switching back to CHAIN_CATCHUP (does not include null blocks).
      • MaxEpochGap = 10: the number of epochs ChainSync may fall behind before switching back to CHAIN_CATCHUP (includes null blocks).
  • Chain State and Finality:
    • In this state, the chain MUST advance as all the blocks up to BestTargetHead are validated.
    • New blocks are finalized as they cross the finality threshold (ValidG.Heads[0].ChainEpoch - FinalityLookback)
    • New finalized blocks are reported to consumers.
    • The chain state provided includes the Blocks and StateTree for the Finality epoch, as well as candidate Blocks and StateTrees for unfinalized epochs.
  • security conditions to transition out:
    • Temporary network partitions (see Detecting Network Partitions).
    • Encounter gaps of >MaxBlockGap or >MaxEpochGap between Validated set and a new ChainSync.BestTargetHead
  • transitions out:
    • if a temporary network partition is detected: move to CHAIN_CATCHUP
    • if BlockGap > MaxBlockGap: move to CHAIN_CATCHUP
    • if EpochGap > MaxEpochGap: move to CHAIN_CATCHUP
    • if node is shut down: move to INIT
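The gap measures and fall-back checks above can be sketched as follows; MaxBlockGap and MaxEpochGap use the spec's values, while the gap functions and example numbers are illustrative placeholders:

```go
package main

import "fmt"

// Illustrative gap bookkeeping for CHAIN_FOLLOW.
const (
	MaxBlockGap = 2  // blocks ChainSync may trail BestTargetHead by (no null blocks)
	MaxEpochGap = 10 // epochs ChainSync may trail by (includes null blocks)
)

// blockGap counts blocks still to validate up to BestTargetHead.
func blockGap(validatedBlocks, targetBlocks int) int {
	return targetBlocks - validatedBlocks
}

// epochGap counts epochs between the latest validated block and BestTargetHead.
func epochGap(validatedEpoch, targetEpoch int) int {
	return targetEpoch - validatedEpoch
}

// shouldFallBack reports whether ChainSync must drop back to CHAIN_CATCHUP.
func shouldFallBack(bGap, eGap int) bool {
	return bGap > MaxBlockGap || eGap > MaxEpochGap
}

func main() {
	fmt.Println(shouldFallBack(blockGap(10, 11), epochGap(100, 103))) // false: within limits
	fmt.Println(shouldFallBack(blockGap(10, 14), epochGap(100, 103))) // true: BlockGap > 2
}
```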

Block Fetching, Validation, and Propagation

Notes on changing TargetHeads while syncing
  • TargetHeads changes while syncing, as ChainSync must be aware of the best heads at any time. Reorgs happen, our first set of peers could have been bad, and we keep discovering others.
    • The Hello protocol is good, but it is polling; unless a node is constantly polling, it will not see all the heads.
    • BlockPubsub gives us the realtime view into what’s actually going on.
    • Weight can also be close between 2+ possible chains (long-forked), and ChainSync must select the right one (which we may not be able to distinguish until validating all the way).
  • Fetching + validation are strictly faster per round on average than block production/block time (if they are not, the node will always fall behind), so we definitely catch up eventually (and even quickly). The last couple of rounds can be close (“almost got it, almost got it, there”).
General notes on fetching Blocks
  • ChainSync selects and maintains a set of the most likely heads to be correct from among those received via BlockPubsub. As more blocks are received, the set of TargetHeads is reevaluated.
  • ChainSync fetches Blocks, Messages, and StateTree through the Graphsync protocol.
  • ChainSync maintains sets of Blocks/Tipsets in Graphs (see ChainSync.id)
  • ChainSync gathers a list of TargetHeads from BlockPubsub, sorted by likelihood of being the best chain (see below).
  • ChainSync makes requests for chains of BlockHeaders to close gaps between TargetHeads
  • ChainSync forms partial unvalidated chains of BlockHeaders, from those received via BlockPubsub, and those requested via Graphsync.
  • ChainSync attempts to form fully connected chains of BlockHeaders, starting from the StateTree, toward observed Heads.
  • ChainSync minimizes resource expenditures to fetch and validate blocks, to protect against DOS attack vectors. ChainSync employs Progressive Block Validation, validating different facets at different stages of syncing.
  • ChainSync delays syncing Messages until they are needed. Much of the structure of the partial chains can be checked and used to make syncing decisions without fetching the Messages.
Progressive Block Validation
  • Blocks may be validated in progressive stages, in order to minimize resource expenditure.

  • Validation computation is considerable, and a serious DOS attack vector.

  • Secure implementations must carefully schedule validation and minimize the work done by pruning blocks without validating them fully.

  • ChainSync SHOULD keep a cache of unvalidated blocks (ideally sorted by likelihood of belonging to the chain), and delete unvalidated blocks when they are passed by FinalityTipset, or when ChainSync is under significant resource load.

  • These stages can be used partially across many blocks in a candidate chain, in order to prune out clearly bad blocks long before actually doing the expensive validation work.

  • Progressive Stages of Block Validation

    • BV0 - Syntax: Serialization, typing, value ranges.
    • BV1 - Plausible Consensus: Plausible miner, weight, and epoch values (e.g. from chain state at b.ChainEpoch - consensus.LookbackParameter).
    • BV2 - Block Signature: Valid signature from the block’s miner.
    • BV3 - ElectionPoSt: Correct PoSt with a winning ticket.
    • BV4 - Chain Ancestry and Finality: Verify the block links back to a trusted chain, not prior to finality.
    • BV5 - Message Signatures: Valid signatures on the block’s messages.
    • BV6 - State Tree: Parent tipset message execution produces the claimed state tree root and receipts.
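The staged pruning described above can be sketched as a short pipeline: each stage runs only if all cheaper stages passed, so clearly bad blocks are dropped before any expensive work. The Block type and its boolean fields are placeholders standing in for the real validation routines:

```go
package main

import "fmt"

// Placeholder block whose booleans stand in for the real per-stage checks.
type Block struct {
	SyntaxOK, PlausibleOK, BlockSigOK, PoStOK, AncestryOK, MsgSigsOK, StateTreeOK bool
}

// validateProgressive runs stages cheapest-first and returns the first
// failing stage name, or "valid". Later (more expensive) stages never run
// for a block that fails an earlier one.
func validateProgressive(b Block) string {
	stages := []struct {
		name string
		ok   bool
	}{
		{"syntax", b.SyntaxOK},
		{"plausible-consensus", b.PlausibleOK},
		{"block-signature", b.BlockSigOK},
		{"election-post", b.PoStOK},
		{"chain-ancestry", b.AncestryOK},
		{"message-signatures", b.MsgSigsOK},
		{"state-tree", b.StateTreeOK},
	}
	for _, s := range stages {
		if !s.ok {
			return s.name // prune here; skip all costlier stages
		}
	}
	return "valid"
}

func main() {
	// A well-formed header with a bad signature is pruned before any
	// PoSt, message, or state-tree work is done.
	b := Block{SyntaxOK: true, PlausibleOK: true}
	fmt.Println(validateProgressive(b)) // block-signature
}
```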

Notes:

  • In CHAIN_CATCHUP, if a node is receiving/fetching hundreds or thousands of BlockHeaders, validating signatures can be very expensive, and can be deferred in favor of other validation. (With lots of BlockHeaders coming in through the network pipe, we do not want to be bottlenecked on signature verification; other, cheaper checks (BV0, BV1) can help drop bad blocks on the floor faster.)
  • In CHAIN_FOLLOW, we’re not receiving thousands of headers; we’re receiving maybe a dozen or two dozen packets in a few seconds. We receive the CID with the signature and miner address first (ideally fitting in one packet), and can afford to (a) check whether we already have the CID (if so, we’re done, cheaply), or (b) if not, check whether the signature is correct before fetching the header (an expensive computation, but checking one signature is much faster than checking a ton). In practice, which to do likely depends on miner trade-offs. We’ll recommend one strategy, but let miners decide, because one or the other may be much more effective depending on their hardware, their bandwidth limitations, or their propensity to getting DoSed.
Progressive Block Propagation (or BlockSend)
  • In order to make Block propagation more efficient, we trade off network round trips for bandwidth usage.
  • Motivating observations:
    • Block propagation is one of the most security critical points of the whole protocol.
    • Bandwidth usage during Block propagation is the biggest rate limiter for network scalability.
    • The time it takes for a Block to propagate to the whole network is a critical factor in determining a secure BlockTime
    • Blocks propagating through the network should take as few sequential roundtrips as possible, as these roundtrips impose serious block time delays. However, interleaved roundtrips may be fine. Meaning that block.CIDs may be propagated on their own, without the header, then the header without the messages, then the messages.
    • Blocks will propagate over a libp2p.PubSub. libp2p.PubSub.Messages will most likely arrive multiple times at a node. Therefore, using only the block.CID here could make this very cheap in bandwidth (more expensive in round trips)
    • Blocks in a single epoch may include the same Messages, and duplicate transfers can be avoided
    • Messages propagate through their own MessagePubsub, and nodes have a significant probability of already having a large fraction of the messages in a block. Since messages are the bulk of the size of a Block, this can present great bandwidth savings.
  • Progressive Steps of Block Propagation
    • IMPORTANT NOTES:
      • these can be effectively pipelined. The receiver is in control of what to pull, and when. It is up to them to decide when to trade off RTTs for bandwidth.
      • If the sender is propagating the block at all to receiver, it is in their interest to provide the full content to receiver when asked. Otherwise the block may not get included at all.
      • Lots of security assumptions here – this needs to be hyper verified, in both spec and code.
      • sender is a filecoin node running ChainSync, propagating a block via Gossipsub (as the originator, as another peer in the network, or just a Gossipsub router).
      • receiver is the local filecoin node running ChainSync, trying to get the blocks.
      • for receiver to Pull things from sender, receiver must connect to sender. Usually sender is sending to receiver because of the Gossipsub propagation rules. receiver could choose to Pull from any other node they are connected to, but it is most likely that sender will have the needed information, and sender may be better connected in the network.
    • Step 1. (sender) Push BlockHeader:
      • sender sends block.BlockHeader to receiver via Gossipsub:
        • bh := Gossipsub.Send(h block.BlockHeader)
        • This is a light-ish object (<4KB).
      • receiver receives bh.
        • This has many fields that can be validated before pulling the messages. (See Progressive Block Validation).
        • BV0, BV1, BV2, and BV3 validation takes place before propagating bh to other nodes.
        • receiver MAY receive many advertisements for each winning block in an epoch in quick succession. This is because (a) everyone wants propagation to be as fast as possible, (b) everyone wants those network advertisements to be as light as reasonable, (c) we want to enable receiver to choose whom to ask for the block (usually the first party to advertise it, and that is what the spec will recommend), and (d) we want to be able to fall back to asking others if that fails (fail = do not get it within ~1s).
    • Step 2. (receiver) Pull MessageCids:
      • upon receiving bh, receiver checks whether it already has the full block for bh.BlockCID. if not:
        • receiver requests bh.MessageCids from sender:
          • bm := Graphsync.Pull(sender, SelectAMTCIDs(bh.Messages))
    • Step 3. (receiver) Pull Messages:
      • if receiver DOES NOT already have all the Messages for bh.BlockCID, then:
        • if receiver has some of the messages:
          • receiver requests missing Messages from sender:
            • Graphsync.Pull(sender, SelectAll(bm[3], bm[10], bm[50], ...)) or
            • for m in bm {
                Graphsync.Pull(sender, SelectAll(m))
              }
              
        • if receiver does not have any of the messages (default safe but expensive thing to do):
          • receiver requests all Messages from sender:
            • Graphsync.Pull(sender, SelectAll(bh.Messages))
        • (This is the largest amount of stuff)
    • Step 4. (receiver) Validate Block:
      • the only remaining thing to do is to complete Block Validation.
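Steps 2-3 hinge on pulling only what the receiver is missing. A minimal sketch of that bookkeeping, with placeholder types standing in for real CIDs and the local IPLD store:

```go
package main

import "fmt"

// CID is a placeholder for a real content identifier.
type CID string

// missingMessages returns the subset of a block's message CIDs not already
// in the local store, i.e. what the receiver still has to Graphsync.Pull
// from the sender in Step 3.
func missingMessages(msgCids []CID, local map[CID]bool) []CID {
	var missing []CID
	for _, c := range msgCids {
		if !local[c] {
			missing = append(missing, c)
		}
	}
	return missing
}

func main() {
	// messages m1 and m3 already arrived via MessagePubsub
	local := map[CID]bool{"m1": true, "m3": true}
	blockMsgs := []CID{"m1", "m2", "m3", "m4"}
	fmt.Println(missingMessages(blockMsgs, local)) // [m2 m4]
}
```

Since messages are the bulk of a block's size, skipping the ones already held via MessagePubsub is where most of the bandwidth savings come from.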

Calculations

Security Parameters
  • Peers >= 32 – direct connections
    • ideally Peers >= {64, 128}
Pubsub Bandwidth

These bandwidth calculations are used to motivate choices in ChainSync.

If you imagine that you will receive the header once per gossipsub peer (or if lucky, half of them), and that there is EC.E_LEADERS=10 blocks per round, then we’re talking the difference between:

16 peers, 1 pkt  -- 1 * 16 * 10 = 160 dup pkts (256KB) in <5s
16 peers, 4 pkts -- 4 * 16 * 10 = 640 dup pkts (1MB)   in <5s

32 peers, 1 pkt  -- 1 * 32 * 10 =   320 dup pkts (512KB) in <5s
32 peers, 4 pkts -- 4 * 32 * 10 = 1,280 dup pkts (2MB)   in <5s

64 peers, 1 pkt  -- 1 * 64 * 10 =   640 dup pkts (1MB)   in <5s
64 peers, 4 pkts -- 4 * 64 * 10 = 2,560 dup pkts (4MB)   in <5s

2MB in <5s may not be worth saving – and maybe gossipsub can be much better about suppressing duplicates.
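The duplicate-packet arithmetic above is just pktsPerHeader * peers * blocksPerRound; a small sketch reproducing the table's figures (the ~1.6KB-per-packet assumption matches the 256KB figure for 160 packets):

```go
package main

import "fmt"

// dupPackets reproduces the arithmetic in the table above: each of the
// node's gossipsub peers may forward each of the round's block headers.
func dupPackets(peers, pktsPerHeader, blocksPerRound int) int {
	return pktsPerHeader * peers * blocksPerRound
}

func main() {
	const eLeaders = 10 // EC.E_LEADERS: expected blocks per round
	fmt.Println(dupPackets(16, 1, eLeaders)) // 160
	fmt.Println(dupPackets(32, 4, eLeaders)) // 1280
	fmt.Println(dupPackets(64, 4, eLeaders)) // 2560
}
```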

Notes (TODO: move elsewhere)

Checkpoints
  • A checkpoint is the CID of a block (not a tipset list of CIDs, or StateTree)
  • The reason a block is OK is that it uniquely identifies a tipset.
  • using tipsets directly would make Checkpoints harder to communicate. We want a checkpoint to be a single hash, as short as we can have it. Checkpoints will be shared in tweets, URLs, emails, printed into newspapers, etc. Compactness, ease of copy-paste, etc. matter.
  • we will make human-readable lists of checkpoints, and making “lists of lists” is more annoying.
  • when we have EC.E_PARENTS > 5 or = 10, tipsets will get annoyingly large.
  • the big quirk/weirdness with blocks is that the checkpoint block must also be in the chain. (If you relaxed that constraint, you could end up in the weird case where a checkpoint is not in the chain, which violates assumptions.)

Bootstrap chain stub
  • the mainnet filecoin chain will need to start with a small chain stub of blocks.
  • we must include some data in different blocks.
  • we do need a genesis block – we derive randomness from the ticket there. Rather than special-casing, it is easier/less complex to ensure a well-formed chain always, including at the beginning.
  • A lot of code expects lookbacks, especially actor code. Rather than introducing a bunch of special-case logic for what happens ostensibly once in network history (special-case logic which adds complexity and likelihood of problems), it is easiest to assume the chain is always at least X blocks long, so the system lookback parameters are all fine and do not need to be scaled at the beginning of the network’s history.
PartialGraph

The PartialGraph of blocks.

Is a graph necessarily connected, or is this just a bag of blocks, with each disconnected subgraph being reported in heads/tails?

The latter. The partial graph is a DAG fragment – including disconnected components. As a visual example, consider 4 example PartialGraphs, each with Heads and Tails (note they are not tipsets).
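A minimal sketch of a PartialGraph as a bag of blocks with parent links, where Heads are held blocks no other held block references and Tails are held blocks whose parents are missing; disconnected components fall out naturally. All names here are illustrative, not the spec's types:

```go
package main

import "fmt"

// BlockID is a placeholder for a block CID.
type BlockID string

// PartialGraph holds a bag of blocks and their parent links. Parents that
// are not themselves keys of the map are not (yet) held locally.
type PartialGraph struct {
	parents map[BlockID][]BlockID
}

// Heads returns held blocks that no other held block references as a parent.
func (g *PartialGraph) Heads() []BlockID {
	referenced := map[BlockID]bool{}
	for _, ps := range g.parents {
		for _, p := range ps {
			referenced[p] = true
		}
	}
	var heads []BlockID
	for b := range g.parents {
		if !referenced[b] {
			heads = append(heads, b)
		}
	}
	return heads
}

// Tails returns held blocks with at least one parent that is not held.
func (g *PartialGraph) Tails() []BlockID {
	var tails []BlockID
	for b, ps := range g.parents {
		for _, p := range ps {
			if _, held := g.parents[p]; !held {
				tails = append(tails, b)
				break
			}
		}
	}
	return tails
}

func main() {
	// two disconnected fragments: c -> b -> a(missing), and e -> d(missing)
	g := &PartialGraph{parents: map[BlockID][]BlockID{
		"b": {"a"}, "c": {"b"}, "e": {"d"},
	}}
	fmt.Println(len(g.Heads()), len(g.Tails())) // 2 2
}
```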

Storage Power Consensus

The Storage Power Consensus subsystem is the main interface which enables Filecoin nodes to agree on the state of the system. SPC accounts for individual storage miners’ effective power over consensus in given chains in its Power Table. It also runs Expected Consensus (the underlying consensus algorithm in use by Filecoin), enabling storage miners to run leader election and generate new blocks updating the state of the Filecoin system.

Succinctly, the SPC subsystem offers the following services:

Much of the Storage Power Consensus subsystem’s functionality is detailed in the code below, but we touch upon some of its behaviors in more detail.

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import addr "github.com/filecoin-project/go-address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import blockchain "github.com/filecoin-project/specs/systems/filecoin_blockchain"
import spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
import node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
import util "github.com/filecoin-project/specs/util"

type StoragePowerConsensusSubsystem struct {//(@mutable)
    ChooseTipsetToMine(tipsets [chain.Tipset]) [chain.Tipset]

    node        node_base.FilecoinNode
    ec          ExpectedConsensus
    blockchain  blockchain.BlockchainSubsystem

    // called by BlockchainSubsystem during block reception
    ValidateBlock(block block.Block) error

    IsWinningPartialTicket(
        st                 st.StateTree
        partialTicket      abi.PartialTicket
        sectorUtilization  abi.StoragePower
        numSectors         util.UVarint
    ) bool

    _getStoragePowerActorState(stateTree st.StateTree) spowact.StoragePowerActorState

    validateTicket(
        tix             block.Ticket
        pk              filcrypto.VRFPublicKey
        minerActorAddr  addr.Address
    ) bool

    computeChainWeight(tipset chain.Tipset) block.ChainWeight

    StoragePowerConsensusError() StoragePowerConsensusError

    GetFinalizedEpoch(currentEpoch abi.ChainEpoch) abi.ChainEpoch
}

type StoragePowerConsensusError struct {}
Distinguishing between storage miners and block miners

There are two ways to earn Filecoin tokens in the Filecoin network:

  • By participating in the Storage Market as a storage provider and being paid by clients for file storage deals.
  • By mining new blocks on the network, helping modify system state and secure the Filecoin consensus mechanism.

We must distinguish between both types of “miners” (storage and block miners). Secret Leader Election in Filecoin is predicated on a miner’s storage power. Thus, while all block miners will be storage miners, the reverse is not necessarily true.

However, given Filecoin’s “useful Proof-of-Work” is achieved through file storage (PoRep and PoSt), there is little overhead cost for storage miners to participate in leader election. Such a Storage Miner Actor need only register with the Storage Power Actor in order to participate in Expected Consensus and mine blocks.

On Power

Claimed power is assigned to every sector as a static function of its SectorStorageWeightDesc which includes SectorSize, Duration, and DealWeight. DealWeight is a measure that maps size and duration of active deals in a sector during its lifetime to its impact on power and reward distribution. A CommittedCapacity Sector (see Sector Types) will have a DealWeight of zero but all sectors have an explicit Duration which is defined from the ChainEpoch that the sector comes online in a ProveCommit message to the Expiration ChainEpoch of the sector. In principle, power is the number of votes a miner has in leader election and it is a point in time concept of storage. However, the exact function that maps SectorStorageWeightDesc to claimed StoragePower and BlockReward will be announced soon.

More precisely,

  • Claimed power = power from ProveCommitted sectors, minus the power of sectors in TemporaryFault for their effective fault duration.
  • Nominal power = claimed power, unless the miner is in DetectedFault or Challenged state. Nominal power is used to determine total network storage power for purposes of consensus minimum.
  • Consensus power = nominal power, unless the miner fails to meet consensus minimum, or is undercollateralized.
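A sketch of the claimed/nominal/consensus power distinctions above; the field names and the consensus-minimum threshold are placeholders, not the actual protocol parameters:

```go
package main

import "fmt"

// MinerState is a placeholder for the miner state consulted by the
// Storage Power Actor.
type MinerState struct {
	ProveCommittedPower       int64 // power from ProveCommitted sectors
	TempFaultPower            int64 // power of sectors in TemporaryFault
	DetectedFaultOrChallenged bool
	Undercollateralized       bool
}

const consensusMinimumPower = 100 // placeholder threshold

// claimedPower: ProveCommitted power minus TemporaryFault power.
func claimedPower(m MinerState) int64 {
	return m.ProveCommittedPower - m.TempFaultPower
}

// nominalPower: claimed power, zeroed while DetectedFault or Challenged.
func nominalPower(m MinerState) int64 {
	if m.DetectedFaultOrChallenged {
		return 0
	}
	return claimedPower(m)
}

// consensusPower: nominal power, zeroed below the consensus minimum or
// when undercollateralized.
func consensusPower(m MinerState) int64 {
	np := nominalPower(m)
	if np < consensusMinimumPower || m.Undercollateralized {
		return 0
	}
	return np
}

func main() {
	m := MinerState{ProveCommittedPower: 150, TempFaultPower: 30}
	fmt.Println(claimedPower(m), nominalPower(m), consensusPower(m)) // 120 120 120
	m.DetectedFaultOrChallenged = true
	fmt.Println(consensusPower(m)) // 0
}
```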
Tickets

Tickets are used across the Filecoin protocol as sources of randomness:

  • The Sector Sealer uses tickets as SealSeeds to bind sector commitments to a given subchain.
  • The Storage Miner likewise uses tickets as PoStChallenges to prove sectors remain committed as of a given block.
  • They are drawn by the Storage Power subsystem as randomness in Secret Leader Election to determine a miner’s eligibility to mine a block.
  • They are drawn by the Storage Power subsystem in order to generate new tickets for future use.

Each of these ticket uses may require drawing tickets at different chain epochs, according to the security requirements of the particular protocol making use of tickets. Specifically, the ticket output (which is a SHA256 output) is used for randomness.

In Filecoin, every block header contains a single ticket.

You can find the Ticket data structure here.

Comparing Tickets in a Tipset

Whenever ticket comparison is invoked in Filecoin, for instance when selecting the “min ticket” in a Tipset, the comparison is of the little endian representation of the ticket’s VRFOutput bytes.
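A sketch of min-ticket selection under the little-endian comparison described above: bytes are compared from the last byte down, so the result can differ from a naive lexicographic (big-endian) comparison. Names are illustrative:

```go
package main

import "fmt"

// lessLE compares two equal-length VRF outputs as little-endian integers,
// i.e. starting from the most significant (last) byte.
func lessLE(a, b []byte) bool {
	for i := len(a) - 1; i >= 0; i-- {
		if a[i] != b[i] {
			return a[i] < b[i]
		}
	}
	return false
}

// minTicket returns the smallest ticket output under the little-endian order.
func minTicket(tickets [][]byte) []byte {
	smallest := tickets[0]
	for _, t := range tickets[1:] {
		if lessLE(t, smallest) {
			smallest = t
		}
	}
	return smallest
}

func main() {
	t1 := []byte{0x09, 0x01} // little-endian value 0x0109
	t2 := []byte{0x01, 0x02} // little-endian value 0x0201
	// lexicographic order would pick t2; little-endian order picks t1
	fmt.Printf("%x\n", minTicket([][]byte{t1, t2})) // 0901
}
```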

The Ticket chain and drawing randomness

While each Filecoin block header contains a ticket field (see Tickets), it is useful to think of a ticket chain abstraction. Due to the nature of Filecoin’s Tipsets and the possibility of using tickets from epochs that did not yield leaders to produce randomness at a given epoch, tracking the canonical ticket of a subchain at a given height can be arduous to reason about in terms of blocks. To that end, it is helpful to create a ticket chain abstraction made up of only those tickets to be used for randomness generation at a given height.

To read more about specifically how tickets are processed for randomness, see Randomness.

To sample a ticket for a given epoch n:

Set referenceTipsetOffset = 0
While true:
    Set referenceTipsetHeight = n - referenceTipsetOffset
    If blocks were mined at referenceTipsetHeight:
        ReferenceTipset = TipsetAtHeight(referenceTipsetHeight)
        Select the block in ReferenceTipset with the smallest final ticket; its output is pastTicketOutput. Stop.
    If no blocks were mined at referenceTipsetHeight:
        Increment referenceTipsetOffset
        (Repeat)
newRandomness = H(TicketDrawDST || index || Serialization(epoch || pastTicketOutput))

In plain language, this means two things:

  • Choose the smallest ticket in the Tipset if it contains multiple blocks.
  • When sampling a ticket from an epoch with no blocks, draw the min ticket from the prior epoch with blocks, concatenate it with the wanted epoch number, and hash this concatenation for a usable ticket value.

See the RandomnessAtEpoch method below:

package chain

import (
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
)

// Returns the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) TipsetAtEpoch(epoch abi.ChainEpoch) Tipset {
	current := chain.HeadTipset()
	for current.Epoch() > epoch {
		current = current.Parents()
	}

	return current
}

// Draws randomness from the tipset at or immediately prior to `epoch`.
func (chain *Chain_I) RandomnessAtEpoch(epoch abi.ChainEpoch) abi.RandomnessSeed {
	ts := chain.TipsetAtEpoch(epoch)
	return ts.MinTicket().DrawRandomness(epoch)
}

func (chain *Chain_I) GetTicketProductionRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
	return chain.RandomnessAtEpoch(epoch - node_base.SPC_LOOKBACK_TICKET)
}

func (chain *Chain_I) GetSealRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
	return chain.RandomnessAtEpoch(epoch - builtin.SPC_LOOKBACK_SEAL)
}

func (chain *Chain_I) GetPoStChallengeRandSeed(epoch abi.ChainEpoch) abi.RandomnessSeed {
	return chain.RandomnessAtEpoch(epoch - builtin.SPC_LOOKBACK_POST)
}

The above means that ticket randomness is reseeded with every new block, but can indeed be derived by any miner for an arbitrary epoch number using a past epoch.

Randomness Ticket generation

This section discusses how tickets are generated by EC for the Ticket field in every block header.

At round N, a new ticket is generated using tickets drawn from the Tipset at round N-1 (as shown below).

The miner runs the prior ticket through a Verifiable Random Function (VRF) to get a new unique ticket from which randomness can later be derived (as shown above). The prior ticket is prepended with the ticket domain separation tag and concatenated with the miner actor address (to ensure miners using the same worker keys get different randomness).

To generate a ticket for a given epoch n:

pastTicket = MinTicketValueAtEpoch(n-1)
newRandomness = VRF_miner(H(TicketProdDST || index || Serialization(pastTicket, minerActorAddress)))

The VRF’s deterministic output adds entropy to the ticket chain, limiting a miner’s ability to alter one block to influence a future ticket (given a miner does not know who will win a given round in advance).

We use the VRF from Verifiable Random Function for ticket generation in EC (see the PrepareNewTicket method below).

package storage_mining

import (
	addr "github.com/filecoin-project/go-address"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	smarkact "github.com/filecoin-project/specs-actors/actors/builtin/storage_market"
	sminact "github.com/filecoin-project/specs-actors/actors/builtin/storage_miner"
	spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
	acrypto "github.com/filecoin-project/specs-actors/actors/crypto"
	indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
	serde "github.com/filecoin-project/specs-actors/actors/serde"
	filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
	filproofs "github.com/filecoin-project/specs/libraries/filcrypto/filproofs"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
	stateTree "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	util "github.com/filecoin-project/specs/util"
	cid "github.com/ipfs/go-cid"
	peer "github.com/libp2p/go-libp2p-core/peer"
)

type Serialization = util.Serialization

var Assert = util.Assert
var TODO = util.TODO

// Note that implementations may choose to provide default generation methods for miners created
// without miner/owner keypairs. We omit these details from the spec.
// Also note that the pledge amount should be available in the ownerAddr in order for this call
// to succeed.
func (sms *StorageMiningSubsystem_I) CreateMiner(
	state stateTree.StateTree,
	ownerAddr addr.Address,
	workerAddr addr.Address,
	sectorSize util.UInt,
	peerId peer.ID,
	pledgeAmt abi.TokenAmount,
) (addr.Address, error) {

	ownerActor, ok := state.GetActor(ownerAddr)
	Assert(ok)

	unsignedCreationMessage := &msg.UnsignedMessage_I{
		From_:       ownerAddr,
		To_:         builtin.StoragePowerActorAddr,
		Method_:     builtin.Method_StoragePowerActor_CreateMiner,
		Params_:     serde.MustSerializeParams(ownerAddr, workerAddr, peerId),
		CallSeqNum_: ownerActor.CallSeqNum(),
		Value_:      pledgeAmt,
		GasPrice_:   0,
		GasLimit_:   msg.GasAmount_SentinelUnlimited(),
	}

	var workerKey filcrypto.SigKeyPair // sms._keyStore().Worker()
	signedMessage, err := msg.Sign(unsignedCreationMessage, workerKey)
	if err != nil {
		return addr.Undef, err
	}

	err = sms.Node().MessagePool().Syncer().SubmitMessage(signedMessage)
	if err != nil {
		return addr.Undef, err
	}

	// WAIT for block reception with appropriate response from SPA
	util.IMPL_TODO()

	// harvest address from that block
	var storageMinerAddr addr.Address
	// and set in key store appropriately
	return storageMinerAddr, nil
}

func (sms *StorageMiningSubsystem_I) HandleStorageDeal(deal smarkact.StorageDeal) {
	sms.SectorIndex().AddNewDeal(deal)
	// stagedDealResponse := sms.SectorIndex().AddNewDeal(deal)
	// TODO: way within a node to notify different components
	// market.StorageProvider().NotifyStorageDealStaged(&storage_provider.StorageDealStagedNotification_I{
	// 	Deal_:     deal,
	// 	SectorID_: stagedDealResponse.SectorID(),
	// })
}

func (sms *StorageMiningSubsystem_I) CommitSectorError() smarkact.StorageDeal {
	panic("TODO")
}

// triggered by new block reception and tipset assembly
func (sms *StorageMiningSubsystem_I) OnNewBestChain() {
	sms._runMiningCycle()
}

// triggered by wall clock
func (sms *StorageMiningSubsystem_I) OnNewRound() {
	sms._runMiningCycle()
}

func (sms *StorageMiningSubsystem_I) _runMiningCycle() {
	chainHead := sms._blockchain().BestChain().HeadTipset()
	sma := sms._getStorageMinerActorState(chainHead.StateTree(), sms.Node().Repository().KeyStore().MinerAddress())

	if sma.PoStState.Is_OK() {
		ePoSt := sms._tryLeaderElection(chainHead.StateTree(), sma)
		if ePoSt != nil {
			// Randomness for ticket generation in block production
			randomness1 := sms._blockchain().BestChain().GetTicketProductionRandSeed(sms._blockchain().LatestEpoch())
			newTicket := sms.PrepareNewTicket(randomness1, sms.Node().Repository().KeyStore().MinerAddress())

			sms._blockProducer().GenerateBlock(*ePoSt, newTicket, chainHead, sms.Node().Repository().KeyStore().MinerAddress())
		}
	} else if sma.PoStState.Is_Challenged() {
		sPoSt := sms._trySurprisePoSt(chainHead.StateTree(), sma)

		var gasLimit msg.GasAmount
		var gasPrice = abi.TokenAmount(0)
		util.IMPL_FINISH("read from consts (in this case user set param)")
		sms._submitSurprisePoStMessage(chainHead.StateTree(), *sPoSt, gasPrice, gasLimit)
	}
}

func (sms *StorageMiningSubsystem_I) _tryLeaderElection(currState stateTree.StateTree, sma sminact.StorageMinerActorState) *abi.OnChainElectionPoStVerifyInfo {
	// Randomness for ElectionPoSt

	randomnessK := sms._blockchain().BestChain().GetPoStChallengeRandSeed(sms._blockchain().LatestEpoch())
	input := acrypto.DeriveRandWithMinerAddr(acrypto.DomainSeparationTag_ElectionPoStChallengeSeed, randomnessK, sms.Node().Repository().KeyStore().MinerAddress())
	// Use VRF to generate secret randomness
	postRandomness := sms.Node().Repository().KeyStore().WorkerKey().Impl().Generate(input).Output()

	// TODO: add how sectors are actually stored in the SMS proving set
	util.TODO()
	provingSet := make([]abi.SectorID, 0)

	candidates := sms.StorageProving().Impl().GenerateElectionPoStCandidates(postRandomness, provingSet)

	if len(candidates) <= 0 {
		return nil // fail to generate post candidates
	}

	winningCandidates := make([]abi.PoStCandidate, 0)

	var numMinerSectors uint64
	TODO() // update
	// numMinerSectors := uint64(len(sma.SectorTable().Impl().ActiveSectors_.SectorsOn()))

	for _, candidate := range candidates {
		sectorNum := candidate.SectorID.Number
		sectorWeightDesc, ok := sma.GetStorageWeightDescForSectorMaybe(sectorNum)
		if !ok {
			return nil
		}
		sectorPower := indices.ConsensusPowerForStorageWeight(sectorWeightDesc)
		if sms._consensus().IsWinningPartialTicket(currState, candidate.PartialTicket, sectorPower, numMinerSectors) {
			winningCandidates = append(winningCandidates, candidate)
		}
	}

	if len(winningCandidates) <= 0 {
		return nil
	}

	postProofs := sms.StorageProving().Impl().CreateElectionPoStProof(postRandomness, winningCandidates)

	electionPoSt := &abi.OnChainElectionPoStVerifyInfo{
		Candidates: winningCandidates,
		Randomness: postRandomness,
		Proofs:     postProofs,
	}

	return electionPoSt
}

func (sms *StorageMiningSubsystem_I) PrepareNewTicket(randomness abi.RandomnessSeed, minerActorAddr addr.Address) block.Ticket {
	// run it through the VRF and get deterministic output

	// take the VRFResult of that ticket as input, specifying the personalization (see data structures)
	// append the miner actor address for the miner generating this ticket in order to prevent miners with the same
	// worker keys from generating the same randomness (given the VRF)
	input := acrypto.DeriveRandWithMinerAddr(acrypto.DomainSeparationTag_TicketProduction, randomness, minerActorAddr)

	// run through VRF
	vrfRes := sms.Node().Repository().KeyStore().WorkerKey().Impl().Generate(input)

	newTicket := &block.Ticket_I{
		VRFResult_: vrfRes,
		Output_:    vrfRes.Output(),
	}

	return newTicket
}

func (sms *StorageMiningSubsystem_I) _getStorageMinerActorState(stateTree stateTree.StateTree, minerAddr addr.Address) sminact.StorageMinerActorState {
	actorState, ok := stateTree.GetActor(minerAddr)
	util.Assert(ok)
	substateCID := actorState.State()

	substate, ok := sms.Node().Repository().StateStore().Get(cid.Cid(substateCID))
	if !ok {
		panic("Couldn't find sma state")
	}
	// fix conversion to bytes
	util.IMPL_TODO(substate)
	var serializedSubstate util.Serialization
	var st sminact.StorageMinerActorState
	serde.MustDeserialize(serializedSubstate, &st)
	return st
}

func (sms *StorageMiningSubsystem_I) _getStoragePowerActorState(stateTree stateTree.StateTree) spowact.StoragePowerActorState {
	powerAddr := builtin.StoragePowerActorAddr
	actorState, ok := stateTree.GetActor(powerAddr)
	util.Assert(ok)
	substateCID := actorState.State()

	substate, ok := sms.Node().Repository().StateStore().Get(cid.Cid(substateCID))
	if !ok {
		panic("Couldn't find spa state")
	}

	// fix conversion to bytes
	util.IMPL_TODO(substate)
	var serializedSubstate util.Serialization
	var st spowact.StoragePowerActorState
	serde.MustDeserialize(serializedSubstate, &st)
	return st
}

func (sms *StorageMiningSubsystem_I) VerifyElectionPoSt(inds indices.Indices, header block.BlockHeader, onChainInfo abi.OnChainElectionPoStVerifyInfo) bool {
	sma := sms._getStorageMinerActorState(header.ParentState(), header.Miner())
	spa := sms._getStoragePowerActorState(header.ParentState())

	pow, found := spa.PowerTable[header.Miner()]
	if !found {
		return false
	}

	// 1. Verify miner has enough power (includes implicit checks on min miner size
	// and challenge status via SPA's power table).
	if pow == abi.StoragePower(0) {
		return false
	}

	// 2. verify no duplicate tickets included
	challengeIndices := make(map[int64]bool)
	for _, tix := range onChainInfo.Candidates {
		if _, ok := challengeIndices[tix.ChallengeIndex]; ok {
			return false
		}
		challengeIndices[tix.ChallengeIndex] = true
	}

	// 3. Verify partialTicket values are appropriate
	if !sms._verifyElection(header, onChainInfo) {
		return false
	}

	// 4. Verify appropriate randomness was used to generate the partial tickets
	// TODO: fix away from BestChain()... every block should track its own chain up to its own production.
	randomness := sms._blockchain().BestChain().GetPoStChallengeRandSeed(header.Epoch())
	input := acrypto.DeriveRandWithMinerAddr(acrypto.DomainSeparationTag_ElectionPoStChallengeSeed, randomness, header.Miner())

	postRand := &filcrypto.VRFResult_I{
		Output_: onChainInfo.Randomness,
	}

	// TODO if the workerAddress is secp then payload will be the blake2b hash of its public key
	// and we will need to recover the entire public key from worker before handing off to Verify
	// example of recover code: https://github.com/ipsn/go-secp256k1/blob/master/secp256.go#L93
	workerKey := sma.Info.Worker.Payload()
	// Verify VRF output from appropriate input corresponds to randomness used
	if !postRand.Verify(input, filcrypto.VRFPublicKey(workerKey)) {
		return false
	}

	// A proof must be a valid snark proof with the correct public inputs
	// 5. Get public inputs
	info := sma.Info
	sectorSize := info.SectorSize

	postCfg := filproofs.ElectionPoStCfg(sectorSize)

	pvInfo := abi.PoStVerifyInfo{
		Candidates: onChainInfo.Candidates,
		Proofs:     onChainInfo.Proofs,
		Randomness: onChainInfo.Randomness,
	}

	pv := filproofs.MakeElectionPoStVerifier(postCfg)

	// 6. Verify the PoSt proof
	isPoStVerified := pv.VerifyElectionPoSt(pvInfo)

	return isPoStVerified
}

func (sms *StorageMiningSubsystem_I) _verifyElection(header block.BlockHeader, onChainInfo abi.OnChainElectionPoStVerifyInfo) bool {
	st := sms._getStorageMinerActorState(header.ParentState(), header.Miner())

	var numMinerSectors uint64
	TODO()
	// TODO: Decide whether to sample sectors uniformly for EPoSt (the cleanest),
	// or to sample weighted by nominal power.

	for _, info := range onChainInfo.Candidates {
		sectorNum := info.SectorID.Number
		sectorWeightDesc, ok := st.GetStorageWeightDescForSectorMaybe(sectorNum)
		if !ok {
			return false
		}
		sectorPower := indices.ConsensusPowerForStorageWeight(sectorWeightDesc)
		if !sms._consensus().IsWinningPartialTicket(header.ParentState(), info.PartialTicket, sectorPower, numMinerSectors) {
			return false
		}
	}
	return true
}

func (sms *StorageMiningSubsystem_I) _trySurprisePoSt(currState stateTree.StateTree, sma sminact.StorageMinerActorState) *abi.OnChainSurprisePoStVerifyInfo {
	if !sma.PoStState.Is_Challenged() {
		return nil
	}

	// get randomness for SurprisePoSt
	challEpoch := sma.PoStState.SurpriseChallengeEpoch
	randomnessK := sms._blockchain().BestChain().GetPoStChallengeRandSeed(challEpoch)
	// unlike with ElectionPoSt no need to use a VRF
	postRandomness := acrypto.DeriveRandWithMinerAddr(acrypto.DomainSeparationTag_SurprisePoStChallengeSeed, randomnessK, sms.Node().Repository().KeyStore().MinerAddress())

	// TODO: add how sectors are actually stored in the SMS proving set
	util.TODO()
	provingSet := make([]abi.SectorID, 0)

	candidates := sms.StorageProving().Impl().GenerateSurprisePoStCandidates(abi.PoStRandomness(postRandomness), provingSet)

	if len(candidates) <= 0 {
		// Error. Will fail this surprise post and must then redeclare faults
		return nil // fail to generate post candidates
	}

	winningCandidates := make([]abi.PoStCandidate, 0)
	for _, candidate := range candidates {
		if sma.VerifySurprisePoStMeetsTargetReq(candidate) {
			winningCandidates = append(winningCandidates, candidate)
		}
	}

	postProofs := sms.StorageProving().Impl().CreateSurprisePoStProof(abi.PoStRandomness(postRandomness), winningCandidates)

	// var ctc sector.ChallengeTicketsCommitment // TODO: proofs to fix when complete
	surprisePoSt := &abi.OnChainSurprisePoStVerifyInfo{
		// CommT_:      ctc,
		Candidates: winningCandidates,
		Proofs:     postProofs,
	}
	return surprisePoSt
}

func (sms *StorageMiningSubsystem_I) _submitSurprisePoStMessage(state stateTree.StateTree, sPoSt abi.OnChainSurprisePoStVerifyInfo, gasPrice abi.TokenAmount, gasLimit msg.GasAmount) error {
	// TODO if workerAddr is not a secp key (e.g. BLS) then this will need to be handled differently
	workerAddr, err := addr.NewSecp256k1Address(sms.Node().Repository().KeyStore().WorkerKey().VRFPublicKey())
	if err != nil {
		return err
	}
	worker, ok := state.GetActor(workerAddr)
	util.Assert(ok)
	unsignedCreationMessage := &msg.UnsignedMessage_I{
		From_:       sms.Node().Repository().KeyStore().MinerAddress(),
		To_:         sms.Node().Repository().KeyStore().MinerAddress(),
		Method_:     builtin.Method_StorageMinerActor_SubmitSurprisePoStResponse,
		Params_:     serde.MustSerializeParams(sPoSt),
		CallSeqNum_: worker.CallSeqNum(),
		Value_:      abi.TokenAmount(0),
		GasPrice_:   gasPrice,
		GasLimit_:   gasLimit,
	}

	var workerKey filcrypto.SigKeyPair // sms.Node().Repository().KeyStore().Worker()
	signedMessage, err := msg.Sign(unsignedCreationMessage, workerKey)
	if err != nil {
		return err
	}

	err = sms.Node().MessagePool().Syncer().SubmitMessage(signedMessage)
	if err != nil {
		return err
	}

	return nil
}
Ticket Validation

Each Ticket should be generated from the prior one in the ticket-chain and verified accordingly as shown in validateTicket below.
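The chained derivation and check can be sketched as follows. This is a minimal, self-contained illustration, not spec code: `Ticket`, `deriveInput`, and `mockVRF` are hypothetical stand-ins for the spec's `block.Ticket`, `DeriveRandWithMinerAddr`, and the worker key's (keyed, verifiable) VRF.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// Ticket is a simplified stand-in for block.Ticket: each ticket's VRF input is
// derived from the previous ticket's output, forming the ticket-chain.
type Ticket struct {
	Output []byte // VRF output (mocked with a hash here)
}

// deriveInput mimics DeriveRandWithMinerAddr: domain-separate the previous
// output with the miner address so miners sharing a worker key derive
// different randomness.
func deriveInput(prev []byte, minerAddr string) []byte {
	h := sha256.Sum256(append(append([]byte("ticket:"), prev...), minerAddr...))
	return h[:]
}

// mockVRF stands in for the worker key's VRF; a real VRF is keyed and
// publicly verifiable against the worker's public key.
func mockVRF(input []byte) []byte {
	h := sha256.Sum256(append([]byte("vrf:"), input...))
	return h[:]
}

// validateTicket checks that tix was correctly derived from prev.
func validateTicket(tix, prev Ticket, minerAddr string) bool {
	expected := mockVRF(deriveInput(prev.Output, minerAddr))
	return bytes.Equal(expected, tix.Output)
}

func main() {
	prev := Ticket{Output: []byte("genesis")}
	next := Ticket{Output: mockVRF(deriveInput(prev.Output, "t01000"))}
	fmt.Println(validateTicket(next, prev, "t01000")) // true
	fmt.Println(validateTicket(prev, next, "t01000")) // false
}
```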

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import addr "github.com/filecoin-project/go-address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import blockchain "github.com/filecoin-project/specs/systems/filecoin_blockchain"
import spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
import node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"

type StoragePowerConsensusSubsystem struct {//(@mutable)
    ChooseTipsetToMine(tipsets [chain.Tipset]) [chain.Tipset]

    node        node_base.FilecoinNode
    ec          ExpectedConsensus
    blockchain  blockchain.BlockchainSubsystem

    // call by BlockchainSubsystem during block reception
    ValidateBlock(block block.Block) error

    IsWinningPartialTicket(
        st                 st.StateTree
        partialTicket      abi.PartialTicket
        sectorUtilization  abi.StoragePower
        numSectors         util.UVarint
    ) bool

    _getStoragePowerActorState(stateTree st.StateTree) spowact.StoragePowerActorState

    validateTicket(
        tix             block.Ticket
        pk              filcrypto.VRFPublicKey
        minerActorAddr  addr.Address
    ) bool

    computeChainWeight(tipset chain.Tipset) block.ChainWeight

    StoragePowerConsensusError() StoragePowerConsensusError

    GetFinalizedEpoch(currentEpoch abi.ChainEpoch) abi.ChainEpoch
}

type StoragePowerConsensusError struct {}
package storage_power_consensus

import (
	"math"

	addr "github.com/filecoin-project/go-address"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	spowact "github.com/filecoin-project/specs-actors/actors/builtin/storage_power"
	acrypto "github.com/filecoin-project/specs-actors/actors/crypto"
	inds "github.com/filecoin-project/specs-actors/actors/runtime/indices"
	serde "github.com/filecoin-project/specs-actors/actors/serde"
	filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	chain "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/chain"
	node_base "github.com/filecoin-project/specs/systems/filecoin_nodes/node_base"
	stateTree "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
	util "github.com/filecoin-project/specs/util"
	cid "github.com/ipfs/go-cid"
)

// Storage Power Consensus Subsystem

func (spc *StoragePowerConsensusSubsystem_I) ValidateBlock(block block.Block_I) error {
	util.IMPL_FINISH()
	return nil
}

func (spc *StoragePowerConsensusSubsystem_I) validateTicket(ticket block.Ticket, pk filcrypto.VRFPublicKey, minerActorAddr addr.Address) bool {
	randomness1 := spc.blockchain().BestChain().GetTicketProductionRandSeed(spc.blockchain().LatestEpoch())

	return ticket.Verify(randomness1, pk, minerActorAddr)
}

func (spc *StoragePowerConsensusSubsystem_I) ComputeChainWeight(tipset chain.Tipset) block.ChainWeight {
	return spc.ec().ComputeChainWeight(tipset)
}

func (spc *StoragePowerConsensusSubsystem_I) IsWinningPartialTicket(stateTree stateTree.StateTree, inds inds.Indices, partialTicket abi.PartialTicket, sectorUtilization abi.StoragePower, numSectors util.UVarint) bool {

	// finalize the partial ticket
	challengeTicket := acrypto.SHA256(abi.Bytes(partialTicket))

	networkPower := inds.TotalNetworkEffectivePower()

	sectorsSampled := uint64(math.Ceil(float64(node_base.EPOST_SAMPLE_RATE_NUM) / float64(node_base.EPOST_SAMPLE_RATE_DENOM) * float64(numSectors)))

	return spc.ec().IsWinningChallengeTicket(challengeTicket, sectorUtilization, networkPower, sectorsSampled, numSectors)
}

func (spc *StoragePowerConsensusSubsystem_I) _getStoragePowerActorState(stateTree stateTree.StateTree) spowact.StoragePowerActorState {
	powerAddr := builtin.StoragePowerActorAddr
	actorState, ok := stateTree.GetActor(powerAddr)
	util.Assert(ok)
	substateCID := actorState.State()

	substate, ok := spc.node().Repository().StateStore().Get(cid.Cid(substateCID))
	util.Assert(ok)

	// fix conversion to bytes
	util.IMPL_FINISH(substate)
	var serializedSubstate util.Serialization
	var st spowact.StoragePowerActorState
	serde.MustDeserialize(serializedSubstate, &st)
	return st
}

func (spc *StoragePowerConsensusSubsystem_I) GetFinalizedEpoch(currentEpoch abi.ChainEpoch) abi.ChainEpoch {
	return currentEpoch - node_base.FINALITY
}

Repeated Leader Election attempts

If no miner is eligible to produce a block in a given round of EC, the block producer calls the storage power consensus subsystem to attempt another leader election. It does so by incrementing the nonce appended to the ticket drawn from the past, deriving a new PartialTicket, and checking again for a win. Note that a miner may attempt to grind through tickets by incrementing the nonce repeatedly until they find a winning ticket. However, any block so generated for a future round will be rejected by other miners (with synchronized clocks) until that epoch’s appropriate time.
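The retry loop can be sketched as follows. This is a toy illustration under stated assumptions: `drawWithNonce` and `isWinning` are hypothetical stand-ins for the spec's PartialTicket derivation and the power-scaled winning-ticket check, and the fixed threshold is arbitrary.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// drawWithNonce hashes the past ticket together with an incrementing nonce,
// standing in for the spec's PartialTicket derivation.
func drawWithNonce(pastTicket []byte, nonce uint64) []byte {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], nonce)
	h := sha256.Sum256(append(pastTicket, buf[:]...))
	return h[:]
}

// isWinning is a toy eligibility check; the real EC check compares the
// ticket against a target scaled by the miner's share of network power.
func isWinning(ticket []byte) bool {
	return ticket[0] < 8 // ~3% of draws win in this mock
}

func main() {
	past := []byte("ticket-from-lookback-epoch")
	// Keep incrementing the nonce until a draw wins (or give up this round).
	for nonce := uint64(0); nonce < 1000; nonce++ {
		if t := drawWithNonce(past, nonce); isWinning(t) {
			fmt.Printf("won at nonce %d\n", nonce)
			return
		}
	}
	fmt.Println("no win this round")
}
```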

Minimum Miner Size

In order to secure Storage Power Consensus, the system defines a minimum miner size required to participate in consensus.

Specifically, a miner must have at least MIN_MINER_SIZE_STOR of power (i.e. storage power currently used in storage deals) in order to participate in leader election. If no miner has MIN_MINER_SIZE_STOR or more power, then miners with at least as much power as the smallest miner in the top MIN_MINER_SIZE_TARG miners (sorted by storage power) are able to participate in leader election. In plain English, taking MIN_MINER_SIZE_TARG = 3 for instance, this means that miners with at least as much power as the 3rd largest miner are eligible to participate in consensus.

Miners smaller than this cannot mine blocks or earn block rewards in the network. Their power is still counted in the total network (raw or claimed) storage power, but it does not count as votes in leader election. However, it is important to note that such miners can still have their power faulted and be penalized accordingly.

Accordingly, the genesis block must include miners, potentially with just CommittedCapacity sectors, to bootstrap the network.

The MIN_MINER_SIZE_TARG condition will not be used in a network in which any miner has more than MIN_MINER_SIZE_STOR power. It is nonetheless defined to ensure liveness in small networks (e.g. close to genesis or after large power drops).

We currently set:

  • MIN_MINER_SIZE_STOR = 100 * (1 << 40) Bytes (100 TiB)
  • MIN_MINER_SIZE_TARG = 3
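With these parameters, the eligibility rule can be sketched as follows. This is an illustrative, self-contained mirror of the rule described above, not spec code: `meetsConsensusMinimum` and the example power table are assumptions for demonstration.

```go
package main

import (
	"fmt"
	"sort"
)

const (
	MinMinerSizeStor = uint64(100 << 40) // 100 TiB
	MinMinerSizeTarg = 3
)

// meetsConsensusMinimum reports whether a miner may participate in leader
// election: either it has MIN_MINER_SIZE_STOR power, or, when no miner does,
// it is at least as large as the MIN_MINER_SIZE_TARG-th largest miner.
func meetsConsensusMinimum(minerPower uint64, allPowers []uint64) bool {
	if minerPower >= MinMinerSizeStor {
		return true
	}
	for _, p := range allPowers {
		if p >= MinMinerSizeStor {
			return false // someone meets the absolute bound; smaller miners are out
		}
	}
	if len(allPowers) <= MinMinerSizeTarg {
		return true
	}
	// find the size of the MIN_MINER_SIZE_TARG-th largest miner
	sorted := append([]uint64(nil), allPowers...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] > sorted[j] })
	return minerPower >= sorted[MinMinerSizeTarg-1]
}

func main() {
	// Early network: nobody has reached 100 TiB yet.
	powers := []uint64{40 << 40, 30 << 40, 20 << 40, 5 << 40}
	fmt.Println(meetsConsensusMinimum(20<<40, powers)) // true: matches 3rd largest
	fmt.Println(meetsConsensusMinimum(5<<40, powers))  // false: below 3rd largest
}
```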
Network recovery after halting

Placeholder where we will define a means of rebooting network liveness after it halts catastrophically (i.e. empty power table).

Storage Power Actor

StoragePowerActorState implementation
package storage_power

import (
	"sort"

	addr "github.com/filecoin-project/go-address"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	crypto "github.com/filecoin-project/specs-actors/actors/crypto"
	indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
	autil "github.com/filecoin-project/specs-actors/actors/util"
	cid "github.com/ipfs/go-cid"
)

// TODO: HAMT
type PowerTableHAMT map[addr.Address]abi.StoragePower // TODO: convert address to ActorID

// TODO: HAMT
type MinerEventsHAMT map[abi.ChainEpoch]autil.MinerEventSetHAMT

type StoragePowerActorState struct {
	TotalNetworkPower abi.StoragePower

	PowerTable  PowerTableHAMT
	EscrowTable autil.BalanceTableHAMT

	// Metadata cached for efficient processing of sector/challenge events.
	CachedDeferredCronEvents MinerEventsHAMT
	PoStDetectedFaultMiners  autil.MinerSetHAMT
	ClaimedPower             PowerTableHAMT
	NominalPower             PowerTableHAMT
	NumMinersMeetingMinPower int
}

func (st *StoragePowerActorState) CID() cid.Cid {
	panic("TODO")
}

func (st *StoragePowerActorState) _minerNominalPowerMeetsConsensusMinimum(minerPower abi.StoragePower) bool {

	// if miner is larger than min power requirement, we're set
	if minerPower >= builtin.MIN_MINER_SIZE_STOR {
		return true
	}

	// otherwise, if another miner meets min power requirement, return false
	if st.NumMinersMeetingMinPower > 0 {
		return false
	}

	// else if none do, check whether in MIN_MINER_SIZE_TARG miners
	if len(st.PowerTable) <= builtin.MIN_MINER_SIZE_TARG {
		// miner should pass
		return true
	}

	// get size of MIN_MINER_SIZE_TARGth largest miner
	minerSizes := make([]abi.StoragePower, 0, len(st.PowerTable))
	for _, v := range st.PowerTable {
		minerSizes = append(minerSizes, v)
	}
	// sort sizes in descending order of power
	sort.Slice(minerSizes, func(i, j int) bool { return minerSizes[i] > minerSizes[j] })
	return minerPower >= minerSizes[builtin.MIN_MINER_SIZE_TARG-1]
}

func (st *StoragePowerActorState) _slashPledgeCollateral(
	minerAddr addr.Address, slashAmountRequested abi.TokenAmount) abi.TokenAmount {

	Assert(slashAmountRequested >= abi.TokenAmount(0))

	newTable, amountSlashed, ok := autil.BalanceTable_WithSubtractPreservingNonnegative(
		st.EscrowTable, minerAddr, slashAmountRequested)
	Assert(ok)
	st.EscrowTable = newTable

	TODO()
	// Decide whether we can take any additional action if there is not enough
	// pledge collateral to be slashed.

	return amountSlashed
}

func addrInArray(a addr.Address, list []addr.Address) bool {
	for _, b := range list {
		if b == a {
			return true
		}
	}
	return false
}

// _selectMinersToSurprise implements the PoSt-Surprise sampling algorithm
func (st *StoragePowerActorState) _selectMinersToSurprise(challengeCount int, randomness abi.Randomness) []addr.Address {
	// this won't quite work -- st.PowerTable is a HAMT keyed by actor address and doesn't
	// support enumeration by int index. maybe we need that as an interface too,
	// or something similar to an iterator (or iterator over the keys)
	// or even a seeded random call directly in the HAMT: myhamt.GetRandomElement(seed []byte, idx int) using the ticket as a seed

	ptSize := len(st.PowerTable)
	allMiners := make([]addr.Address, len(st.PowerTable))
	index := 0

	for address := range st.PowerTable {
		allMiners[index] = address
		index++
	}

	selectedMiners := make([]addr.Address, 0)
	nonce := challengeCount
	for chall := 0; chall < challengeCount; chall++ {
		minerIndex := crypto.RandomInt(randomness, chall, ptSize)
		potentialChallengee := allMiners[minerIndex]
		// skip dups, drawing with a fresh nonce on each retry (re-drawing with
		// the same inputs would return the same index and loop forever)
		for addrInArray(potentialChallengee, selectedMiners) {
			minerIndex = crypto.RandomInt(randomness, nonce, ptSize)
			nonce++
			potentialChallengee = allMiners[minerIndex]
		}
		selectedMiners = append(selectedMiners, potentialChallengee)
	}

	return selectedMiners
}

func (st *StoragePowerActorState) _getPowerTotalForMiner(minerAddr addr.Address) (
	power abi.StoragePower, ok bool) {

	minerPower, found := st.PowerTable[minerAddr]
	if !found {
		return abi.StoragePower(0), found
	}

	return minerPower, true
}

func (st *StoragePowerActorState) _getCurrPledgeForMiner(minerAddr addr.Address) (currPledge abi.TokenAmount, ok bool) {
	return autil.BalanceTable_GetEntry(st.EscrowTable, minerAddr)
}

func (st *StoragePowerActorState) _addClaimedPowerForSector(minerAddr addr.Address, storageWeightDesc SectorStorageWeightDesc) {
	// Note: The following computation does not use any of the dynamic information from CurrIndices();
	// it depends only on storageWeightDesc. This means that the power of a given storageWeightDesc
	// does not vary over time, so we can avoid continually updating it for each sector every epoch.
	//
	// The function is located in the indices module temporarily, until we find a better place for
	// global parameterization functions.
	sectorPower := indices.ConsensusPowerForStorageWeight(storageWeightDesc)

	currentPower, ok := st.ClaimedPower[minerAddr]
	Assert(ok)
	st._setClaimedPowerEntryInternal(minerAddr, currentPower+sectorPower)
	st._updatePowerEntriesFromClaimedPower(minerAddr)
}

func (st *StoragePowerActorState) _deductClaimedPowerForSectorAssert(minerAddr addr.Address, storageWeightDesc SectorStorageWeightDesc) {
	// Note: The following computation does not use any of the dynamic information from CurrIndices();
	// it depends only on storageWeightDesc. This means that the power of a given storageWeightDesc
	// does not vary over time, so we can avoid continually updating it for each sector every epoch.
	//
	// The function is located in the indices module temporarily, until we find a better place for
	// global parameterization functions.
	sectorPower := indices.ConsensusPowerForStorageWeight(storageWeightDesc)

	currentPower, ok := st.ClaimedPower[minerAddr]
	Assert(ok)
	st._setClaimedPowerEntryInternal(minerAddr, currentPower-sectorPower)
	st._updatePowerEntriesFromClaimedPower(minerAddr)
}

func (st *StoragePowerActorState) _updatePowerEntriesFromClaimedPower(minerAddr addr.Address) {
	claimedPower, ok := st.ClaimedPower[minerAddr]
	Assert(ok)

	// Compute nominal power: i.e., the power we infer the miner to have (based on the network's
	// PoSt queries), which may not be the same as the claimed power.
	// Currently, the only reason for these to differ is if the miner is in DetectedFault state
	// from a SurprisePoSt challenge.
	nominalPower := claimedPower
	if st.PoStDetectedFaultMiners[minerAddr] {
		nominalPower = 0
	}
	st._setNominalPowerEntryInternal(minerAddr, nominalPower)

	// Compute actual (consensus) power, i.e., votes in leader election.
	power := nominalPower
	if !st._minerNominalPowerMeetsConsensusMinimum(nominalPower) {
		power = 0
	}

	TODO() // TODO: Decide effect of undercollateralization on (consensus) power.

	st._setPowerEntryInternal(minerAddr, power)
}

func (st *StoragePowerActorState) _setClaimedPowerEntryInternal(minerAddr addr.Address, updatedMinerClaimedPower abi.StoragePower) {
	Assert(updatedMinerClaimedPower >= 0)
	st.ClaimedPower[minerAddr] = updatedMinerClaimedPower
}

func (st *StoragePowerActorState) _setNominalPowerEntryInternal(minerAddr addr.Address, updatedMinerNominalPower abi.StoragePower) {
	Assert(updatedMinerNominalPower >= 0)
	prevMinerNominalPower, ok := st.NominalPower[minerAddr]
	Assert(ok)
	st.NominalPower[minerAddr] = updatedMinerNominalPower

	consensusMinPower := indices.StoragePower_ConsensusMinMinerPower()
	if updatedMinerNominalPower >= consensusMinPower && prevMinerNominalPower < consensusMinPower {
		st.NumMinersMeetingMinPower += 1
	} else if updatedMinerNominalPower < consensusMinPower && prevMinerNominalPower >= consensusMinPower {
		st.NumMinersMeetingMinPower -= 1
	}
}

func (st *StoragePowerActorState) _setPowerEntryInternal(minerAddr addr.Address, updatedMinerPower abi.StoragePower) {
	Assert(updatedMinerPower >= 0)
	prevMinerPower, ok := st.PowerTable[minerAddr]
	Assert(ok)
	st.PowerTable[minerAddr] = updatedMinerPower
	st.TotalNetworkPower += (updatedMinerPower - prevMinerPower)
}

func (st *StoragePowerActorState) _getPledgeSlashForConsensusFault(currPledge abi.TokenAmount, faultType ConsensusFaultType) abi.TokenAmount {
	// default is to slash all pledge collateral for all consensus fault types
	TODO()
	switch faultType {
	case DoubleForkMiningFault:
		return currPledge
	case ParentGrindingFault:
		return currPledge
	case TimeOffsetMiningFault:
		return currPledge
	default:
		panic("Unsupported case for pledge collateral consensus fault slashing")
	}
}

func _getConsensusFaultSlasherReward(elapsedEpoch abi.ChainEpoch, collateralToSlash abi.TokenAmount) abi.TokenAmount {
	TODO()
	// BigInt Operation
	// var growthRate = builtin.SLASHER_SHARE_GROWTH_RATE_NUM / builtin.SLASHER_SHARE_GROWTH_RATE_DENOM
	// var multiplier = growthRate^elapsedEpoch
	// var slasherProportion = min(INITIAL_SLASHER_SHARE * multiplier, 1.0)
	// return collateralToSlash * slasherProportion
	return abi.TokenAmount(0)
}

func PowerTableHAMT_Empty() PowerTableHAMT {
	IMPL_FINISH()
	panic("")
}

func MinerEventsHAMT_Empty() MinerEventsHAMT {
	IMPL_FINISH()
	panic("")
}
StoragePowerActor implementation
package storage_power

import (
	"math"

	addr "github.com/filecoin-project/go-address"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	crypto "github.com/filecoin-project/specs-actors/actors/crypto"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
	indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
	serde "github.com/filecoin-project/specs-actors/actors/serde"
	autil "github.com/filecoin-project/specs-actors/actors/util"
	"github.com/ipfs/go-cid"
	peer "github.com/libp2p/go-libp2p-core/peer"
)

type ConsensusFaultType int

const (
	UncommittedPowerFault ConsensusFaultType = 0
	DoubleForkMiningFault ConsensusFaultType = 1
	ParentGrindingFault   ConsensusFaultType = 2
	TimeOffsetMiningFault ConsensusFaultType = 3
)

type StoragePowerActor struct{}

func (a *StoragePowerActor) State(rt Runtime) (vmr.ActorStateHandle, StoragePowerActorState) {
	h := rt.AcquireState()
	stateCID := cid.Cid(h.Take())
	var state StoragePowerActorState
	if !rt.IpldGet(stateCID, &state) {
		rt.AbortAPI("state not found")
	}
	return h, state
}

////////////////////////////////////////////////////////////////////////////////
// Actor methods
////////////////////////////////////////////////////////////////////////////////

func (a *StoragePowerActor) AddBalance(rt Runtime, minerAddr addr.Address) {
	RT_MinerEntry_ValidateCaller_DetermineFundsLocation(rt, minerAddr, vmr.MinerEntrySpec_MinerOnly)

	msgValue := rt.ValueReceived()

	h, st := a.State(rt)
	newTable, ok := autil.BalanceTable_WithAdd(st.EscrowTable, minerAddr, msgValue)
	if !ok {
		rt.AbortStateMsg("Escrow operation failed")
	}
	st.EscrowTable = newTable
	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActor) WithdrawBalance(rt Runtime, minerAddr addr.Address, amountRequested abi.TokenAmount) {
	if amountRequested < 0 {
		rt.AbortArgMsg("Amount to withdraw must be nonnegative")
	}

	recipientAddr := RT_MinerEntry_ValidateCaller_DetermineFundsLocation(rt, minerAddr, vmr.MinerEntrySpec_MinerOnly)

	minBalanceMaintainRequired := a._rtGetPledgeCollateralReqForMinerOrAbort(rt, minerAddr)

	h, st := a.State(rt)
	newTable, amountExtracted, ok := autil.BalanceTable_WithExtractPartial(
		st.EscrowTable, minerAddr, amountRequested, minBalanceMaintainRequired)
	if !ok {
		rt.AbortStateMsg("Escrow operation failed")
	}
	st.EscrowTable = newTable
	UpdateRelease(rt, h, st)

	rt.SendFunds(recipientAddr, amountExtracted)
}

func (a *StoragePowerActor) CreateMiner(rt Runtime, workerAddr addr.Address, sectorSize abi.SectorSize, peerId peer.ID) addr.Address {
	vmr.RT_ValidateImmediateCallerIsSignable(rt)
	ownerAddr := rt.ImmediateCaller()

	newMinerAddr, err := addr.NewFromBytes(
		rt.Send(
			builtin.InitActorAddr,
			builtin.Method_InitActor_Exec,
			serde.MustSerializeParams(
				builtin.StorageMinerActorCodeID,
				ownerAddr,
				workerAddr,
				sectorSize,
				peerId,
			),
			abi.TokenAmount(0),
		).ReturnValue,
	)
	autil.Assert(err == nil)

	h, st := a.State(rt)
	newTable, ok := autil.BalanceTable_WithNewAddressEntry(st.EscrowTable, newMinerAddr, rt.ValueReceived())
	Assert(ok)
	st.EscrowTable = newTable
	st.PowerTable[newMinerAddr] = abi.StoragePower(0)
	st.ClaimedPower[newMinerAddr] = abi.StoragePower(0)
	st.NominalPower[newMinerAddr] = abi.StoragePower(0)
	UpdateRelease(rt, h, st)

	return newMinerAddr
}

func (a *StoragePowerActor) DeleteMiner(rt Runtime, minerAddr addr.Address) {
	h, st := a.State(rt)

	minerPledgeBalance, ok := autil.BalanceTable_GetEntry(st.EscrowTable, minerAddr)
	if !ok {
		rt.AbortArgMsg("Miner address not found")
	}

	if minerPledgeBalance > abi.TokenAmount(0) {
		rt.AbortStateMsg("Deletion requested for miner with pledge balance still remaining")
	}

	minerPower, ok := st.PowerTable[minerAddr]
	Assert(ok)
	if minerPower > 0 {
		rt.AbortStateMsg("Deletion requested for miner with power still remaining")
	}

	Release(rt, h, st)

	ownerAddr, workerAddr := vmr.RT_GetMinerAccountsAssert(rt, minerAddr)
	rt.ValidateImmediateCallerInSet([]addr.Address{ownerAddr, workerAddr})

	a._rtDeleteMinerActor(rt, minerAddr)
}

func (a *StoragePowerActor) OnSectorProveCommit(rt Runtime, storageWeightDesc SectorStorageWeightDesc) {
	rt.ValidateImmediateCallerAcceptAnyOfType(builtin.StorageMinerActorCodeID)
	a._rtAddPowerForSector(rt, rt.ImmediateCaller(), storageWeightDesc)
}

func (a *StoragePowerActor) OnSectorTerminate(
	rt Runtime, storageWeightDesc SectorStorageWeightDesc, terminationType SectorTerminationType) {

	rt.ValidateImmediateCallerAcceptAnyOfType(builtin.StorageMinerActorCodeID)
	minerAddr := rt.ImmediateCaller()
	a._rtDeductClaimedPowerForSectorAssert(rt, minerAddr, storageWeightDesc)

	if terminationType != SectorTerminationType_NormalExpiration {
		cidx := rt.CurrIndices()
		amountToSlash := cidx.StoragePower_PledgeSlashForSectorTermination(storageWeightDesc, terminationType)
		a._rtSlashPledgeCollateral(rt, minerAddr, amountToSlash)
	}
}

func (a *StoragePowerActor) OnSectorTemporaryFaultEffectiveBegin(rt Runtime, storageWeightDesc SectorStorageWeightDesc) {
	rt.ValidateImmediateCallerAcceptAnyOfType(builtin.StorageMinerActorCodeID)
	a._rtDeductClaimedPowerForSectorAssert(rt, rt.ImmediateCaller(), storageWeightDesc)
}

func (a *StoragePowerActor) OnSectorTemporaryFaultEffectiveEnd(rt Runtime, storageWeightDesc SectorStorageWeightDesc) {
	rt.ValidateImmediateCallerAcceptAnyOfType(builtin.StorageMinerActorCodeID)
	a._rtAddPowerForSector(rt, rt.ImmediateCaller(), storageWeightDesc)
}

func (a *StoragePowerActor) OnSectorModifyWeightDesc(
	rt Runtime, storageWeightDescPrev SectorStorageWeightDesc, storageWeightDescNew SectorStorageWeightDesc) {

	rt.ValidateImmediateCallerAcceptAnyOfType(builtin.StorageMinerActorCodeID)
	a._rtDeductClaimedPowerForSectorAssert(rt, rt.ImmediateCaller(), storageWeightDescPrev)
	a._rtAddPowerForSector(rt, rt.ImmediateCaller(), storageWeightDescNew)
}

func (a *StoragePowerActor) OnMinerSurprisePoStSuccess(rt Runtime) {
	rt.ValidateImmediateCallerAcceptAnyOfType(builtin.StorageMinerActorCodeID)
	minerAddr := rt.ImmediateCaller()

	h, st := a.State(rt)
	delete(st.PoStDetectedFaultMiners, minerAddr)
	st._updatePowerEntriesFromClaimedPower(minerAddr)
	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActor) OnMinerSurprisePoStFailure(rt Runtime, numConsecutiveFailures int64) {
	rt.ValidateImmediateCallerAcceptAnyOfType(builtin.StorageMinerActorCodeID)
	minerAddr := rt.ImmediateCaller()

	h, st := a.State(rt)

	st.PoStDetectedFaultMiners[minerAddr] = true
	st._updatePowerEntriesFromClaimedPower(minerAddr)

	minerClaimedPower, ok := st.ClaimedPower[minerAddr]
	Assert(ok)

	UpdateRelease(rt, h, st)

	if numConsecutiveFailures > indices.StoragePower_SurprisePoStMaxConsecutiveFailures() {
		a._rtDeleteMinerActor(rt, minerAddr)
	} else {
		cidx := rt.CurrIndices()
		amountToSlash := cidx.StoragePower_PledgeSlashForSurprisePoStFailure(minerClaimedPower, numConsecutiveFailures)
		a._rtSlashPledgeCollateral(rt, minerAddr, amountToSlash)
	}
}

func (a *StoragePowerActor) OnMinerEnrollCronEvent(rt Runtime, eventEpoch abi.ChainEpoch, sectorNumbers []abi.SectorNumber) {
	rt.ValidateImmediateCallerAcceptAnyOfType(builtin.StorageMinerActorCodeID)
	minerAddr := rt.ImmediateCaller()
	minerEvent := autil.MinerEvent{
		MinerAddr: minerAddr,
		Sectors:   sectorNumbers,
	}

	h, st := a.State(rt)
	if _, found := st.CachedDeferredCronEvents[eventEpoch]; !found {
		st.CachedDeferredCronEvents[eventEpoch] = autil.MinerEventSetHAMT_Empty()
	}
	st.CachedDeferredCronEvents[eventEpoch] = append(st.CachedDeferredCronEvents[eventEpoch], minerEvent)
	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActor) ReportVerifiedConsensusFault(rt Runtime, slasheeAddr addr.Address, faultEpoch abi.ChainEpoch, faultType ConsensusFaultType) {
	TODO()
	panic("")
	// TODO: The semantics here are quite delicate:
	//
	// - (proof []block.Block) can't be validated in isolation; we must query the runtime to confirm
	//   that at least one of the blocks provided actually appeared in the current chain.
	// - We must prevent duplicate slashes on the same offense, taking into account that the blocks
	//   may appear in different orders.
	// - We must determine how to reward multiple reporters of the same fault within a single epoch.
	//
	// Deferring to followup after these security/mechanism design questions have been resolved.
	// Previous notes:
	//
	// validation checks to be done in runtime before calling this method
	// - there should be exactly two block headers in proof
	// - both blocks are mined by the same miner
	// - first block is of the same or lower block height as the second block
	//
	// Use EC's IsValidConsensusFault method to validate the proof

	// this method assumes that ConsensusFault has been checked in runtime
	slasherAddr := rt.ImmediateCaller()
	h, st := a.State(rt)

	claimedPower, powerOk := st.ClaimedPower[slasheeAddr]
	if !powerOk {
		rt.AbortArgMsg("spa.ReportConsensusFault: miner to slash has been slashed")
	}
	Assert(claimedPower > 0)

	currPledge, pledgeOk := st._getCurrPledgeForMiner(slasheeAddr)
	if !pledgeOk {
		rt.AbortArgMsg("spa.ReportConsensusFault: miner to slash has no pledge")
	}
	Assert(currPledge > 0)

	// elapsed epoch from the latter block which committed the fault
	elapsedEpoch := rt.CurrEpoch() - faultEpoch
	if elapsedEpoch <= 0 {
		rt.AbortArgMsg("spa.ReportConsensusFault: invalid block")
	}

	collateralToSlash := st._getPledgeSlashForConsensusFault(currPledge, faultType)
	slasherReward := _getConsensusFaultSlasherReward(elapsedEpoch, collateralToSlash)

	// request slasherReward to be deducted from EscrowTable
	amountToSlasher := st._slashPledgeCollateral(slasherAddr, slasherReward)
	Assert(slasherReward == amountToSlasher)

	UpdateRelease(rt, h, st)

	// reward slasher
	rt.SendFunds(slasherAddr, amountToSlasher)

	// burn the rest of pledge collateral
	// delete miner from power table
	a._rtDeleteMinerActor(rt, slasheeAddr)
}

// Called by Cron.
func (a *StoragePowerActor) OnEpochTickEnd(rt Runtime) {
	rt.ValidateImmediateCallerIs(builtin.CronActorAddr)

	a._rtInitiateNewSurprisePoStChallenges(rt)
	a._rtProcessDeferredCronEvents(rt)
}

func (a *StoragePowerActor) Constructor(rt Runtime) {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)
	h := rt.AcquireState()

	st := &StoragePowerActorState{
		TotalNetworkPower:        abi.StoragePower(0),
		PowerTable:               PowerTableHAMT_Empty(),
		EscrowTable:              autil.BalanceTableHAMT_Empty(),
		CachedDeferredCronEvents: MinerEventsHAMT_Empty(),
		PoStDetectedFaultMiners:  autil.MinerSetHAMT_Empty(),
		ClaimedPower:             PowerTableHAMT_Empty(),
		NominalPower:             PowerTableHAMT_Empty(),
		NumMinersMeetingMinPower: 0,
	}

	UpdateRelease(rt, h, *st)
}

////////////////////////////////////////////////////////////////////////////////
// Method utility functions
////////////////////////////////////////////////////////////////////////////////

func (a *StoragePowerActor) _rtAddPowerForSector(rt Runtime, minerAddr addr.Address, storageWeightDesc SectorStorageWeightDesc) {
	h, st := a.State(rt)
	st._addClaimedPowerForSector(minerAddr, storageWeightDesc)
	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActor) _rtDeductClaimedPowerForSectorAssert(rt Runtime, minerAddr addr.Address, storageWeightDesc SectorStorageWeightDesc) {
	h, st := a.State(rt)
	st._deductClaimedPowerForSectorAssert(minerAddr, storageWeightDesc)
	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActor) _rtInitiateNewSurprisePoStChallenges(rt Runtime) {
	provingPeriod := indices.StorageMining_SurprisePoStProvingPeriod()

	h, st := a.State(rt)

	// sample the actor addresses
	minerSelectionSeed := rt.GetRandomness(rt.CurrEpoch())
	randomness := crypto.DeriveRandWithEpoch(crypto.DomainSeparationTag_SurprisePoStSelectMiners, minerSelectionSeed, int(rt.CurrEpoch()))

	IMPL_FINISH() // BigInt arithmetic (not floating-point)
	challengeCount := math.Ceil(float64(len(st.PowerTable)) / float64(provingPeriod))
	surprisedMiners := st._selectMinersToSurprise(int(challengeCount), randomness)

	UpdateRelease(rt, h, st)

	for _, addr := range surprisedMiners {
		rt.Send(
			addr,
			builtin.Method_StorageMinerActor_OnSurprisePoStChallenge,
			nil,
			abi.TokenAmount(0))
	}
}

func (a *StoragePowerActor) _rtProcessDeferredCronEvents(rt Runtime) {
	epoch := rt.CurrEpoch()

	h, st := a.State(rt)
	minerEvents, found := st.CachedDeferredCronEvents[epoch]
	if !found {
		Release(rt, h, st)
		return
	}
	delete(st.CachedDeferredCronEvents, epoch)
	UpdateRelease(rt, h, st)

	minerEventsRetain := []autil.MinerEvent{}
	for _, minerEvent := range minerEvents {
		if _, found := st.PowerTable[minerEvent.MinerAddr]; found {
			minerEventsRetain = append(minerEventsRetain, minerEvent)
		}
	}

	for _, minerEvent := range minerEventsRetain {
		rt.Send(
			minerEvent.MinerAddr,
			builtin.Method_StorageMinerActor_OnDeferredCronEvent,
			serde.MustSerializeParams(
				minerEvent.Sectors,
			),
			abi.TokenAmount(0),
		)
	}
}

func (a *StoragePowerActor) _rtGetPledgeCollateralReqForMinerOrAbort(rt Runtime, minerAddr addr.Address) abi.TokenAmount {
	h, st := a.State(rt)
	minerNominalPower, found := st.NominalPower[minerAddr]
	if !found {
		rt.AbortArgMsg("Miner not found")
	}
	Release(rt, h, st)
	cidx := rt.CurrIndices()
	return cidx.PledgeCollateralReq(minerNominalPower)
}

func (a *StoragePowerActor) _rtSlashPledgeCollateral(rt Runtime, minerAddr addr.Address, amountToSlash abi.TokenAmount) {
	h, st := a.State(rt)
	amountSlashed := st._slashPledgeCollateral(minerAddr, amountToSlash)
	UpdateRelease(rt, h, st)

	rt.SendFunds(builtin.BurntFundsActorAddr, amountSlashed)
}

func (a *StoragePowerActor) _rtDeleteMinerActor(rt Runtime, minerAddr addr.Address) {
	h, st := a.State(rt)

	delete(st.PowerTable, minerAddr)
	delete(st.ClaimedPower, minerAddr)
	delete(st.NominalPower, minerAddr)
	delete(st.PoStDetectedFaultMiners, minerAddr)

	newTable, amountSlashed, ok := autil.BalanceTable_WithExtractAll(st.EscrowTable, minerAddr)
	Assert(ok)
	newTable, ok = autil.BalanceTable_WithDeletedAddressEntry(newTable, minerAddr)
	Assert(ok)
	st.EscrowTable = newTable

	UpdateRelease(rt, h, st)

	rt.Send(
		minerAddr,
		builtin.Method_StorageMinerActor_OnDeleteMiner,
		serde.MustSerializeParams(),
		abi.TokenAmount(0),
	)

	rt.SendFunds(builtin.BurntFundsActorAddr, amountSlashed)
}
The Power Table

The portion of blocks a given miner generates through leader election in EC (and so the block rewards they earn) is proportional to their Power Fraction over time. That is, a miner whose storage represents 1% of total storage on the network should mine 1% of blocks on expectation.

SPC provides a power table abstraction which tracks miner power (i.e. miner storage in relation to network storage) over time. The power table is updated for new sector commitments (incrementing miner power), for failed PoSts (decrementing miner power) or for other storage and consensus faults.

Sector ProveCommit is the first time power is proven to the network and hence power is first added upon successful sector ProveCommit. Power is also added when a sector’s TemporaryFault period has ended. Miners are expected to prove over all their sectors that contribute to their power.

Power is decremented when a sector expires, when a sector enters TemporaryFault, or when a miner terminates a sector early through Sector Termination. Miners can also extend the lifetime of a sector through ExtendSectorExpiration, thereby modifying its SectorStorageWeightDesc. This may or may not have an impact on power, but the machinery is in place to preserve the flexibility.

The Miner lifecycle in the power table should be roughly as follows:

  • MinerRegistration: A new miner with an associated worker public key and address is registered on the power table by the storage mining subsystem, along with their associated sector size (there is only one per worker).
  • UpdatePower: These power increments and decrements are called by various storage actors (and must thus be verified by every full node on the network). Specifically:
    • Power is incremented at SectorProveCommit
    • All Power of a particular miner is decremented immediately after a missed SurprisePoSt (DetectedFault).
    • A particular sector’s power is decremented when its TemporaryFault begins.
    • A particular sector’s power is added back when its TemporaryFault ends and miner is expected to prove over this sector.
    • A particular sector’s power is removed when the sector is terminated through sector expiration or miner invocation.

To summarize, only sectors in the Active state will command power. A Sector becomes Active when it is added upon ProveCommit. Power is immediately decremented when TemporaryFault begins on an Active sector or when the miner is in the Challenged or DetectedFault state. Power will be restored when TemporaryFault has ended and when the miner successfully responds to a SurprisePoSt challenge. A sector’s power is removed when it is terminated through either miner invocation or normal expiration.
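The increments and decrements above can be sketched as a simple claimed-power table. This is illustrative only: the actual actor stores these maps as HAMTs and derives power from a SectorStorageWeightDesc, whereas here a sector's power is taken to be just a number:

```go
package main

import "fmt"

// PowerTable is a minimal in-memory sketch of the claimed-power
// bookkeeping described above, keyed by miner address.
type PowerTable map[string]uint64

// AddSector is invoked on ProveCommit or when a TemporaryFault ends.
func (pt PowerTable) AddSector(miner string, sectorPower uint64) {
	pt[miner] += sectorPower
}

// DeductSector is invoked on expiration, termination, or when a
// TemporaryFault begins.
func (pt PowerTable) DeductSector(miner string, sectorPower uint64) {
	if pt[miner] < sectorPower {
		panic("claimed power underflow")
	}
	pt[miner] -= sectorPower
}

func main() {
	pt := PowerTable{}
	pt.AddSector("t01000", 32)    // first sector proven
	pt.AddSector("t01000", 32)    // second sector proven
	pt.DeductSector("t01000", 32) // one sector enters TemporaryFault
	fmt.Println(pt["t01000"])     // 32
}
```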

Pledge Collateral

Consensus in Filecoin is secured in part by economic incentives enforced by Pledge Collateral.

The pledge collateral amount is committed based on power pledged to the system (i.e. it is proportional to the number of sectors committed and the sector size for a miner). It is a system-wide parameter and is committed to the StoragePowerActor. Pledge collateral can be posted by the StorageMinerActor at any time by a miner, and its requirement depends on the miner’s power. Details around pledge collateral will be announced soon.

Pledge Collateral will be slashed when Consensus Faults are reported to the StoragePowerActor's ReportConsensusFault method, when a miner fails a SurprisePoSt (DetectedFault), or when a miner terminates a sector earlier than its duration.

Pledge Collateral is slashed for any fault affecting storage-power consensus; these include:

  • faults to expected consensus in particular (see Consensus Faults) which will be reported by a slasher to the StoragePowerActor in exchange for a reward.
  • faults affecting consensus power more generally, specifically uncommitted power faults (i.e. Storage Faults) which will be reported by the CronActor automatically or when a miner terminates a sector earlier than its promised duration.

Token

FIL Wallet

Payment Channels

Payment Channels are used in the Filecoin Retrieval Market to enable efficient off-chain payments and accounting between parties for what is expected to be a series of microtransactions, specifically those occurring as part of data retrieval in the retrieval market.

Note that the following provides a high-level overview of payment channels and an accompanying interface. The lotus implementation of vouchers and payment channels is also a good reference.

You can also read more about the Filecoin payment channel actor interface.

In short, the payment channel actor can be used to open long-lived, flexible payment channels between users. Each channel can be funded by adding to its balance. The goal of the payment channel actor is to enable a series of off-chain microtransactions to be reconciled on-chain at a later time with fewer messages. Accordingly, the expectation is that From will send To vouchers of successively greater Value and increasing Nonces. When they choose to, To can Update the channel to update the balance available ToSend to them in the channel, and can choose to Collect this balance at any time (incurring a gas cost). The channel is split into lanes created as part of updating the channel state with a payment voucher. Each lane has an associated nonce and amount of tokens it can be redeemed for. These lanes allow for a lot of accounting between parties to be done off-chain and reconciled via single updates to the payment channel, merging these lanes to arrive at a desired outcome.

Over the course of a transaction cycle, each party to the payment channel can send the other vouchers. The payment channel’s From account holder will send a signed voucher with a given nonce to the To account holder. The latter can use the voucher to redeem part of the lane’s value, merging other lanes into it as needed.

For instance if From sends To the following vouchers (voucher_val, voucher_nonce) for a lane with 100 to be redeemed: (10, 1), (20, 2), (30, 3), then To could choose to redeem (30, 3) bringing the lane’s value to 70 (100 - 30). They could not redeem (10, 1) or (20, 2) thereafter. They could however redeem (20, 2) for 20, and then (30, 3) for 10 (30 - 20) thereafter.
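The arithmetic in the example above can be sketched as a single lane whose vouchers carry strictly increasing nonces, with each redemption paying out only the difference over what was already redeemed. The Lane type and Redeem method are hypothetical simplifications; the actual actor tracks redemption per lane and supports merging lanes:

```go
package main

import (
	"errors"
	"fmt"
)

// Lane sketches per-lane voucher redemption as described above.
type Lane struct {
	Total    uint64 // funds allocated to the lane (100 in the example)
	Redeemed uint64 // cumulative value already redeemed
	Nonce    uint64 // nonce of the last redeemed voucher
}

// Redeem accepts a (value, nonce) voucher and pays out the increment over
// what has already been redeemed, rejecting stale or oversized vouchers.
func (l *Lane) Redeem(value, nonce uint64) (payout uint64, err error) {
	if nonce <= l.Nonce {
		return 0, errors.New("voucher nonce not greater than last redeemed")
	}
	if value > l.Total {
		return 0, errors.New("voucher exceeds lane balance")
	}
	if value < l.Redeemed {
		return 0, errors.New("voucher value below amount already redeemed")
	}
	payout = value - l.Redeemed
	l.Redeemed = value
	l.Nonce = nonce
	return payout, nil
}

func main() {
	l := &Lane{Total: 100}
	p1, _ := l.Redeem(20, 2)        // pays 20
	p2, _ := l.Redeem(30, 3)        // pays 30 - 20 = 10
	_, err := l.Redeem(10, 1)       // stale nonce: rejected
	fmt.Println(p1, p2, err != nil) // 20 10 true
}
```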

The multiple lanes enable two parties to use a single payment channel to adjudicate multiple independent sets of payments.

Vouchers are signed by the sender and authenticated using a Secret, PreImage pair provided by the paying party. If the PreImage is indeed a pre-image of the Secret when used as input to some given algorithm (typically a one-way function like a hash), the Voucher is valid. The Voucher itself contains the PreImage but not the Secret (communicated separately to the receiving party). This enables multi-hop payments since an intermediary cannot redeem a voucher on their own. Vouchers can have TimeLocks to prevent them from being used too early; likewise, a channel can have a MinCloseHeight to prevent the sender from closing it prematurely (e.g. before the recipient has collected funds). Vouchers can also be used to update the minimum height at which a channel will be closed.

Once their transactions have completed, either party can choose to Close the channel. The recipient can then Collect the ToPay amount from the channel, and From will be refunded the remaining balance in the channel.

So we have:

  • [off-chain] - Two parties agree to a series of transactions (for instance as part of file retrieval) with party A paying party B up to some total sum of Filecoin over time.
  • [on-chain] - The Payment Channel Actor is called by A to open a payment channel from A to B and a lane is opened to increase the balance of the channel, triggering a transaction between A and the payment channel actor. At any time, A can open new lanes to increase the total balance available in the channel (e.g. if A and B choose to do more transactions together).
  • [off-chain] - Throughout the transaction cycle (e.g. on every piece of data sent via a retrieval deal), party A sends a voucher to party B enabling B to redeem more payment from the payment lanes, and incentivizing B to continue providing a service (e.g. sending more data along).
  • [on-chain] - At regular intervals, B can Update the payment channel balance available ToSend with the vouchers received (past their timeLock), decreasing the remaining Value of the payment channel.
  • [on-chain] - At the end of the cycle, past the MinCloseHeight, A can choose to Close the payment channel.
  • [on-chain] - B can choose to Collect the amount ToSend triggering a payment between the payment channel actor and B.

Payment Channel Actor


Multisig - Wallet requiring multiple signatures

Multisig Actor

package multisig

import (
	addr "github.com/filecoin-project/go-address"
	actor "github.com/filecoin-project/specs-actors/actors"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
	autil "github.com/filecoin-project/specs-actors/actors/util"
	cid "github.com/ipfs/go-cid"
)

type InvocOutput = vmr.InvocOutput
type Runtime = vmr.Runtime

var AssertMsg = autil.AssertMsg
var IMPL_FINISH = autil.IMPL_FINISH
var IMPL_TODO = autil.IMPL_TODO
var TODO = autil.TODO

type TxnID int64

type MultiSigTransaction struct {
	Proposer   addr.Address
	Expiration abi.ChainEpoch

	To     addr.Address
	Method abi.MethodNum
	Params abi.MethodParams
	Value  abi.TokenAmount
}

func (txn *MultiSigTransaction) Equals(MultiSigTransaction) bool {
	IMPL_FINISH()
	panic("")
}

type MultiSigTransactionHAMT map[TxnID]MultiSigTransaction
type MultiSigApprovalSetHAMT map[TxnID]autil.ActorIDSetHAMT

func MultiSigTransactionHAMT_Empty() MultiSigTransactionHAMT {
	IMPL_FINISH()
	panic("")
}

func MultiSigApprovalSetHAMT_Empty() MultiSigApprovalSetHAMT {
	IMPL_FINISH()
	panic("")
}

type MultiSigActor struct{}

func (a *MultiSigActor) State(rt Runtime) (vmr.ActorStateHandle, MultiSigActorState) {
	h := rt.AcquireState()
	stateCID := cid.Cid(h.Take())
	var state MultiSigActorState
	if !rt.IpldGet(stateCID, &state) {
		rt.AbortAPI("state not found")
	}
	return h, state
}

func (a *MultiSigActor) Propose(rt vmr.Runtime, txn MultiSigTransaction) TxnID {
	vmr.RT_ValidateImmediateCallerIsSignable(rt)
	callerAddr := rt.ImmediateCaller()
	a._rtValidateAuthorizedPartyOrAbort(rt, callerAddr)

	h, st := a.State(rt)
	txnID := st.NextTxnID
	st.NextTxnID += 1
	st.PendingTxns[txnID] = txn
	st.PendingApprovals[txnID] = autil.ActorIDSetHAMT_Empty()
	UpdateRelease_MultiSig(rt, h, st)

	// Proposal implicitly includes approval of a transaction.
	a._rtApproveTransactionOrAbort(rt, callerAddr, txnID, txn)

	TODO() // Ensure stability across reorgs (consider having proposer supply ID?)
	return txnID
}

func (a *MultiSigActor) Approve(rt vmr.Runtime, txnID TxnID, txn MultiSigTransaction) {
	vmr.RT_ValidateImmediateCallerIsSignable(rt)
	callerAddr := rt.ImmediateCaller()
	a._rtValidateAuthorizedPartyOrAbort(rt, callerAddr)
	a._rtApproveTransactionOrAbort(rt, callerAddr, txnID, txn)
}

func (a *MultiSigActor) AddAuthorizedParty(rt vmr.Runtime, actorID abi.ActorID) {
	// Can only be called by the multisig wallet itself.
	rt.ValidateImmediateCallerIs(rt.CurrReceiver())

	h, st := a.State(rt)
	st.AuthorizedParties[actorID] = true
	UpdateRelease_MultiSig(rt, h, st)
}

func (a *MultiSigActor) RemoveAuthorizedParty(rt vmr.Runtime, actorID abi.ActorID) {
	// Can only be called by the multisig wallet itself.
	rt.ValidateImmediateCallerIs(rt.CurrReceiver())

	h, st := a.State(rt)

	if _, found := st.AuthorizedParties[actorID]; !found {
		rt.AbortStateMsg("Party not found")
	}

	delete(st.AuthorizedParties, actorID)

	if len(st.AuthorizedParties) < st.NumApprovalsThreshold {
		rt.AbortStateMsg("Cannot decrease authorized parties below threshold")
	}

	UpdateRelease_MultiSig(rt, h, st)
}

func (a *MultiSigActor) SwapAuthorizedParty(rt vmr.Runtime, oldActorID abi.ActorID, newActorID abi.ActorID) {
	// Can only be called by the multisig wallet itself.
	rt.ValidateImmediateCallerIs(rt.CurrReceiver())

	h, st := a.State(rt)

	if _, found := st.AuthorizedParties[oldActorID]; !found {
		rt.AbortStateMsg("Party not found")
	}

	if _, found := st.AuthorizedParties[newActorID]; found {
		rt.AbortStateMsg("Party already present")
	}

	delete(st.AuthorizedParties, oldActorID)
	st.AuthorizedParties[newActorID] = true

	UpdateRelease_MultiSig(rt, h, st)
}

func (a *MultiSigActor) ChangeNumApprovalsThreshold(rt vmr.Runtime, newThreshold int) {
	// Can only be called by the multisig wallet itself.
	rt.ValidateImmediateCallerIs(rt.CurrReceiver())

	h, st := a.State(rt)

	if newThreshold <= 0 || newThreshold > len(st.AuthorizedParties) {
		rt.AbortStateMsg("New threshold value not supported")
	}

	st.NumApprovalsThreshold = newThreshold

	UpdateRelease_MultiSig(rt, h, st)
}

func (a *MultiSigActor) Constructor(rt vmr.Runtime, authorizedParties autil.ActorIDSetHAMT, numApprovalsThreshold int) {

	rt.ValidateImmediateCallerIs(builtin.InitActorAddr)
	h := rt.AcquireState()

	st := MultiSigActorState{
		AuthorizedParties:     authorizedParties,
		NumApprovalsThreshold: numApprovalsThreshold,
		PendingTxns:           MultiSigTransactionHAMT_Empty(),
		PendingApprovals:      MultiSigApprovalSetHAMT_Empty(),
	}

	UpdateRelease_MultiSig(rt, h, st)
}

func (a *MultiSigActor) _rtApproveTransactionOrAbort(rt Runtime, callerAddr addr.Address, txnID TxnID, txn MultiSigTransaction) {

	h, st := a.State(rt)

	txnCheck, found := st.PendingTxns[txnID]
	if !found || !txnCheck.Equals(txn) {
		rt.AbortStateMsg("Requested transaction not found or does not match")
	}

	expirationExceeded := (rt.CurrEpoch() > txn.Expiration)
	if expirationExceeded {
		rt.AbortStateMsg("Transaction expiration exceeded")

		TODO()
		// Determine what to do about state accumulation over time.
		// Cannot rely on proposer to delete unexecuted transactions;
		// there is no incentive (in fact, this costs gas).
		// Could potentially amortize cost of cleanup via Cron.
	}

	AssertMsg(callerAddr.Protocol() == addr.ID, "caller address does not have ID")
	actorID, err := addr.IDFromAddress(callerAddr)
	autil.Assert(err == nil)

	st.PendingApprovals[txnID][abi.ActorID(actorID)] = true
	thresholdMet := (len(st.PendingApprovals[txnID]) == st.NumApprovalsThreshold)

	UpdateRelease_MultiSig(rt, h, st)

	if thresholdMet {
		if !st._hasAvailable(rt.CurrentBalance(), txn.Value, rt.CurrEpoch()) {
			rt.AbortArgMsg("insufficient funds unlocked")
		}

		// A sufficient number of approvals have arrived and sufficient funds have been unlocked: relay the message and delete from pending queue.
		rt.Send(
			txn.To,
			txn.Method,
			txn.Params,
			txn.Value,
		)
		a._rtDeletePendingTransaction(rt, txnID)
	}
}

func (a *MultiSigActor) _rtDeletePendingTransaction(rt Runtime, txnID TxnID) {
	h, st := a.State(rt)
	delete(st.PendingTxns, txnID)
	delete(st.PendingApprovals, txnID)
	UpdateRelease_MultiSig(rt, h, st)
}

func (a *MultiSigActor) _rtValidateAuthorizedPartyOrAbort(rt Runtime, address addr.Address) {
	AssertMsg(address.Protocol() == addr.ID, "caller address does not have ID")
	actorID, err := addr.IDFromAddress(address)
	autil.Assert(err == nil)

	h, st := a.State(rt)
	if _, found := st.AuthorizedParties[abi.ActorID(actorID)]; !found {
		rt.AbortArgMsg("Party not authorized")
	}
	Release_MultiSig(rt, h, st)
}

func Release_MultiSig(rt Runtime, h vmr.ActorStateHandle, st MultiSigActorState) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(&st))
	h.Release(checkCID)
}

func UpdateRelease_MultiSig(rt Runtime, h vmr.ActorStateHandle, st MultiSigActorState) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(&st))
	h.UpdateRelease(newCID)
}
package multisig

import (
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	autil "github.com/filecoin-project/specs-actors/actors/util"
	cid "github.com/ipfs/go-cid"
)

type MultiSigActorState struct {
	// Linear unlock
	InitialBalance abi.TokenAmount
	StartEpoch     abi.ChainEpoch
	UnlockDuration abi.ChainEpoch

	AuthorizedParties     autil.ActorIDSetHAMT
	NumApprovalsThreshold int
	NextTxnID             TxnID
	PendingTxns           MultiSigTransactionHAMT
	PendingApprovals      MultiSigApprovalSetHAMT
}

func (st *MultiSigActorState) AmountLocked(elapsedEpoch abi.ChainEpoch) abi.TokenAmount {
	if elapsedEpoch >= st.UnlockDuration {
		return abi.TokenAmount(0)
	}

	TODO() // BigInt
	lockedProportion := (st.UnlockDuration - elapsedEpoch) / st.UnlockDuration
	return abi.TokenAmount(uint64(st.InitialBalance) * uint64(lockedProportion))
}

// return true if MultiSig maintains required locked balance after spending the amount
func (st *MultiSigActorState) _hasAvailable(currBalance abi.TokenAmount, amountToSpend abi.TokenAmount, currEpoch abi.ChainEpoch) bool {
	if amountToSpend < 0 || currBalance < amountToSpend {
		return false
	}

	if currBalance-amountToSpend < st.AmountLocked(currEpoch-st.StartEpoch) {
		return false
	}

	return true
}

func (st *MultiSigActorState) CID() cid.Cid {
	panic("TODO")
}
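The TODO() // BigInt marker in AmountLocked above flags that the integer division truncates the locked proportion. A sketch of the same linear-unlock computation done in big.Int, multiplying before dividing so precision is not lost (the function signature here is simplified to int64 inputs and is not the normative implementation):

```go
package main

import (
	"fmt"
	"math/big"
)

// amountLocked mirrors MultiSigActorState.AmountLocked above, computing
// locked = initialBalance * (unlockDuration - elapsed) / unlockDuration
// with big.Int arithmetic as the TODO requests.
func amountLocked(initialBalance, unlockDuration, elapsed int64) *big.Int {
	if elapsed >= unlockDuration {
		return big.NewInt(0)
	}
	locked := big.NewInt(initialBalance)
	locked.Mul(locked, big.NewInt(unlockDuration-elapsed))
	return locked.Div(locked, big.NewInt(unlockDuration))
}

func main() {
	// 1000 tokens vesting linearly over 100 epochs: half locked at epoch 50.
	fmt.Println(amountLocked(1000, 100, 50))  // 500
	fmt.Println(amountLocked(1000, 100, 100)) // 0
}
```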

Storage Mining System - proving storage for producing blocks

The Storage Mining System is the part of the Filecoin Protocol that deals with storing Client’s data, producing proof artifacts that demonstrate correct storage behavior, and managing the work involved.

Storing data and producing proofs is a complex, highly optimizable process, with lots of tunable choices. Miners should explore the design space to arrive at something that (a) satisfies protocol and network-wide constraints, (b) satisfies clients’ requests and expectations (as expressed in Deals), and (c) gives them the most cost-effective operation. This part of the Filecoin Spec primarily describes in detail what MUST and SHOULD happen here, and leaves ample room for various optimizations for implementers, miners, and users to make. In some parts, we describe algorithms that could be replaced by other, more optimized versions, but in those cases it is important that the protocol constraints are satisfied. The protocol constraints are spelled out in clear detail (an unclear, unmentioned constraint is a “spec error”). It is up to implementers who deviate from the algorithms presented here to ensure their modifications satisfy those constraints, especially those relating to protocol security.

Storage Miner

Filecoin Storage Mining Subsystem

The Filecoin Storage Mining Subsystem ensures a storage miner can effectively commit storage to the Filecoin protocol in order to both:

  • participate in the Filecoin Storage Market by taking on client data and participating in storage deals.
  • participate in Filecoin Storage Power Consensus, verifying and generating blocks to grow the Filecoin blockchain and earning block rewards and fees for doing so.

The above involves a number of steps to bring storage online and maintain it, such as:

  • Committing new storage (see Sealing and PoRep)
  • Continuously proving storage (see Election PoSt)
  • Declaring storage faults and recovering from them.

Sector Types

There are two types of sectors, Regular Sectors with storage deals in them and Committed Capacity (CC) Sectors with no deals. All sectors require an expiration epoch that is declared upon PreCommit and sectors are assigned a StartEpoch at ProveCommit. Start and Expiration epoch collectively define the lifetime of a Sector. Length and size of active deals in a sector’s lifetime determine the DealWeight of the sector. SectorSize, Duration, and DealWeight statically determine the power assigned to a sector that will remain constant throughout its lifetime. More details on cost and reward for different sector types will be announced soon.
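Purely as an illustration of the statement that the length and size of active deals determine a sector's DealWeight, the sketch below takes DealWeight as the sum of each deal's size times its duration. The Deal type and this formula are assumptions for exposition; the spec does not define the weight formula here:

```go
package main

import "fmt"

// Deal is a hypothetical stand-in for a storage deal's contribution to
// sector weight: its size and the epochs it is active within the sector.
type Deal struct {
	Size     uint64 // bytes
	Duration uint64 // epochs
}

// dealWeight sums size*duration over all deals; a Committed Capacity
// sector (no deals) therefore has zero DealWeight.
func dealWeight(deals []Deal) uint64 {
	var w uint64
	for _, d := range deals {
		w += d.Size * d.Duration
	}
	return w
}

func main() {
	cc := dealWeight(nil) // Committed Capacity sector
	reg := dealWeight([]Deal{{Size: 1 << 30, Duration: 100}})
	fmt.Println(cc, reg > cc) // 0 true
}
```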

Sector States

When managing their storage sectors as part of Filecoin mining, storage providers will account for where in the Storage Mining Cycle their sectors are. For instance, has a sector been committed? Does it need a new PoSt? Most of these operations happen as part of cycles of chain epochs called Proving Periods each of which yield high confidence that every miner in the chain has proven their power (see Election PoSt).

There are three states that an individual sector can be in:

  • PreCommit when a sector has been added through a PreCommit message.
  • Active when a sector has been proven through a ProveCommit message and when a sector’s TemporaryFault period has ended.
  • TemporaryFault when a miner declares fault on a particular sector.

Sectors enter Active from PreCommit through a ProveCommit message that serves as the first proof for the sector. PreCommit requires a PreCommit deposit which will be returned upon successful and timely ProveCommit. However, if there is no matching ProveCommit for a particular PreCommit message, the deposit will be burned at PreCommit expiration.

A particular sector enters TemporaryFault from Active through DeclareTemporaryFault with a specified period. Power associated with the sector is lost immediately and the miner needs to pay a TemporaryFaultFee determined by the power suspended and the duration of suspension. At the end of the declared duration, faulted sectors automatically regain power and enter Active. Miners are expected to prove over this recovered sector. Failure to do so may result in a failed ElectionPoSt, or in DetectedFault from a failed SurprisePoSt.
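The transitions above can be sketched as a small state machine. The state and event names here are illustrative shorthand; the normative machine is the sector state diagram referenced below:

```go
package main

import (
	"errors"
	"fmt"
)

// SectorState sketches the sector states described above.
type SectorState int

const (
	PreCommit SectorState = iota
	Active
	TemporaryFault
	Terminated
)

// transition applies one of the events described in this section.
func transition(s SectorState, event string) (SectorState, error) {
	switch {
	case s == PreCommit && event == "ProveCommit":
		return Active, nil // first proof; power is added
	case s == Active && event == "DeclareTemporaryFault":
		return TemporaryFault, nil // power deducted immediately
	case s == TemporaryFault && event == "FaultPeriodEnd":
		return Active, nil // power restored; miner must prove again
	case s == Active && (event == "Expire" || event == "Terminate"):
		return Terminated, nil // power removed
	}
	return s, errors.New("invalid transition")
}

func main() {
	s := PreCommit
	s, _ = transition(s, "ProveCommit")
	s, _ = transition(s, "DeclareTemporaryFault")
	s, _ = transition(s, "FaultPeriodEnd")
	fmt.Println(s == Active) // true
}
```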

Sector State Machine (open in new tab)
Sector State Machine Legend (open in new tab)
Miner PoSt State

MinerPoStState keeps track of a miner’s state in responding to PoSt and there are three states in MinerPoStState:

  • OK miner has passed either a ElectionPoSt or a SurprisePoSt sufficiently recently.
  • Challenged miner has been selected to prove its storage via SurprisePoSt and is currently in the Challenged state
  • DetectedFault miner has failed at least one SurprisePoSt, indicating that all claimed storage may not be proven. Miner has lost power on its sector and recovery can only proceed by a successful response to a subsequent SurprisePoSt challenge, up until the limit of number of consecutive failures.

DetectedFault is a miner-wide PoSt state in which all sectors are considered inactive. All power is lost immediately and pledge collateral is slashed. If a miner remains in DetectedFault for more than MaxConsecutiveFailures, all sectors will be terminated, and both the power and market actors will be notified to carry out slashing and the return of client deal collateral.

ProvingSet consists of sectors that miners are required to generate proofs against and is what counts towards miners’ power. In other words, the ProvingSet is the set of all Active sectors for a particular miner. The ProvingSet is only relevant when the miner is in the OK state of its MinerPoStState. When a miner is in the Challenged state, ChallengedSectors specifies the list of sectors to be challenged, which is the ProvingSet as it stood before the challenge was issued, thus allowing more sectors to be added while it is in the Challenged state.

Miners can call ProveCommit to commit a sector and add to their Claimed Power. However, a miner’s Nominal Power and Consensus Power will be zero when it is in either Challenged or DetectedFault state. Note also that miners can call DeclareTemporaryFault when they are in Challenged or DetectedFault state. This does not change the list of sectors that are currently challenged which is a snapshot of all active sectors (ProvingSet) at the time of challenge.

Miner PoSt State Machine (open in new tab)
Miner PoSt State Machine Legend (open in new tab)

Storage Mining Cycle

Block miners should constantly be performing Proofs of SpaceTime using Election PoSt, and checking the outputted partial tickets to run Secret Leader Election and determine whether they can propose a block at each epoch. Epochs are currently set to take around X seconds, in order to account for election PoSt and network propagation around the world. The details of the mining cycle are defined here.

Active Miner Mining Cycle

In order to mine blocks on the Filecoin blockchain a miner must be running Block Validation at all times, keeping track of recent blocks received and the heaviest current chain (based on Expected Consensus).

With every new tipset, the miner can use their committed power to attempt to craft a new block.

For additional details around how consensus works in Filecoin, see Expected Consensus. For the purposes of this section, there is a consensus protocol (Expected Consensus) that guarantees a fair process for determining what blocks have been generated in a round, whether a miner is eligible to mine a block itself, and other rules pertaining to the production of some artifacts required of valid blocks (e.g. Tickets, ElectionPoSt).

Continuous Mining Cycle

After the chain has caught up to the current head using ChainSync - synchronizing the Blockchain, the mining process is as follows:

  • The node continuously receives and transmits messages using the Message Syncer
  • At the same time it continuously receives blocks
    • Each block has an associated timestamp and epoch (quantized time window in which it was crafted)
    • Blocks are validated as they come in during an epoch (provided it is their epoch, see validation)
  • At the end of a given epoch, the miner should take all the valid blocks received for this epoch and assemble them into tipsets according to tipset validation rules
  • The miner then attempts to mine atop the heaviest tipset (as calculated with EC's weight function) using its smallest ticket to run leader election
    • The miner runs an Election PoSt on their sectors in order to generate partial tickets
    • The miner uses these tickets in order to run Secret Leader Election
      • if successful, the miner generates a new randomness ticket for inclusion in the block
      • the miner then assembles a new block (see “block creation” below) and broadcasts it

This process is repeated until either the Election PoSt process yields a winning ticket (in EC) and a block is published, or a new valid block comes in from the network.

At any height H, there are three possible situations:

  • The miner is eligible to mine a block: they produce their block and propagate it. They then resume mining at the next height H+1.
  • The miner is not eligible to mine a block but has received blocks: they form a Tipset with them and resume mining at the next height H+1.
  • The miner is not eligible to mine a block and has received no blocks: prompted by their clock they run leader election again, incrementing the epoch number.

Anytime a miner receives new valid blocks, it should evaluate what is the heaviest Tipset it knows about and mine atop it.
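The three situations at height H can be sketched as a simple decision function (an illustrative sketch only; the real node drives this from its clock, ChainSync state, and Expected Consensus):

```go
package main

import "fmt"

// nextStep models the per-height decision described above: a miner either
// wins the election, forms a tipset from received blocks, or re-runs leader
// election with an incremented epoch.
func nextStep(wonElection bool, receivedBlocks int) string {
	switch {
	case wonElection:
		return "produce and propagate block, mine at H+1"
	case receivedBlocks > 0:
		return "form tipset from received blocks, mine at H+1"
	default:
		return "run leader election again, increment epoch"
	}
}

func main() {
	fmt.Println(nextStep(true, 3))
	fmt.Println(nextStep(false, 3))
	fmt.Println(nextStep(false, 0))
}
```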

Timing
Mining Cycle Timing

The mining cycle relies on receiving and producing blocks concurrently. The sequence of these events in time is given by the timing diagram above. The upper row represents the conceptual consumption channel consisting of successive receiving periods Rx during which nodes validate and select blocks as chain heads. The lower row is the conceptual production channel made up of a period of mining M followed by a period of transmission Tx. The lengths of the periods are not to scale.

Blocks are received and validated during Rx up to the end of the epoch. At the beginning of the next epoch, the heaviest tipset is computed from the blocks received during Rx, used as the head to build on during M. If mining is successful a block is transmitted during Tx. The epoch boundaries are as shown.

In a fully synchronized network most of period Rx does not see any network traffic, only the period lined up with Tx. In practice we expect blocks from previous epochs to propagate during the remainder of Rx. We also expect differences in operator mining time to cause additional variance.

This sequence of events applies only when the node is in the CHAIN_FOLLOW syncing mode. Nodes in other syncing modes do not mine blocks.

Full Miner Lifecycle
Step 0: Registration and Market participation

To become a miner, one must first register a new miner actor on-chain. This is done through the storage power actor’s CreateStorageMiner method. The call creates a new miner actor instance and returns its address.

The next step is to place one or more storage market asks on the market. This is done off-chain as part of storage market functions. A miner may create a single ask for their entire storage, or partition their storage up in some way with multiple asks (at potentially different prices).

After that, they need to make deals with clients and begin filling up sectors with data. For more information on making deals, see the Storage Market. The miner will need to put up storage deal collateral for the deals they have entered into.

When they have a full sector, they should seal it. This is done by invoking the Sector Sealer.

Owner/Worker distinction

The miner actor has two distinct ‘controller’ addresses. One is the worker, which is the address which will be responsible for doing all of the work, submitting proofs, committing new sectors, and all other day to day activities. The owner address is the address that created the miner, paid the collateral, and has block rewards paid out to it. The reason for the distinction is to allow different parties to fulfil the different roles. One example would be for the owner to be a multisig wallet, or a cold storage key, and the worker key to be a ‘hot wallet’ key.

Changing Worker Addresses

Note that any change to worker keys after registration must be appropriately delayed in relation to randomness lookback for SEALing data (see this issue).

Step 1: Committing Sectors

When the miner has completed their first seal, they should post it on-chain using the Storage Miner Actor’s ProveCommitSector function. The miner will need to put up pledge collateral in proportion to the amount of storage they commit on chain. The miner gains power for this particular sector upon a successful ProveCommitSector.

You can read more about sectors here and how sectors relate to power here.

Step 2: Running Elections

Once the miner has power on the network, they can begin to submit ElectionPoSts. To do so, the miner must run a PoSt on a subset of their sectors in every round, using the output partial tickets to run leader election.

If the miner finds winning tickets, they are eligible to generate a new block and earn block rewards using the Block Producer.

Every successful PoSt submission will delay the next SurprisePoSt challenge the miner will receive.
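This delay rule corresponds to the no-challenge-period guard in `OnSurprisePoStChallenge` below; it can be sketched roughly as (epochs shown as plain int64s for illustration):

```go
package main

import "fmt"

// shouldIssueSurpriseChallenge mirrors the guard in OnSurprisePoStChallenge:
// a miner whose last successful PoSt is recent enough (within the
// no-challenge period) is not challenged again.
func shouldIssueSurpriseChallenge(currEpoch, lastSuccessfulPoSt, noChallengePeriod int64) bool {
	return lastSuccessfulPoSt < currEpoch-noChallengePeriod
}

func main() {
	fmt.Println(shouldIssueSurpriseChallenge(100, 95, 10)) // recent PoSt: no challenge
	fmt.Println(shouldIssueSurpriseChallenge(100, 80, 10)) // stale PoSt: challenge
}
```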

In this period, the miner can still:

  • commit new sectors
  • be challenged with a SurprisePoSt
  • declare faults
Faults

If a miner detects Storage Faults among their sectors (any sort of storage failure that would prevent them from crafting a PoSt), they should declare these faults with the DeclareTemporaryFaults() method of the Storage Miner Actor.

The miner will be unable to craft valid PoSts over faulty sectors, thereby reducing their chances of winning Election and SurprisePoSts. By declaring a fault, the miner will no longer be challenged on that sector, and will lose power accordingly. The miner specifies the duration of their TemporaryFault and pays a TemporaryFaultFee.
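The window during which a declared fault is effective follows the `EffectiveFaultBeginEpoch`/`EffectiveFaultEndEpoch` logic in the actor state below; a rough sketch (the delay value here is a placeholder for `StorageMining_DeclaredFaultEffectiveDelay()`):

```go
package main

import "fmt"

// effectiveFaultWindow computes when a declared temporary fault takes effect
// and when it ends: the fault begins only after a protocol-defined delay
// (preventing adaptive fault declarations that exempt challenged sectors),
// and lasts for the declared duration after that.
func effectiveFaultWindow(declaredEpoch, duration, effectiveDelay int64) (begin, end int64) {
	begin = declaredEpoch + effectiveDelay
	end = begin + duration
	return
}

func main() {
	begin, end := effectiveFaultWindow(500, 40, 20)
	fmt.Println(begin, end) // fault effective from epoch 520 through 560
}
```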

A miner will no longer be able to declare faults after being challenged for a SurprisePoSt.

Step 3: Deal/Sector Expiration

In order to stop mining, a miner must complete all of its storage deals. Once all deals in a sector have expired, the sector itself will expire thereby enabling the miner to remove the associated collateral from their account.

Future Work

There are many ideas for improving the storage miner; the following may be implemented in the future.

  • Sector Resealing: Miners should be able to ‘re-seal’ sectors, to allow them to take a set of sectors with mostly expired pieces, and combine the not-yet-expired pieces into a single (or multiple) sectors.
  • Sector Transfer: Miners should be able to re-delegate the responsibility of storing data to another miner. This is tricky for many reasons, and will not be implemented in the initial release of Filecoin, but could provide interesting capabilities down the road.

Storage Miner Actor

StorageMinerActorState implementation
package storage_miner

import (
	"math/big"

	addr "github.com/filecoin-project/go-address"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
	autil "github.com/filecoin-project/specs-actors/actors/util"
	cid "github.com/ipfs/go-cid"
	peer "github.com/libp2p/go-libp2p-core/peer"
)

// Balance of a StorageMinerActor should equal exactly the sum of PreCommit deposits
// that are not yet returned or burned.
type StorageMinerActorState struct {
	Sectors    SectorsAMT
	PoStState  MinerPoStState
	ProvingSet SectorNumberSetHAMT
	Info       MinerInfo
}

type MinerPoStState struct {
	// Epoch of the last successful PoSt, either election post or surprise post.
	LastSuccessfulPoSt abi.ChainEpoch

	// If >= 0 miner has been challenged and not yet responded successfully.
	// SurprisePoSt challenge state: The miner has not submitted timely ElectionPoSts,
	// and as a result, the system has fallen back to proving storage via SurprisePoSt.
	//  `epochUndefined` if not currently challenged.
	SurpriseChallengeEpoch abi.ChainEpoch

	// Not empty iff the miner is challenged.
	ChallengedSectors []abi.SectorNumber

	// Number of SurprisePoSt challenges that have failed since the last successful PoSt.
	// Indicates that the claimed storage power may not actually be proven. Recovery can proceed by
	// submitting a correct response to a subsequent SurprisePoSt challenge, up until
	// the limit of number of consecutive failures.
	NumConsecutiveFailures int64
}

func (mps *MinerPoStState) Is_Challenged() bool {
	return mps.SurpriseChallengeEpoch != epochUndefined
}

func (mps *MinerPoStState) Is_OK() bool {
	return !mps.Is_Challenged() && !mps.Is_DetectedFault()
}

func (mps *MinerPoStState) Is_DetectedFault() bool {
	return mps.NumConsecutiveFailures > 0
}

type SectorState int64

const (
	PreCommit SectorState = iota
	Active
	TemporaryFault
)

type SectorOnChainInfo struct {
	State                 SectorState
	Info                  SectorPreCommitInfo // Also contains Expiration field.
	PreCommitDeposit      abi.TokenAmount
	PreCommitEpoch        abi.ChainEpoch
	ActivationEpoch       abi.ChainEpoch // -1 if still in PreCommit state.
	DeclaredFaultEpoch    abi.ChainEpoch // -1 if not currently declared faulted.
	DeclaredFaultDuration abi.ChainEpoch // -1 if not currently declared faulted.
	DealWeight            big.Int        // -1 if not yet validated with StorageMarketActor.
}

type SectorPreCommitInfo struct {
	SectorNumber abi.SectorNumber
	SealedCID    abi.SealedSectorCID // CommR
	SealEpoch    abi.ChainEpoch
	DealIDs      abi.DealIDs
	Expiration   abi.ChainEpoch
}

type SectorProveCommitInfo struct {
	SectorNumber     abi.SectorNumber
	RegisteredProof  abi.RegisteredProof
	Proof            abi.SealProof
	InteractiveEpoch abi.ChainEpoch
	Expiration       abi.ChainEpoch
}

// TODO AMT
type SectorsAMT map[abi.SectorNumber]SectorOnChainInfo

// TODO HAMT
type SectorNumberSetHAMT map[abi.SectorNumber]bool

type MinerInfo struct {
	// Account that owns this miner.
	// - Income and returned collateral are paid to this address.
	// - This address is also allowed to change the worker address for the miner.
	Owner addr.Address // Must be an ID-address.

	// Worker account for this miner.
	// This will be the key that is used to sign blocks created by this miner, and
	// sign messages sent on behalf of this miner to commit sectors, submit PoSts, and
	// other day to day miner activities.
	Worker       addr.Address // Must be an ID-address.
	WorkerVRFKey addr.Address // Must be a SECP or BLS address

	// Libp2p identity that should be used when connecting to this miner.
	PeerId peer.ID

	// Amount of space in each sector committed to the network by this miner.
	SectorSize             abi.SectorSize
	SealPartitions         int64
	ElectionPoStPartitions int64
	SurprisePoStPartitions int64
}

func (st *StorageMinerActorState) CID() cid.Cid {
	panic("TODO")
}

func (st *StorageMinerActorState) _getSectorOnChainInfo(sectorNo abi.SectorNumber) (info SectorOnChainInfo, ok bool) {
	sectorInfo, found := st.Sectors[sectorNo]
	if !found {
		return SectorOnChainInfo{}, false
	}
	return sectorInfo, true
}

func (st *StorageMinerActorState) _getSectorDealIDsAssert(sectorNo abi.SectorNumber) abi.DealIDs {
	sectorInfo, found := st._getSectorOnChainInfo(sectorNo)
	Assert(found)
	return sectorInfo.Info.DealIDs
}

func SectorsAMT_Empty() SectorsAMT {
	IMPL_FINISH()
	panic("")
}

func SectorNumberSetHAMT_Empty() SectorNumberSetHAMT {
	IMPL_FINISH()
	panic("")
}

func (st *StorageMinerActorState) GetStorageWeightDescForSectorMaybe(sectorNumber abi.SectorNumber) (ret SectorStorageWeightDesc, ok bool) {
	sectorInfo, found := st.Sectors[sectorNumber]
	if !found {
		ret = autil.SectorStorageWeightDesc{}
		ok = false
		return
	}

	ret = autil.SectorStorageWeightDesc{
		SectorSize: st.Info.SectorSize,
		DealWeight: sectorInfo.DealWeight,
		Duration:   sectorInfo.Info.Expiration - sectorInfo.ActivationEpoch,
	}
	ok = true
	return
}

func (st *StorageMinerActorState) _getStorageWeightDescForSector(sectorNumber abi.SectorNumber) SectorStorageWeightDesc {
	ret, found := st.GetStorageWeightDescForSectorMaybe(sectorNumber)
	Assert(found)
	return ret
}

func (st *StorageMinerActorState) _getStorageWeightDescsForSectors(sectorNumbers []abi.SectorNumber) []SectorStorageWeightDesc {
	ret := []SectorStorageWeightDesc{}
	for _, sectorNumber := range sectorNumbers {
		ret = append(ret, st._getStorageWeightDescForSector(sectorNumber))
	}
	return ret
}

func (x *SectorOnChainInfo) Is_TemporaryFault() bool {
	ret := (x.State == TemporaryFault)
	if ret {
		Assert(x.DeclaredFaultEpoch != epochUndefined)
		Assert(x.DeclaredFaultDuration != epochUndefined)
	}
	return ret
}

// Must be significantly larger than DeclaredFaultEpoch, since otherwise it may be possible
// to declare faults adaptively in order to exempt challenged sectors.
func (x *SectorOnChainInfo) EffectiveFaultBeginEpoch() abi.ChainEpoch {
	Assert(x.Is_TemporaryFault())
	return x.DeclaredFaultEpoch + indices.StorageMining_DeclaredFaultEffectiveDelay()
}

func (x *SectorOnChainInfo) EffectiveFaultEndEpoch() abi.ChainEpoch {
	Assert(x.Is_TemporaryFault())
	return x.EffectiveFaultBeginEpoch() + x.DeclaredFaultDuration
}

func MinerInfo_New(
	ownerAddr addr.Address, workerAddr addr.Address, sectorSize abi.SectorSize, peerId peer.ID) MinerInfo {

	ret := &MinerInfo{
		Owner:      ownerAddr,
		Worker:     workerAddr,
		PeerId:     peerId,
		SectorSize: sectorSize,
	}

	TODO() // TODO: determine how to generate/validate VRF key and initialize other fields

	return *ret
}

func (st *StorageMinerActorState) VerifySurprisePoStMeetsTargetReq(candidate abi.PoStCandidate) bool {
	// TODO: Determine what should be the acceptance criterion for sector numbers proven in SurprisePoSt proofs.
	TODO()
	panic("")
}

func SectorNumberSetHAMT_Items(x SectorNumberSetHAMT) []abi.SectorNumber {
	IMPL_FINISH()
	panic("")
}
StorageMinerActorCode implementation
package storage_miner

import (
	"bytes"
	"math/big"

	addr "github.com/filecoin-project/go-address"
	abi "github.com/filecoin-project/specs-actors/actors/abi"
	builtin "github.com/filecoin-project/specs-actors/actors/builtin"
	crypto "github.com/filecoin-project/specs-actors/actors/crypto"
	vmr "github.com/filecoin-project/specs-actors/actors/runtime"
	indices "github.com/filecoin-project/specs-actors/actors/runtime/indices"
	serde "github.com/filecoin-project/specs-actors/actors/serde"
	autil "github.com/filecoin-project/specs-actors/actors/util"
	cid "github.com/ipfs/go-cid"
	peer "github.com/libp2p/go-libp2p-core/peer"
)

const epochUndefined = abi.ChainEpoch(-1)

type StorageMinerActor struct{}

func (a *StorageMinerActor) State(rt Runtime) (vmr.ActorStateHandle, StorageMinerActorState) {
	h := rt.AcquireState()
	stateCID := cid.Cid(h.Take())
	var state StorageMinerActorState
	if !rt.IpldGet(stateCID, &state) {
		rt.AbortAPI("state not found")
	}
	return h, state
}

//////////////////
// SurprisePoSt //
//////////////////

// Called by StoragePowerActor to notify StorageMiner of SurprisePoSt Challenge.
func (a *StorageMinerActor) OnSurprisePoStChallenge(rt Runtime) {
	rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)

	h, st := a.State(rt)

	// If already challenged, do not challenge again.
	// Failed PoSt will automatically reset the state to not-challenged.
	if st.PoStState.Is_Challenged() {
		Release(rt, h, st)
		return
	}

	// Do not challenge if the last successful PoSt was recent enough.
	noChallengePeriod := indices.StorageMining_PoStNoChallengePeriod()
	if st.PoStState.LastSuccessfulPoSt >= rt.CurrEpoch()-noChallengePeriod {
		Release(rt, h, st)
		return
	}

	var curRecBuf bytes.Buffer
	err := rt.CurrReceiver().MarshalCBOR(&curRecBuf)
	autil.Assert(err == nil)

	randomnessK := rt.GetRandomness(rt.CurrEpoch() - builtin.SPC_LOOKBACK_POST)
	challengedSectorsRandomness := crypto.DeriveRandWithMinerAddr(crypto.DomainSeparationTag_SurprisePoStSampleSectors, randomnessK, rt.CurrReceiver())

	challengedSectors := _surprisePoStSampleChallengedSectors(
		challengedSectorsRandomness,
		SectorNumberSetHAMT_Items(st.ProvingSet),
	)

	st.PoStState = MinerPoStState{
		LastSuccessfulPoSt:     st.PoStState.LastSuccessfulPoSt,
		SurpriseChallengeEpoch: rt.CurrEpoch(),
		ChallengedSectors:      challengedSectors,
		NumConsecutiveFailures: st.PoStState.NumConsecutiveFailures,
	}

	UpdateRelease(rt, h, st)

	// Request deferred Cron check for SurprisePoSt challenge expiry.
	provingPeriod := indices.StorageMining_SurprisePoStProvingPeriod()
	a._rtEnrollCronEvent(rt, rt.CurrEpoch()+provingPeriod, []abi.SectorNumber{})
}

// Invoked by miner's worker address to submit a response to a pending SurprisePoSt challenge.
func (a *StorageMinerActor) SubmitSurprisePoStResponse(rt Runtime, onChainInfo abi.OnChainSurprisePoStVerifyInfo) {
	h, st := a.State(rt)
	rt.ValidateImmediateCallerIs(st.Info.Worker)

	if !st.PoStState.Is_Challenged() {
		rt.AbortStateMsg("Not currently challenged")
	}

	Release(rt, h, st)

	a._rtVerifySurprisePoStOrAbort(rt, &onChainInfo)

	newPostSt := MinerPoStState{
		LastSuccessfulPoSt:     rt.CurrEpoch(),
		SurpriseChallengeEpoch: epochUndefined,
		ChallengedSectors:      nil,
		NumConsecutiveFailures: 0,
	}
	a._rtUpdatePoStState(rt, newPostSt)

	rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.Method_StoragePowerActor_OnMinerSurprisePoStSuccess,
		nil,
		abi.TokenAmount(0),
	)
}

// Called by StoragePowerActor.
func (a *StorageMinerActor) OnDeleteMiner(rt Runtime) {
	rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)
	minerAddr := rt.CurrReceiver()
	rt.DeleteActor(minerAddr)
}

//////////////////
// ElectionPoSt //
//////////////////

// Called by the VM interpreter once an ElectionPoSt has been verified.
func (a *StorageMinerActor) OnVerifiedElectionPoSt(rt Runtime) {
	rt.ValidateImmediateCallerIs(builtin.SystemActorAddr)

	// The receiver must be the miner who produced the block for which this message is created.
	Assert(rt.ToplevelBlockWinner() == rt.CurrReceiver())

	h, st := a.State(rt)
	updateSuccessEpoch := st.PoStState.Is_OK()
	Release(rt, h, st)

	// Advance the timestamp of the most recent PoSt success, provided the miner is currently
	// in normal state. (Cannot do this if SurprisePoSt mechanism already underway.)
	if updateSuccessEpoch {
		newPostSt := MinerPoStState{
			LastSuccessfulPoSt:     rt.CurrEpoch(),
			SurpriseChallengeEpoch: st.PoStState.SurpriseChallengeEpoch, // expected to be undef because PoStState is OK
			ChallengedSectors:      st.PoStState.ChallengedSectors,      // expected to be empty
			NumConsecutiveFailures: st.PoStState.NumConsecutiveFailures, // expected to be 0
		}
		a._rtUpdatePoStState(rt, newPostSt)
	}
}

///////////////////////
// Sector Commitment //
///////////////////////

// Deals must be posted on chain via sma.PublishStorageDeals before PreCommitSector.
// Optimization: PreCommitSector could contain a list of deals that are not published yet.
func (a *StorageMinerActor) PreCommitSector(rt Runtime, info SectorPreCommitInfo) {
	h, st := a.State(rt)
	rt.ValidateImmediateCallerIs(st.Info.Worker)

	if _, found := st.Sectors[info.SectorNumber]; found {
		rt.AbortStateMsg("Sector number already exists in table")
	}

	Release(rt, h, st)

	cidx := rt.CurrIndices()
	depositReq := cidx.StorageMining_PreCommitDeposit(st.Info.SectorSize, info.Expiration)
	RT_ConfirmFundsReceiptOrAbort_RefundRemainder(rt, depositReq)

	// Verify deals with StorageMarketActor; abort if this fails.
	// (Note: committed-capacity sectors contain no deals, so in that case verification will pass trivially.)
	rt.Send(
		builtin.StorageMarketActorAddr,
		builtin.Method_StorageMarketActor_OnMinerSectorPreCommit_VerifyDealsOrAbort,
		serde.MustSerializeParams(
			info.DealIDs,
			info,
		),
		abi.TokenAmount(0),
	)

	h, st = a.State(rt)

	newSectorInfo := &SectorOnChainInfo{
		State:            PreCommit,
		Info:             info,
		PreCommitDeposit: depositReq,
		PreCommitEpoch:   rt.CurrEpoch(),
		ActivationEpoch:  epochUndefined,
		DealWeight:       *big.NewInt(-1),
	}
	st.Sectors[info.SectorNumber] = *newSectorInfo

	UpdateRelease(rt, h, st)

	// Request deferred Cron check for PreCommit expiry check.
	expiryBound := rt.CurrEpoch() + builtin.MAX_PROVE_COMMIT_SECTOR_EPOCH + 1
	a._rtEnrollCronEvent(rt, expiryBound, []abi.SectorNumber{info.SectorNumber})

	if info.Expiration <= rt.CurrEpoch() {
		rt.AbortArgMsg("PreCommit sector must have positive lifetime")
	}

	a._rtEnrollCronEvent(rt, info.Expiration, []abi.SectorNumber{info.SectorNumber})
}

func (a *StorageMinerActor) ProveCommitSector(rt Runtime, info SectorProveCommitInfo) {
	h, st := a.State(rt)
	workerAddr := st.Info.Worker
	rt.ValidateImmediateCallerIs(workerAddr)

	preCommitSector, found := st.Sectors[info.SectorNumber]
	if !found || preCommitSector.State != PreCommit {
		rt.AbortArgMsg("Sector not valid or not in PreCommit state")
	}

	if rt.CurrEpoch() > preCommitSector.PreCommitEpoch+builtin.MAX_PROVE_COMMIT_SECTOR_EPOCH || rt.CurrEpoch() < preCommitSector.PreCommitEpoch+builtin.MIN_PROVE_COMMIT_SECTOR_EPOCH {
		rt.AbortStateMsg("Invalid ProveCommitSector epoch")
	}

	TODO()
	// TODO: How are SealEpoch, InteractiveEpoch determined (and intended to be used)?
	// Presumably they cannot be derived from the SectorProveCommitInfo provided by an untrusted party.

	a._rtVerifySealOrAbort(rt, &abi.OnChainSealVerifyInfo{
		SealedCID:        preCommitSector.Info.SealedCID,
		SealEpoch:        preCommitSector.Info.SealEpoch,
		InteractiveEpoch: info.InteractiveEpoch,
		RegisteredProof:  info.RegisteredProof,
		Proof:            info.Proof,
		DealIDs:          preCommitSector.Info.DealIDs,
		SectorNumber:     preCommitSector.Info.SectorNumber,
	})

	UpdateRelease(rt, h, st)

	// Check (and activate) storage deals associated to sector. Abort if checks failed.
	rt.Send(
		builtin.StorageMarketActorAddr,
		builtin.Method_StorageMarketActor_OnMinerSectorProveCommit_VerifyDealsOrAbort,
		serde.MustSerializeParams(
			preCommitSector.Info.DealIDs,
			info,
		),
		abi.TokenAmount(0),
	)

	res := rt.SendQuery(
		builtin.StorageMarketActorAddr,
		builtin.Method_StorageMarketActor_GetWeightForDealSet,
		serde.MustSerializeParams(
			preCommitSector.Info.DealIDs,
		),
	)
	dealWeight := new(big.Int) // allocate before deserializing to avoid a nil-pointer target
	err := serde.Deserialize(res, dealWeight)
	Assert(err == nil)

	h, st = a.State(rt)

	st.Sectors[info.SectorNumber] = SectorOnChainInfo{
		State:           Active,
		Info:            preCommitSector.Info,
		PreCommitEpoch:  preCommitSector.PreCommitEpoch,
		ActivationEpoch: rt.CurrEpoch(),
		DealWeight:      *dealWeight,
	}

	st.ProvingSet[info.SectorNumber] = true

	UpdateRelease(rt, h, st)

	// Request deferred Cron check for sector expiry.
	a._rtEnrollCronEvent(
		rt, preCommitSector.Info.Expiration, []abi.SectorNumber{info.SectorNumber})

	// Notify SPA to update power associated to newly activated sector.
	storageWeightDesc := a._rtGetStorageWeightDescForSector(rt, info.SectorNumber)
	rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.Method_StoragePowerActor_OnSectorProveCommit,
		serde.MustSerializeParams(
			storageWeightDesc,
		),
		abi.TokenAmount(0),
	)

	// Return PreCommit deposit to worker upon successful ProveCommit.
	rt.SendFunds(workerAddr, preCommitSector.PreCommitDeposit)
}

/////////////////////////
// Sector Modification //
/////////////////////////

func (a *StorageMinerActor) ExtendSectorExpiration(rt Runtime, sectorNumber abi.SectorNumber, newExpiration abi.ChainEpoch) {
	storageWeightDescPrev := a._rtGetStorageWeightDescForSector(rt, sectorNumber)

	h, st := a.State(rt)
	rt.ValidateImmediateCallerIs(st.Info.Worker)

	sectorInfo, found := st.Sectors[sectorNumber]
	if !found {
		rt.AbortStateMsg("Sector not found")
	}

	extensionLength := newExpiration - sectorInfo.Info.Expiration
	if extensionLength < 0 {
		rt.AbortStateMsg("Cannot reduce sector expiration")
	}

	sectorInfo.Info.Expiration = newExpiration
	st.Sectors[sectorNumber] = sectorInfo
	UpdateRelease(rt, h, st)

	storageWeightDescNew := storageWeightDescPrev
	storageWeightDescNew.Duration = storageWeightDescPrev.Duration + extensionLength

	rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.Method_StoragePowerActor_OnSectorModifyWeightDesc,
		serde.MustSerializeParams(
			storageWeightDescPrev,
			storageWeightDescNew,
		),
		abi.TokenAmount(0),
	)
}

func (a *StorageMinerActor) TerminateSector(rt Runtime, sectorNumber abi.SectorNumber) {
	h, st := a.State(rt)
	rt.ValidateImmediateCallerIs(st.Info.Worker)
	Release(rt, h, st)

	a._rtTerminateSector(rt, sectorNumber, autil.UserTermination)
}

////////////
// Faults //
////////////

func (a *StorageMinerActor) DeclareTemporaryFaults(rt Runtime, sectorNumbers []abi.SectorNumber, duration abi.ChainEpoch) {
	if duration <= abi.ChainEpoch(0) {
		rt.AbortArgMsg("Temporary fault duration must be positive")
	}

	storageWeightDescs := a._rtGetStorageWeightDescsForSectors(rt, sectorNumbers)
	cidx := rt.CurrIndices()
	requiredFee := cidx.StorageMining_TemporaryFaultFee(storageWeightDescs, duration)

	RT_ConfirmFundsReceiptOrAbort_RefundRemainder(rt, requiredFee)
	rt.SendFunds(builtin.BurntFundsActorAddr, requiredFee)

	effectiveBeginEpoch := rt.CurrEpoch() + indices.StorageMining_DeclaredFaultEffectiveDelay()
	effectiveEndEpoch := effectiveBeginEpoch + duration

	h, st := a.State(rt)
	rt.ValidateImmediateCallerIs(st.Info.Worker)

	for _, sectorNumber := range sectorNumbers {
		sectorInfo, found := st.Sectors[sectorNumber]
		if !found || sectorInfo.State != Active {
			continue
		}

		sectorInfo.State = TemporaryFault
		sectorInfo.DeclaredFaultEpoch = rt.CurrEpoch()
		sectorInfo.DeclaredFaultDuration = duration
		st.Sectors[sectorNumber] = sectorInfo
	}

	UpdateRelease(rt, h, st)

	// Request deferred Cron invocation to update temporary fault state.
	a._rtEnrollCronEvent(rt, effectiveBeginEpoch, sectorNumbers)
	a._rtEnrollCronEvent(rt, effectiveEndEpoch, sectorNumbers)
}

//////////
// Cron //
//////////

func (a *StorageMinerActor) OnDeferredCronEvent(rt Runtime, sectorNumbers []abi.SectorNumber) {
	rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)

	for _, sectorNumber := range sectorNumbers {
		a._rtCheckTemporaryFaultEvents(rt, sectorNumber)
		a._rtCheckSectorExpiry(rt, sectorNumber)
	}

	a._rtCheckSurprisePoStExpiry(rt)
}

/////////////////
// Constructor //
/////////////////

func (a *StorageMinerActor) Constructor(
	rt Runtime, ownerAddr addr.Address, workerAddr addr.Address, sectorSize abi.SectorSize, peerId peer.ID) {

	rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)
	h := rt.AcquireState()

	initPostState := MinerPoStState{
		LastSuccessfulPoSt:     epochUndefined,
		SurpriseChallengeEpoch: epochUndefined,
		ChallengedSectors:      nil,
		NumConsecutiveFailures: 0,
	}

	st := &StorageMinerActorState{
		Sectors:    SectorsAMT_Empty(),
		PoStState:  initPostState,
		ProvingSet: SectorNumberSetHAMT_Empty(),
		Info:       MinerInfo_New(ownerAddr, workerAddr, sectorSize, peerId),
	}

	UpdateRelease(rt, h, *st)
}

////////////////////////////////////////////////////////////////////////////////
// Method utility functions
////////////////////////////////////////////////////////////////////////////////

func (a *StorageMinerActor) _rtCheckTemporaryFaultEvents(rt Runtime, sectorNumber abi.SectorNumber) {
	h, st := a.State(rt)
	checkSector, found := st.Sectors[sectorNumber]
	Release(rt, h, st)

	if !found {
		return
	}

	storageWeightDesc := a._rtGetStorageWeightDescForSector(rt, sectorNumber)

	if checkSector.State == Active && rt.CurrEpoch() == checkSector.EffectiveFaultBeginEpoch() {
		checkSector.State = TemporaryFault

		rt.Send(
			builtin.StoragePowerActorAddr,
			builtin.Method_StoragePowerActor_OnSectorTemporaryFaultEffectiveBegin,
			serde.MustSerializeParams(
				storageWeightDesc,
			),
			abi.TokenAmount(0),
		)

		delete(st.ProvingSet, sectorNumber)
	}

	if checkSector.Is_TemporaryFault() && rt.CurrEpoch() == checkSector.EffectiveFaultEndEpoch() {
		checkSector.State = Active
		checkSector.DeclaredFaultEpoch = epochUndefined
		checkSector.DeclaredFaultDuration = epochUndefined

		rt.Send(
			builtin.StoragePowerActorAddr,
			builtin.Method_StoragePowerActor_OnSectorTemporaryFaultEffectiveEnd,
			serde.MustSerializeParams(
				storageWeightDesc,
			),
			abi.TokenAmount(0),
		)

		st.ProvingSet[sectorNumber] = true
	}

	h, st = a.State(rt)
	st.Sectors[sectorNumber] = checkSector
	UpdateRelease(rt, h, st)
}

func (a *StorageMinerActor) _rtCheckSectorExpiry(rt Runtime, sectorNumber abi.SectorNumber) {
	h, st := a.State(rt)
	checkSector, found := st.Sectors[sectorNumber]
	Release(rt, h, st)

	if !found {
		return
	}

	if checkSector.State == PreCommit {
		if rt.CurrEpoch()-checkSector.PreCommitEpoch > builtin.MAX_PROVE_COMMIT_SECTOR_EPOCH {
			a._rtDeleteSectorEntry(rt, sectorNumber)
			rt.SendFunds(builtin.BurntFundsActorAddr, checkSector.PreCommitDeposit)
		}
		return
	}

	// Note: the following test may be false, if sector expiration has been extended by the worker
	// in the interim after the Cron request was enrolled.
	if rt.CurrEpoch() >= checkSector.Info.Expiration {
		a._rtTerminateSector(rt, sectorNumber, autil.NormalExpiration)
	}
}

func (a *StorageMinerActor) _rtTerminateSector(rt Runtime, sectorNumber abi.SectorNumber, terminationType SectorTerminationType) {
	h, st := a.State(rt)
	checkSector, found := st.Sectors[sectorNumber]
	Assert(found)
	Release(rt, h, st)

	storageWeightDesc := a._rtGetStorageWeightDescForSector(rt, sectorNumber)

	if checkSector.State == TemporaryFault {
		// To avoid boundary-case errors in power accounting, make sure we explicitly end
		// the temporary fault state first, before terminating the sector.
		rt.Send(
			builtin.StoragePowerActorAddr,
			builtin.Method_StoragePowerActor_OnSectorTemporaryFaultEffectiveEnd,
			serde.MustSerializeParams(
				storageWeightDesc,
			),
			abi.TokenAmount(0),
		)
	}

	rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.Method_StoragePowerActor_OnSectorTerminate,
		serde.MustSerializeParams(
			storageWeightDesc,
			terminationType,
		),
		abi.TokenAmount(0),
	)

	a._rtDeleteSectorEntry(rt, sectorNumber)
	delete(st.ProvingSet, sectorNumber)
}

func (a *StorageMinerActor) _rtCheckSurprisePoStExpiry(rt Runtime) {
	rt.ValidateImmediateCallerIs(builtin.StoragePowerActorAddr)

	h, st := a.State(rt)

	if !st.PoStState.Is_Challenged() {
		// Already exited challenged state successfully prior to expiry.
		Release(rt, h, st)
		return
	}

	provingPeriod := indices.StorageMining_SurprisePoStProvingPeriod()
	if rt.CurrEpoch() < st.PoStState.SurpriseChallengeEpoch+provingPeriod {
		// Challenge not yet expired.
		Release(rt, h, st)
		return
	}

	numConsecutiveFailures := st.PoStState.NumConsecutiveFailures + 1

	Release(rt, h, st)

	if numConsecutiveFailures > indices.StoragePower_SurprisePoStMaxConsecutiveFailures() {
		// Terminate all sectors, notify the power and market actors to terminate
		// associated storage deals, and reset the miner's PoSt state to OK.
		// Note: state must be re-acquired here, since it was released above.
		h, st = a.State(rt)
		terminatedSectors := []abi.SectorNumber{}
		for sectorNumber := range st.Sectors {
			terminatedSectors = append(terminatedSectors, sectorNumber)
		}
		Release(rt, h, st)
		a._rtNotifyMarketForTerminatedSectors(rt, terminatedSectors)
	} else {
		// Increment count of consecutive failures, and continue.
		h, st = a.State(rt)

		st.PoStState = MinerPoStState{
			LastSuccessfulPoSt:     st.PoStState.LastSuccessfulPoSt,
			SurpriseChallengeEpoch: epochUndefined,
			ChallengedSectors:      nil,
			NumConsecutiveFailures: numConsecutiveFailures,
		}
		UpdateRelease(rt, h, st)
	}

	rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.Method_StoragePowerActor_OnMinerSurprisePoStFailure,
		serde.MustSerializeParams(
			numConsecutiveFailures,
		),
		abi.TokenAmount(0))
}

func (a *StorageMinerActor) _rtEnrollCronEvent(
	rt Runtime, eventEpoch abi.ChainEpoch, sectorNumbers []abi.SectorNumber) {

	rt.Send(
		builtin.StoragePowerActorAddr,
		builtin.Method_StoragePowerActor_OnMinerEnrollCronEvent,
		serde.MustSerializeParams(
			eventEpoch,
			sectorNumbers,
		),
		abi.TokenAmount(0),
	)
}

func (a *StorageMinerActor) _rtDeleteSectorEntry(rt Runtime, sectorNumber abi.SectorNumber) {
	h, st := a.State(rt)
	delete(st.Sectors, sectorNumber)
	UpdateRelease(rt, h, st)
}

func (a *StorageMinerActor) _rtUpdatePoStState(rt Runtime, state MinerPoStState) {
	h, st := a.State(rt)
	st.PoStState = state
	UpdateRelease(rt, h, st)
}

func (a *StorageMinerActor) _rtGetStorageWeightDescForSector(
	rt Runtime, sectorNumber abi.SectorNumber) autil.SectorStorageWeightDesc {

	h, st := a.State(rt)
	ret := st._getStorageWeightDescForSector(sectorNumber)
	Release(rt, h, st)
	return ret
}

func (a *StorageMinerActor) _rtGetStorageWeightDescsForSectors(
	rt Runtime, sectorNumbers []abi.SectorNumber) []autil.SectorStorageWeightDesc {

	h, st := a.State(rt)
	ret := st._getStorageWeightDescsForSectors(sectorNumbers)
	Release(rt, h, st)
	return ret
}

func (a *StorageMinerActor) _rtNotifyMarketForTerminatedSectors(rt Runtime, sectorNumbers []abi.SectorNumber) {
	h, st := a.State(rt)
	dealIDItems := []abi.DealID{}
	for _, sectorNo := range sectorNumbers {
		dealIDItems = append(dealIDItems, st._getSectorDealIDsAssert(sectorNo).Items...)
	}
	dealIDs := &abi.DealIDs{Items: dealIDItems}

	Release(rt, h, st)

	rt.Send(
		builtin.StorageMarketActorAddr,
		builtin.Method_StorageMarketActor_OnMinerSectorsTerminate,
		serde.MustSerializeParams(
			dealIDs,
		),
		abi.TokenAmount(0),
	)
}

func (a *StorageMinerActor) _rtVerifySurprisePoStOrAbort(rt Runtime, onChainInfo *abi.OnChainSurprisePoStVerifyInfo) {
	h, st := a.State(rt)
	Assert(st.PoStState.Is_Challenged())
	sectorSize := st.Info.SectorSize
	challengeEpoch := st.PoStState.SurpriseChallengeEpoch
	challengedSectors := st.PoStState.ChallengedSectors

	// verify no duplicate tickets
	challengeIndices := make(map[int64]bool)
	for _, tix := range onChainInfo.Candidates {
		if _, ok := challengeIndices[tix.ChallengeIndex]; ok {
			rt.AbortStateMsg("Invalid Surprise PoSt. Duplicate ticket included.")
		}
		challengeIndices[tix.ChallengeIndex] = true
	}

	TODO(challengedSectors)
	// TODO: Determine what should be the acceptance criterion for sector numbers
	// proven in SurprisePoSt proofs.
	//
	// Previous note:
	// Verify the partialTicket values
	// if !a._rtVerifySurprisePoStMeetsTargetReq(rt) {
	// 	rt.AbortStateMsg("Invalid Surprise PoSt. Tickets do not meet target.")
	// }

	randomnessK := rt.GetRandomness(challengeEpoch - builtin.SPC_LOOKBACK_POST)
	// Regenerate the randomness used; the PoSt verification below will fail
	// if the same randomness was not used to generate the proof.
	postRandomness := crypto.DeriveRandWithMinerAddr(crypto.DomainSeparationTag_SurprisePoStChallengeSeed, randomnessK, rt.CurrReceiver())

	UpdateRelease(rt, h, st)

	// Get public inputs

	pvInfo := abi.PoStVerifyInfo{
		Candidates: onChainInfo.Candidates,
		Proofs:     onChainInfo.Proofs,
		Randomness: abi.PoStRandomness(postRandomness),
		// EligibleSectors_: FIXME: verification needs these.
	}

	// Verify the PoSt Proof
	isVerified := rt.Syscalls().VerifyPoSt(sectorSize, pvInfo)

	if !isVerified {
		rt.AbortStateMsg("Surprise PoSt failed to verify")
	}
}

func (a *StorageMinerActor) _rtVerifySealOrAbort(rt Runtime, onChainInfo *abi.OnChainSealVerifyInfo) {
	h, st := a.State(rt)
	info := st.Info
	sectorSize := info.SectorSize
	Release(rt, h, st)

	var pieceInfos abi.PieceInfos
	err := serde.Deserialize(rt.SendQuery(
		builtin.StorageMarketActorAddr,
		builtin.Method_StorageMarketActor_GetPieceInfosForDealIDs,
		serde.MustSerializeParams(
			sectorSize,
			onChainInfo.DealIDs,
		),
	), &pieceInfos)
	Assert(err == nil)

	// Unless we enforce a minimum padding amount, this totalPieceSize calculation can be removed.
	// Leaving it in for now until that decision is finalized.
	var totalPieceSize int64
	for _, pieceInfo := range pieceInfos.Items {
		pieceSize := pieceInfo.Size
		totalPieceSize += pieceSize
	}

	unsealedCID, err := rt.Syscalls().ComputeUnsealedSectorCID(sectorSize, pieceInfos.Items)
	if err != nil {
		rt.AbortStateMsg("invalid sector piece infos")
	}

	minerActorID, err := addr.IDFromAddress(rt.CurrReceiver())
	if err != nil {
		rt.AbortStateMsg("receiver must be ID address")
	}

	IMPL_TODO() // Use randomness APIs
	var svInfoRandomness abi.Randomness
	var svInfoInteractiveRandomness abi.Randomness

	svInfo := abi.SealVerifyInfo{
		SectorID: abi.SectorID{
			Miner:  abi.ActorID(minerActorID),
			Number: onChainInfo.SectorNumber,
		},
		OnChain:               *onChainInfo,
		Randomness:            abi.SealRandomness(svInfoRandomness),
		InteractiveRandomness: abi.InteractiveSealRandomness(svInfoInteractiveRandomness),
		UnsealedCID:           unsealedCID,
	}

	isVerified := rt.Syscalls().VerifySeal(sectorSize, svInfo)

	if !isVerified {
		rt.AbortStateMsg("Sector seal failed to verify")
	}
}

func getSectorNums(m map[abi.SectorNumber]SectorOnChainInfo) []abi.SectorNumber {
	var l []abi.SectorNumber
	for i := range m {
		l = append(l, i)
	}
	return l
}

func _surprisePoStSampleChallengedSectors(
	sampleRandomness abi.Randomness, provingSet []abi.SectorNumber) []abi.SectorNumber {

	IMPL_TODO()
	panic("")
}

Sector

The Sector is a fundamental “storage container” abstraction used in Filecoin Storage Mining. It is the basic unit of storage, and serves to make storage conform to a set of expectations.

New sectors are empty upon creation. As the miner receives client data, it fills or “packs” the piece(s) into an unsealed sector.

Once a sector is full, the unsealed sector is combined by a proving tree into a single root UnsealedSectorCID. The sealing process then encodes the unsealed sector into a sealed sector via a slow, sequential Proof-of-Replication encoding, with the root SealedSectorCID.

This diagram shows the composition of an unsealed sector and a sealed sector.

Unsealed Sectors and Sealed Sectors
import abi "github.com/filecoin-project/specs-actors/actors/abi"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import smarkact "github.com/filecoin-project/specs-actors/actors/builtin/storage_market"

type Bytes32 Bytes
type Commitment Bytes32  // TODO

type FaultSet CompactSectorSet
type StorageFaultType int

// SectorInDetail describes all the bits of information associated
// with each sector.
// - ID   - a unique identifier assigned once the Sector is registered on chain
// - Size - the size of the sector; there is a set of allowable sizes
//
// NOTE: do not use this struct. It is for illustrative purposes only.
type SectorInDetail struct {
    ID    abi.SectorID
    Size  abi.SectorSize

    Unsealed struct {
        CID     abi.UnsealedSectorCID
        Deals   [smarkact.StorageDeal]
        Pieces  [piece.Piece]
        // Pieces Tree<Piece> // some tree for proofs
        Bytes
    }

    Sealed struct {
        CID              abi.SealedSectorCID
        Bytes
        RegisteredProof  abi.RegisteredProof
    }
}

// SectorInfo is an object that gathers all the information miners know about their
// sectors. This is meant to be used for a local index.
type SectorInfo struct {
    ID                  abi.SectorID
    UnsealedInfo        UnsealedSectorInfo
    SealedInfo          SealedSectorInfo
    SealVerifyInfo      abi.SealVerifyInfo
    PersistentProofAux
}

// UnsealedSectorInfo is an object that tracks the relevant data to keep in a sector
type UnsealedSectorInfo struct {
    UnsealedCID  abi.UnsealedSectorCID  // CommD
    Size         abi.SectorSize
    PieceCount   UVarint  // number of pieces in this sector (can get it from len(Pieces) too)
    Pieces       [piece.PieceInfo]  // won't be externalized easily -- it's big
    // Deals       [smarkact.StorageDeal]
}

// SealedSectorInfo keeps around information about a sector that has been sealed.
type SealedSectorInfo struct {
    SealedCID  abi.SealedSectorCID
    Size       abi.SectorSize
    SealArgs   SealArguments
}

TODO:

  • describe sizing ranges of sectors
  • describe “storage/shipping container” analogy

Sector Set

import abi "github.com/filecoin-project/specs-actors/actors/abi"

// sector sets
type SectorSet [abi.SectorID]
type UnsealedSectorSet SectorSet
type SealedSectorSet SectorSet

// compact sector sets
type Bitfield Bytes
type RLEpBitfield Bitfield
type CompactSectorSet RLEpBitfield

Sector PoSting

//import abi "github.com/filecoin-project/specs-actors/actors/abi"

// type PoStWitness struct {
//     Candidates [abi.PoStCandidate]
// }

Sector Sealing

import abi "github.com/filecoin-project/specs-actors/actors/abi"
import file "github.com/filecoin-project/specs/systems/filecoin_files/file"

type Path struct {}  // TODO

// SealSeed is unique to each Sector
// SealSeed is:
//    SealSeedHash(MinerID, SectorNumber, SealRandomness, abi.UnsealedSectorCID)
type SealSeed Bytes

// SealCommitment is the information kept in the state tree about a sector.
// SealCommitment is a subset of OnChainSealVerifyInfo.
type SealCommitment struct {
    SealedCID   abi.SealedSectorCID  // CommR
    DealIDs     abi.DealIDs
    Expiration  abi.ChainEpoch
}

// PersistentProofAux is meta data required to generate certain proofs
// for a sector, for example PoSt.
// These should be stored and indexed somewhere by CommR.
type PersistentProofAux struct {
    CommC              Commitment
    CommQ              Commitment
    CommRLast          Commitment

    // TODO: This may be a partially-cached tree.
    // this may be empty
    CommRLastTreePath  file.Path
}

type ProofAuxTmp struct {
    RegisteredProof  abi.RegisteredProof  // FIXME: Make sure this is supplied.
    PersistentAux    PersistentProofAux  // TODO: Move this to sealer.SealOutputs.

    SectorID         abi.SectorID
    CommD            Commitment
    CommR            abi.SealedSectorCID
    CommDTreePath    file.Path
    CommCTreePath    file.Path
    CommQTreePath    file.Path

    Seed             SealSeed
    KeyLayers        [Bytes]
}

type SealArguments struct {
    RegisteredProof  abi.RegisteredProof
    OutputArtifacts  SealOutputArtifacts
}

// TODO: move into proofs lib
type FilecoinSNARKProof