Filecoin Specification
protocol version: v0.1.0
spec doc version: v1.1-e0d49510
2019-11-12_11:36:04Z

Introduction

Warning: This draft of the Filecoin protocol specification is a work in progress. It is intended to establish the rough overall structure of the document, enabling experts to fill in different sections in parallel. However, within each section, content may be out-of-order, incorrect, and/or incomplete. The reader is advised to refer to the official Filecoin spec document for specification and implementation questions.

Filecoin is a distributed storage network based on a blockchain mechanism. Filecoin miners can elect to provide storage capacity for the network, and thereby earn units of the Filecoin cryptocurrency (FIL) by periodically producing cryptographic proofs that certify that they are providing the capacity specified. In addition, Filecoin enables parties to exchange FIL currency through transactions recorded in a shared ledger on the Filecoin blockchain. Rather than using Nakamoto-style proof of work to maintain consensus on the chain, however, Filecoin uses proof of storage itself: a miner’s power in the consensus protocol is proportional to the amount of storage it provides.

The Filecoin blockchain not only maintains the ledger for FIL transactions and accounts, but also implements the Filecoin VM, a replicated state machine which executes a variety of cryptographic contracts and market mechanisms among participants on the network. These contracts include storage deals, in which clients pay FIL currency to miners in exchange for storing the specific file data that the clients request. Via the distributed implementation of the Filecoin VM, storage deals and other contract mechanisms recorded on the chain continue to be processed over time, without requiring further interaction from the original parties (such as the clients who requested the data storage).

Architecture Diagrams

Filecoin Systems

Status Legend:

  • πŸ›‘ Bare - Very incomplete at this time.
    • Implementors: This is far from ready for you.
  • ⚠️ Rough – work in progress, heavy changes coming, as we put in place key functionality.
    • Implementors: This will be ready for you soon.
  • πŸ” Refining - Key functionality is there, some small things expected to change. Some big things may change.
    • Implementors: Almost ready for you. You can start building these parts, but beware there may be changes still.
  • βœ… Stable - Mostly complete, minor things expected to change, no major changes expected.
    • Implementors: Ready for you. You can build these parts.


Overview Diagram

TODO:

  • cleanup / reorganize
    • this diagram is accurate and helps a lot with navigation, but it’s still a bit confusing
    • the arrows and lines make it a bit hard to follow. We should have a much cleaner version (maybe based on C4)
  • reflect addition of Token system
    • move data_transfers into Token
Protocol Overview Diagram

Protocol Flow Diagram – deals off chain

Protocol Sequence Diagram - Deals off Chain

Protocol Flow Diagram – deals on chain

Protocol Sequence Diagram - Deals on Chain

Parameter Calculation Dependency Graph

This is a diagram of the model for parameter calculation. It is made with Orient, our tool for modeling and solving constraints.

Parameter Calculation Dependency Graph

Key Concepts

For clarity, we refer to the following types of entities when describing implementations of the Filecoin protocol:

  • Data structures are collections of semantically-tagged data members (e.g., structs, interfaces, or enums).

  • Functions are computational procedures that do not depend on external state (i.e., mathematical functions, or programming language functions that do not refer to global variables).

  • Components are sets of functionality that are intended to be represented as single software units in the implementation structure. Depending on the choice of language and the particular component, this might correspond to a single software module, a thread or process running some main loop, a disk-backed database, or a variety of other design choices. For example, ChainSync is a component: it could be implemented as a process or thread running a single specified main loop, which waits for network messages and responds accordingly by recording and/or forwarding block data.

  • APIs are messages that can be sent to components. A client’s view of a given sub-protocol, such as a request to a miner node’s storage provider component to store files in the storage market, may require the execution of a series of APIs.

  • Nodes are complete software and hardware systems that interact with the protocol. A node might be constantly running several of the above components, participating in several subsystems, and exposing APIs locally and/or over the network, depending on the node configuration. The term full node refers to a system that runs all of the above components, and supports all of the APIs detailed in the spec.

  • Subsystems are conceptual divisions of the entire Filecoin protocol, either in terms of complete protocols (such as the Storage Market or Retrieval Market), or in terms of functionality (such as the VM - Virtual Machine). They do not necessarily correspond to any particular node or software component.

  • Actors are virtual entities embodied in the state of the Filecoin VM. Protocol actors are analogous to participants in smart contracts; an actor carries a FIL currency balance and can interact with other actors via the operations of the VM, but does not necessarily correspond to any particular node or software component.

Filecoin VM

The majority of Filecoin’s user-facing functionality (payments, the storage market, the power table, etc.) is managed through the Filecoin Virtual Machine (Filecoin VM). The network generates a series of blocks, and agrees on which ‘chain’ of blocks is the correct one. Each block contains a series of state transitions called messages, and a checkpoint of the current global state after the application of those messages.

The global state here consists of a set of actors, each with their own private state.

An actor is the Filecoin equivalent of Ethereum’s smart contracts: it is essentially an ‘object’ in the Filecoin network with state and a set of methods that can be used to interact with it. Every actor has a Filecoin balance attributed to it, a state pointer, a code CID which tells the system what type of actor it is, and a nonce which tracks the number of messages sent by this actor. (TODO: the nonce is really only needed for external user interface actors, AKA account actors. Maybe we should find a way to clean that up?)

There are two routes to calling a method on an actor. First, to call a method as an external participant of the system (i.e., a normal user with Filecoin) you must send a signed message to the network, and pay a fee to the miner that includes your message. The signature on the message must match the key associated with an account holding sufficient Filecoin to pay for the message’s execution. The fee here is equivalent to transaction fees in Bitcoin and Ethereum, where it is proportional to the work that is done to process the message (Bitcoin prices messages per byte; Ethereum uses the concept of ‘gas’; Filecoin also uses ‘gas’).

Second, an actor may call a method on another actor during the invocation of one of its methods. However, the only time this may happen is as a result of some actor being invoked by an external user’s message (note: an actor called by a user may call another actor that then calls another actor, as many layers deep as the execution can afford to run for).
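To make the first route concrete, the following Go sketch shows roughly what an externally-signed message carries. The field names and helper types here are illustrative assumptions, loosely following the VM Message section later in this spec; they are not normative.

package main

// Illustrative sketch only: an external participant constructs an
// unsigned message, signs it with their account key, and submits it to
// the network, where a miner includes it in a block for a gas fee.

type TokenAmount uint64
type MethodNum uint64

type UnsignedMessage struct {
	To, From   string      // actor addresses (string form for brevity)
	CallSeqNum uint64      // the account nonce; increases by one per message
	Value      TokenAmount // FIL transferred along with the call
	Method     MethodNum   // index into the target actor's method table
	Params     []byte      // serialized method arguments
	GasPrice   TokenAmount // FIL paid per unit of gas
	GasLimit   uint64      // maximum gas this message may consume
}

type SignedMessage struct {
	Message   UnsignedMessage
	Signature []byte // must verify against the From account's key
}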

For full implementation details, see the VM subsystem.

Filecoin Spec Process (v1)

πŸš€ Pre-launch mode

Until we launch, we are making lots of changes to the spec to finish documenting the current version of the protocol. Changes will be made to the spec by a simple PR process, with approvals by key stakeholders. Some refinements are still to happen and testnet is expected to bring a few significant fixes/improvements. Most changes now are changing the document, NOT changing the protocol, at least not in a major way.

Until we launch, if something is missing, PR it in. If something is wrong, PR a fix. If something needs to be elaborated, PR in updates. What is in the top level of this repo, in master, is the spec; it is the Filecoin Protocol. Nothing else matters (i.e., no other documents or issues contain “the protocol”).

New Proposals -> Drafts -> Spec

⚠️ WARNING: Filecoin is in pre-launch mode, and we are finishing the protocol spec and implementations of the current construction/version of the protocol only. We are highly unlikely to merge anything new into the Filecoin Protocol until after mainnet. Feel free to explore ideas anyway and prepare improvements for the future.

For anything that is not part of the currently specified systems (like ‘repair’, for example), the process we will use is:

  • (1) First, discuss the problem(s) and solution(s) in an issue
    • Or several issues, if the space is large and multithreaded enough.
    • Work out all the details required to make this proposal work.
  • (2) Write a draft with all the details.
    • When you feel like a solution is near, write up a draft document that contains all the details, and includes what changes would need to happen to the spec
    • E.g. “Add a System called X with …”, or “Add a library called Y, …”, or “Modify vm/state_tree to include …”
    • Place this document inside the src/drafts/ directory.
    • Anybody is welcome to contribute well-reasoned and detailed drafts.
    • (Note: these drafts will give way to FIPs in the future)
  • (3) Seek approval to merge this into the specification.
    • To seek approval, open an issue and discuss it.
    • If the draft is approved by the owners of the filecoin-spec, then the changes to the spec will need to be made in a PR.
    • Once changes make it into the spec, remove the draft.

It is acceptable for a PR for a draft to stay open for quite a while, as thought and discussion on the topic happens. At some point, if the reviewers and the author feel that the current state of the draft is stable enough (though not ‘done’) then it should be merged into the repo. Further changes to the draft are additional PRs, which may generate more discussion. Comments on these drafts are welcome from anyone, but if you wish to be involved in the actual research process, you will need to devote very considerable time and energy to the process.

On merging

For anything in the drafts or notes folder, merge yourself after a review from a relevant person. For anything in the top level (canonical spec), @whyrusleeping or @jbenet will merge after proper review.

Issues

Issues in the specs repo will be high signal: they will either be proposals, or issues directly relating to problems in the spec. More speculative research questions and discussion will happen in the research repo.

About this specification

TODO

FIPs - Filecoin Improvement Proposals

TODO

Contributing to the Filecoin spec

TODO

Change Log - Version History

v1.1 - 2019-10-30 - c3f6a6dd

  • Deals on chain
    • Storage Deals
    • Full StorageMarketActor logic:
      • client and miner balances: deposits, locking, charges, and withdrawals
      • collateral slashing
    • Full StorageMinerActor logic:
      • sector states, state transitions, state accounting, power accounting
      • DeclareFaults + RecoverSectors flow
      • CommitSector flow
      • SubmitPost flow
        • Sector proving, faults, recovery, and expiry
      • OnMissedPost flow
        • Fault sectors, drop power, expiry, and more
    • StoragePowerActor
      • power accounting based on StorageMinerActor state changes
      • Collaterals: deposit, locking, withdrawal
      • Slashing collaterals
    • Interactive-Post
      • StorageMinerActor: PrecommitSector and CommitSector
    • Surprise-Post
      • Challenge flow through CronActor -> StoragePowerActor -> StorageMiner
  • Virtual Machine
    • Extracted VM system out of blockchain
    • Addresses
    • Actors
      • Separation of code and state
    • Messages
      • Method invocation representation
    • Runtime
      • Slimmed down interface
      • Safer state Acquire, Release, Commit flow
      • Exit codes
      • Full invocation flow
      • Safer recursive context construction
      • Error levels and handling
      • Detecting and handling out of gas errors
    • Interpreter
      • ApplyMessage
      • {Deduct,Deposit} -> Transfer - safer
      • Gas accounting
    • VM system actors
      • InitActor basic flow, plug into Runtime
      • CronActor full flow, static registry
    • AccountActor basic flow
  • Data Transfer
    • Full Data Transfer flows
      • push, pull, 1-RTT pull
    • protocol, data structures, interface
    • diagrams
  • blockchain/ChainSync:
    • first version of ChainSync protocol description
    • Includes protocol state machine description
    • Network bootstrap – connectivity and state
    • Progressive Block Validation
    • Progressive Block Propagation
  • Other
    • Spec section status indicators
    • Changelog

v1.0 - 2019-10-07 - 583b1d06

  • Full spec reorganization
  • Tooling
    • added a build system to compile tools
    • added diagramming tools (dot, mermaid, etc)
    • added dependency installation
    • added Orient to calculate protocol parameters
  • Content
    • filecoin_nodes
      • types - an overview of different filecoin node types
      • repository - local data-structure storage
      • network interface - connecting to libp2p
      • clock - a wall clock
    • files & data
      • file - basic representation of data
      • piece - representation of data to store in filecoin
    • blockchain
      • blocks - basic blockchain data structures (block, tipset, chain, etc)
      • storage power consensus - basic algorithms and crypto artifacts for SPC
      • StoragePowerActor basics
    • token
      • skeleton of sections
    • storage mining
      • storage miner: module that controls and coordinates storage mining
      • sector: unit of storage, sealing, crypto artifacts, etc.
      • sector index: accounting sectors and metadata
      • storage proving: seals, posts, and more
    • market
      • deals: storage market deal basics
      • storage market: StorageMarketActor basics
    • orient
      • orient models for proofs and block sizes
    • libraries
      • filcrypto - sealing, PoRep, PoSt algorithms
      • ipld - cids, ipldstores
      • libp2p - host/node representation
      • ipfs - graphsync and bitswap
      • multiformats - multihash, multiaddr
    • diagrams
      • system overview
      • full protocol mermaid flow

pre v1.0

System Decomposition

What are Systems? How do they work?

Filecoin decouples and modularizes functionality into loosely-joined systems. Each system adds significant functionality, usually to achieve a set of important and tightly related goals.

For example, the Blockchain System provides structures like Block, Tipset, and Chain, and provides functionality like Block Sync, Block Propagation, Block Validation, Chain Selection, and Chain Access. This is separated from the Files, Pieces, Piece Preparation, and Data Transfer. Both of these systems are separated from the Markets, which provide Orders, Deals, Market Visibility, and Deal Settlement.

Why is System decoupling useful?

This decoupling is useful for:

  • Implementation Boundaries: it is possible to build implementations of Filecoin that only implement a subset of systems. This is especially useful for Implementation Diversity: we want many implementations of security-critical systems (e.g., Blockchain), but do not need many implementations of Systems that can be decoupled.
  • Runtime Decoupling: system decoupling makes it easier to build and run Filecoin Nodes that isolate Systems into separate programs, and even separate physical computers.
  • Security Isolation: some systems require higher operational security than others. System decoupling allows implementations to meet their security and functionality needs. A good example of this is separating Blockchain processing from Data Transfer.
  • Scalability: systems and various use cases may drive different performance requirements for different operators. System decoupling makes it easier for operators to scale their deployments along system boundaries.

Filecoin Nodes don’t need all the systems

Filecoin Nodes vary significantly, and do not need all the systems. Most systems are only needed for a subset of use cases.

For example, the Blockchain System is required for synchronizing the chain, participating in secure consensus, storage mining, and chain validation. Many Filecoin Nodes do not need the chain and can perform their work by just fetching content from the latest StateTree, from a node they trust. Of course, such nodes give up trustless validation and must trust the node serving them the StateTree.

Note: Filecoin does not use the “full node” or “light client” terminology in wide use in Bitcoin and other blockchain networks. In Filecoin, these terms are not well defined. It is best to define nodes in terms of their capabilities, and therefore, in terms of the Systems they run. For example:

  • Chain Verifier Node: Runs the Blockchain system. Can sync and validate the chain. Cannot mine or produce blocks.
  • Client Node: Runs the Blockchain, Market, and Data Transfer systems. Can sync and validate the chain. Cannot mine or produce blocks.
  • Retrieval Miner Node: Runs the Market and Data Transfer systems. Does not need the chain. Can make Retrieval Deals (Retrieval Provider side). Can send Clients data, and get paid for it.
  • Storage Miner Node: Runs the Blockchain, Storage Market, Storage Mining systems. Can sync and validate the chain. Can make Storage Deals (Storage Provider side). Can seal stored data into sectors. Can acquire storage consensus power. Can mine and produce blocks.

Separating Systems

How do we determine what functionality belongs in one system vs another?

Drawing boundaries between systems is the art of separating tightly related functionality from unrelated parts. In a sense, we seek to keep tightly integrated components in the same system, and away from other unrelated components. This is sometimes straightforward: the boundaries spring naturally from the data structures or functionality. For example, it is straightforward to observe that Clients and Miners negotiating a deal with each other has very little to do with VM Execution.

Sometimes this is harder, and it requires detangling, adding, or removing abstractions. For example, the StoragePowerActor and the StorageMarketActor were previously a single Actor. This coupled functionality across StorageDeal making, the StorageMarket, and markets in general with Storage Mining, Sector Sealing, PoSt Generation, and more. Detangling these two sets of related functionality required breaking apart the one actor into two.

Decomposing within a System

Systems themselves decompose into smaller subunits. These are sometimes called “subsystems” to avoid confusion with the much larger, first-class Systems. Subsystems themselves may break down further. The naming here is not strictly enforced, as these subdivisions are more related to protocol and implementation engineering concerns than to user capabilities.

Implementing Systems

System Requirements

In order to make it easier to decouple functionality into systems, the Filecoin Protocol assumes a set of functionality available to all systems. This functionality can be achieved by implementations in a variety of ways; implementations should treat the guidance here as a recommendation (SHOULD).

All Systems, as defined in this document, require the following:

  • Repository:
    • Local IpldStore. Some amount of persistent local storage for data structures (small structured objects). Systems expect to be initialized with an IpldStore in which to store data structures they expect to persist across crashes.
    • User Configuration Values. A small amount of user-editable configuration values. These should be easy for end-users to access, view, and edit.
    • Local, Secure KeyStore. A facility used to generate and use cryptographic keys, which MUST remain secret to the Filecoin Node. Systems SHOULD NOT access the keys directly, but should do so over an abstraction (i.e., the KeyStore) which provides the ability to Encrypt, Decrypt, Sign, SigVerify, and more.
  • Local FileStore. Some amount of persistent local storage for files (large byte arrays). Systems expect to be initialized with a FileStore in which to store large files. Some systems (like Markets) may need to store and delete large volumes of smaller files (1MB - 10GB). Other systems (like Storage Mining) may need to store and delete large volumes of large files (1GB - 1TB).
  • Network. Most systems need access to the network, to be able to connect to their counterparts in other Filecoin Nodes. Systems expect to be initialized with a libp2p.Node on which they can mount their own protocols.
  • Clock. Some systems need access to current network time, some with low tolerance for drift. Systems expect to be initialized with a Clock from which to tell network time. Some systems (like Blockchain) require very little clock drift, and require secure time.

For this purpose, we use the FilecoinNode data structure, which is passed into all systems at initialization:

import repo "github.com/filecoin-project/specs/systems/filecoin_nodes/repository"
import filestore "github.com/filecoin-project/specs/systems/filecoin_files/file"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"

type FilecoinNode struct {
    Node        libp2p.Node

    Repository  repo.Repository
    FileStore   filestore.FileStore
    Clock       clock.UTCClock
}

System Limitations

Further, Systems MUST abide by the following limitations:

  • Random crashes. A Filecoin Node may crash at any moment. Systems must be secure and consistent through crashes. This is primarily achieved by limiting the use of persistent state, persisting such state through Ipld data structures, and through the use of initialization routines that check state, and perhaps correct errors.
  • Isolation. Systems must communicate over well-defined, isolated interfaces. They must not build their critical functionality over a shared memory space. (Note: for performance, shared memory abstractions can be used to power IpldStore, FileStore, and libp2p, but the systems themselves should not require it). This is not just an operational concern; it also significantly simplifies the protocol and makes it easier to understand, analyze, debug, and change.
  • No direct access to host OS Filesystem or Disk. Systems cannot access disks directly – they do so over the FileStore and IpldStore abstractions. This is to provide a high degree of portability and flexibility for end-users, especially storage miners and clients of large amounts of data, which need to be able to easily replace how their Filecoin Nodes access local storage.
  • No direct access to host OS Network stack or TCP/IP. Systems cannot access the network directly – they do so over the libp2p library. There must not be any other kind of network access. This provides a high degree of portability across platforms and network protocols, enabling Filecoin Nodes (and all their critical systems) to run in a wide variety of settings, using all kinds of protocols (e.g., Bluetooth, LANs, etc.).

Systems

Filecoin Nodes

Node Types

Node Interface

import repo "github.com/filecoin-project/specs/systems/filecoin_nodes/repository"
import filestore "github.com/filecoin-project/specs/systems/filecoin_files/file"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"

type FilecoinNode struct {
    Node        libp2p.Node

    Repository  repo.Repository
    FileStore   filestore.FileStore
    Clock       clock.UTCClock
}

Examples

There are many kinds of Filecoin Nodes …

This section should contain:

  • what all nodes must have, and why
  • examples of using different systems

Chain Verifier Node

type ChainVerifierNode interface {
  FilecoinNode

  systems.Blockchain
}

Client Node

type ClientNode struct {
  FilecoinNode

  systems.Blockchain
  markets.StorageMarketClient
  markets.RetrievalMarketClient
  markets.MarketOrderBook
  markets.DataTransfers
}

Storage Miner Node

type StorageMinerNode interface {
  FilecoinNode

  systems.Blockchain
  systems.Mining
  markets.StorageMarketProvider
  markets.MarketOrderBook
  markets.DataTransfers
}

Retrieval Miner Node

type RetrievalMinerNode interface {
  FilecoinNode

  blockchain.Blockchain
  markets.RetrievalMarketProvider
  markets.MarketOrderBook
  markets.DataTransfers
}

Relayer Node

type RelayerNode interface {
  FilecoinNode

  blockchain.MessagePool
  markets.MarketOrderBook
}

Repository - Local Storage for Chain Data and Systems

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import key "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/key"
import config "github.com/filecoin-project/specs/systems/filecoin_nodes/repository/config"

type Repository struct {
    config          config.Config
    ipldStore       ipld.Store
    keyStore        key.Store

    // CreateRepository(config Config, ipldStore IPLDDagStore, keyStore KeyStore) &Repository
    GetIPLDStore()  ipld.Store
    GetKeyStore()   key.Store
    GetConfig()     config.Config
}

Config - Local Storage for Configuration Values

Filecoin Node configuration

type ConfigKey string
type ConfigVal Bytes

type Config struct {
    Get(k ConfigKey) union {c ConfigVal, e error}
    Put(k ConfigKey, v ConfigVal) error

    Subconfig(k ConfigKey) Config
}
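As an illustration, a minimal in-memory realization of this interface in Go might look like the following sketch. The prefix-based Subconfig is one possible design choice, not mandated by this spec; a real implementation would persist values through the Repository.

package config

import "fmt"

// memConfig is a minimal in-memory sketch of the Config interface above.
type memConfig struct {
	prefix string            // key prefix for this (sub)config view
	kv     map[string][]byte // shared backing map
}

func (c *memConfig) Get(k string) ([]byte, error) {
	v, ok := c.kv[c.prefix+k]
	if !ok {
		return nil, fmt.Errorf("config key not found: %s%s", c.prefix, k)
	}
	return v, nil
}

func (c *memConfig) Put(k string, v []byte) error {
	c.kv[c.prefix+k] = v
	return nil
}

// Subconfig returns a view whose keys all share the given prefix.
func (c *memConfig) Subconfig(k string) *memConfig {
	return &memConfig{prefix: c.prefix + k + "/", kv: c.kv}
}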

KeyStore & user keys

type Key struct {
    //  Algo Algorithm
    Data Bytes
}

// key.Name
type Name string

// key.Store
// TODO: redo this providing access to enc, dec, sign, sigverify operations, and not the keys.
type Store struct {
    Put(n Name, key Key) error
    Get(n Name) union {k Key, e error}
    //  Sign(n Name, data Bytes) Signature
}

type Algorithm union {
    Sig SignatureAlgorithm
}

type SignatureAlgoC struct {
    Sign(b Bytes) union {s Signature, e error}
    Verify(b Bytes, s Signature) union {b bool, e error}
}

type EdDSASignatureAlgorithm SignatureAlgoC
type Secp256k1SignatureAlgorithm SignatureAlgoC
type BLSAggregateSignatureAlgorithm SignatureAlgoC

type SignatureAlgorithm union {
    EdDSASigAlgo      EdDSASignatureAlgorithm
    Secp256k1SigAlgo  Secp256k1SignatureAlgorithm
    BLSSigAlgo        BLSAggregateSignatureAlgorithm
}

type Signature struct {
    Algo           SignatureAlgorithm
    Data           Bytes

    Verify(k Key)  union {b bool, e error}
}
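The TODO above suggests exposing operations rather than raw keys. A hedged Go sketch of what such an operations-oriented store could look like follows; the interface name and method shapes are illustrative, not normative.

package key

// Signature pairs an algorithm tag with signature bytes, mirroring the
// Signature type above in simplified form.
type Signature struct {
	Algo string // e.g. "secp256k1", "bls", "eddsa"
	Data []byte
}

// OpStore is a sketch of a store where callers request operations by key
// name and never see the key material itself.
type OpStore interface {
	// Sign signs data with the named key.
	Sign(name string, data []byte) (Signature, error)

	// SigVerify verifies sig over data against the named key.
	SigVerify(name string, data []byte, sig Signature) (bool, error)

	// Encrypt and Decrypt, for keys that support encryption.
	Encrypt(name string, plaintext []byte) ([]byte, error)
	Decrypt(name string, ciphertext []byte) ([]byte, error)
}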

IpldStore - Local Storage for hash-linked data

type Store GraphStore

// imported as ipld.Object
type Object interface {
    CID() CID

    // Populate(v interface{}) error
}

TODO:

  • What is IPLD
    • hash linked data
    • from IPFS
  • Why is it relevant to filecoin
    • all network datastructures are definitively IPLD
    • all local datastructures can be IPLD
  • What is an IpldStore
    • local storage of dags
  • How to use IpldStores in filecoin
    • pass it around
  • One ipldstore or many
    • temporary caches
    • intermediately computed state
  • Garbage Collection

Usage in Systems

TODO:

  • Explain how repo is used with systems and subsystems
  • compartmentalized local storage
  • store ipld datastructures of stateful objects

Network Interface

import libp2p "github.com/filecoin-project/specs/libraries/libp2p"

type Node libp2p.Node

Filecoin nodes use libp2p protocols for peer discovery, peer routing, message multicast, and so on. Libp2p is a set of modular protocols common to the peer-to-peer networking stack. Nodes open connections with one another and mount different protocols or streams over the same connection. In the initial handshake, nodes exchange the protocols that each of them supports, and all Filecoin-related protocols will be mounted under /filecoin/... protocol identifiers.

Here is the list of libp2p protocols used by Filecoin.

  • Graphsync: TODO
  • Bitswap: TODO
  • Gossipsub: block headers and messages are broadcast through a Gossip PubSub protocol where nodes can subscribe to topics such as NewBlock, BlockHeader, BlockMessage, etc. and receive messages in those topics. When a node receives a message related to a topic, it processes the message and forwards it to its peers that have also subscribed to the same topic (see the sketch after this list).
  • Kad-DHT: Kademlia DHT is a distributed hash table with a logarithmic bound on the number of lookups needed to locate a particular node. Kad DHT is used primarily for peer routing as well as peer discovery in the Filecoin protocol.
  • Bootstrap: Bootstrap is a list of nodes that a new node attempts to connect to upon joining the network. The list of bootstrap nodes and their addresses is defined by the users.
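As a concrete example of the Gossipsub usage above, the following Go sketch subscribes to a block-announcement topic using the go-libp2p-pubsub library. The topic name is illustrative; the library calls reflect that library's API, not a requirement of this spec.

package main

import (
	"context"

	"github.com/libp2p/go-libp2p-core/host"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// subscribeBlocks joins a block-announcement topic over Gossipsub and
// processes incoming messages; Gossipsub forwards each message to peers
// subscribed to the same topic.
func subscribeBlocks(ctx context.Context, h host.Host) error {
	ps, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		return err
	}
	sub, err := ps.Subscribe("/fil/blocks") // illustrative topic name
	if err != nil {
		return err
	}
	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			return err
		}
		// Validate and process the block header or message here.
		_ = msg.Data
	}
}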

Clock

type Time string  // ISO nano timestamp
type UnixTime int64  // unix timestamp
type UnixTimeNano int64  // unix timestamp in nanoseconds

// UTCClock is a normal, system clock reporting UTC time.
// It should be kept in sync, with drift less than 1 second.
type UTCClock struct {
    NowUTC()          Time
    NowUTCUnix()      UnixTime
    NowUTCUnixNano()  UnixTimeNano
}

// ChainEpoch represents a round of a blockchain protocol.
type ChainEpoch UVarint

// ChainEpochClock is a clock that represents epochs of the protocol.
type ChainEpochClock struct {
    // GenesisTime is the time of the first block. EpochClock counts
    // up from there.
    GenesisTime          Time

    EpochAtTime(t Time)  ChainEpoch
}
package clock

import "time"

// UTCMaxDrift is how large the allowable drift is in Filecoin's use of UTC time.
var UTCMaxDrift = time.Second

// UTCSyncPeriod notes how often to sync the UTC clock with an authoritative
// source, such as NTP, or a very precise hardware clock.
var UTCSyncPeriod = time.Hour

// EpochDuration is a constant that represents the UTC time duration
// of a blockchain epoch.
var EpochDuration = time.Second * 15

// ISOFormat is the ISO timestamp format we use, in Go time package notation.
var ISOFormat = "2006-01-02T15:04:05.999999999Z"

func (_ *UTCClock_I) NowUTC() Time {
	return Time(time.Now().UTC().Format(ISOFormat))
}

func (_ *UTCClock_I) NowUTCUnix() UnixTime {
	return UnixTime(time.Now().Unix())
}

func (_ *UTCClock_I) NowUTCUnixNano() UnixTimeNano {
	return UnixTimeNano(time.Now().UnixNano())
}

// EpochAtTime returns the ChainEpoch corresponding to t.
// It first subtracts GenesisTime, then divides by EpochDuration
// and returns the resulting number of epochs.
func (c *ChainEpochClock_I) EpochAtTime(t Time) ChainEpoch {
	g1 := c.GenesisTime()
	g2, err := time.Parse(ISOFormat, string(g1))
	if err != nil {
		// an implementation should probably not panic here
		// this is for simplicity of the spec
		panic(err)
	}

	t2, err := time.Parse(ISOFormat, string(t))
	if err != nil {
		panic(err)
	}

	difference := t2.Sub(g2)
	epochs := difference / EpochDuration
	return ChainEpoch(epochs)
}

Filecoin assumes weak clock synchrony amongst participants in the system. That is, the system relies on participants having access to a globally synchronized clock, tolerating bounded drift in honest clocks that is lower than the epoch time (more on this in a forthcoming paper).

Filecoin relies on this system clock in order to secure consensus, specifically ensuring that participants run leader elections only once per epoch, and enabling miners to catch such deviations from the protocol. Given a network start time and an epoch duration set by the genesis block, the system clock allows miners to associate epochs with wall clock time, thereby enabling them to reason about block validity and giving the protocol liveness.

Clock uses

Specifically, the Filecoin system clock is used:

  • to validate incoming blocks and ensure they were mined in the appropriate round, looking at the wall clock time in conjunction with the block’s ElectionProof (which contains the epoch number) (see Secret Leader Election and Block Validation).
  • to help protocol convergence by giving miners a specific cutoff after which to reject incoming blocks in this round (see ChainSync - synchronizing the Blockchain).
  • to maintain protocol liveness by allowing participants to try leader election in the next round if no one has produced a block in this round (see Storage Power Consensus).

In order to allow miners to do the above, the system clock must:

  1. have low clock drift: at most on the order of 1s (i.e. markedly lower than epoch time) at any given time.
  2. maintain accurate network time over many epochs: resyncing and enforcing accurate network time.
  3. set epoch number on client initialization equal to epoch ~= (current_time - genesis_time) / epoch_time

It is expected that other subsystems will register for a NewRound() event from the clock subsystem, for example as sketched below.
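The event API itself is not specified; the following Go sketch shows one hypothetical channel-based shape for such a subscription.

package clock

// watchRounds is a hypothetical consumer of per-epoch notifications: the
// clock subsystem sends one ChainEpoch per round, and a subsystem such as
// block production reacts to each.
func watchRounds(rounds <-chan ChainEpoch, tryLeaderElection func(ChainEpoch)) {
	for epoch := range rounds {
		tryLeaderElection(epoch) // e.g. run leader election once per epoch
	}
}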

Clock Requirements

Computer-grade clock crystals can be expected to have drift rates on the order of 1ppm (i.e., 1 microsecond per second, or about 0.6 seconds per week). Therefore, in order to respect the first requirement above:

  • clients SHOULD query an NTP server (pool.ntp.org is recommended) on an hourly basis to adjust clock skew (a sketch follows this list).
  • clients MAY consider using cesium clocks instead for accurate synchrony within larger mining operations
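A sketch of the recommended hourly resync follows, reusing UTCSyncPeriod and UTCMaxDrift from the clock package above. The ntpOffset parameter stands in for a real NTP client query, which this spec does not prescribe.

package clock

import (
	"context"
	"time"
)

// resyncLoop periodically measures the local clock's offset against an
// NTP source and flags drift beyond UTCMaxDrift.
func resyncLoop(ctx context.Context, ntpOffset func(server string) (time.Duration, error)) {
	ticker := time.NewTicker(UTCSyncPeriod) // hourly, per the recommendation above
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			offset, err := ntpOffset("pool.ntp.org")
			if err != nil {
				continue // transient failure; retry next period
			}
			if offset > UTCMaxDrift || offset < -UTCMaxDrift {
				// Adjust the local clock, or alert the operator.
			}
		}
	}
}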

Assuming a majority of rational participants, the above should lead to relatively low skew over time, with clock skew seldom exceeding 10-20% of epoch time before being rectified periodically by the network, as is the case in other blockchain networks. This assumption can be tested over time by ensuring that:

  • (real-time) epoch time is as dictated by the protocol
  • (historical) the current epoch number is as expected

Future work

If either of the above metrics show significant network skew over time, future versions of Filecoin may include potential timestamp/epoch correction periods at regular intervals.

More generally, future versions of the Filecoin protocol will use Verifiable Delay Functions (VDFs) to strongly enforce block time and fulfill this leader election requirement; we choose to explicitly assume clock synchrony until hardware VDF security has been proven more extensively.

Key Store

The Key Store is a fundamental abstraction in any full Filecoin node, used to store the keypairs associated with a given miner’s address and distinct workers (should the miner choose to run multiple workers).

Node security depends in large part on keeping these keys secure. To that end, we recommend keeping keys separate from any given subsystem, using a separate key store to sign requests as required by subsystems, and keeping keys not used as part of mining in cold storage.

import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import address "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type KeyStore struct {
    ownerAddress  address.Address
    ownerKey      filcrypto.SigKeyPair
    Workers       [Worker]

    // SignWithKey(key, sigType, input)
}

type Worker struct {
    Address     address.Address
    VRFKeyPair  filcrypto.VRFKeyPair
}

TODO:

  • describe the different types of keys used in the protocol and their usage
  • clean interfaces for getting signatures for full filecoin mining cycles
  • potential recommendations or clear disclaimers with regard to the consequences of failed key security
  • protocol for changing worker keys in filecoin

Files & Data

Filecoin’s primary aim is to store clients’ Files and Data. This section details data structures and tooling related to working with files, chunking, encoding, graph representations, Pieces, storage abstractions, and more.

File

// Path is an opaque locator for a file (e.g. in a unix-style filesystem).
type Path string

// File is a variable length data container.
// The File interface is modeled after a unix-style file, but abstracts the
// underlying storage system.
type File interface {
    Path()   Path
    Size()   int
    Close()  error

    // Read reads from File into buf, starting at offset, and for size bytes.
    Read(offset int, size int, buf Bytes) struct {size int, e error}

    // Write writes from buf into File, starting at offset, and for size bytes.
    Write(offset int, size int, buf Bytes) struct {size int, e error}
}

FileStore - Local Storage for Files

The FileStore is an abstraction used to refer to any underlying system or device that Filecoin will store its data to. It is based on Unix filesystem semantics, and includes the notion of Paths. This abstraction is here in order to make sure Filecoin implementations make it easy for end-users to replace the underlying storage system with whatever suits their needs. The simplest version of FileStore is just the host operating system’s file system.

// FileStore is an object that can store and retrieve files by path.
type FileStore struct {
    Open(p Path)           union {f File, e error}
    Create(p Path)         union {f File, e error}
    Store(p Path, f File)  error
    Delete(p Path)         error

    // maybe add:
    // Copy(SrcPath, DstPath)
}
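For example, the simplest FileStore, backed by the host operating system's filesystem, might look like the following Go sketch; the struct and rooting scheme are illustrative, not normative.

package filestore

import (
	"os"
	"path/filepath"
)

// osFileStore is a thin wrapper over the host OS filesystem, resolving
// every Path relative to a single root directory.
type osFileStore struct {
	root string
}

func (s *osFileStore) Open(p string) (*os.File, error) {
	return os.Open(filepath.Join(s.root, p))
}

func (s *osFileStore) Create(p string) (*os.File, error) {
	return os.Create(filepath.Join(s.root, p))
}

func (s *osFileStore) Delete(p string) error {
	return os.Remove(filepath.Join(s.root, p))
}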
Varying user needs

Filecoin user needs vary significantly, and many users – especially miners – will implement complex storage architectures underneath and around Filecoin. The FileStore abstraction is here to make these varying needs easy to satisfy. All file and sector local data storage in the Filecoin Protocol is defined in terms of this FileStore interface, which makes implementations easy to swap out, and allows end-users to substitute their storage system of choice.

Implementation examples

The FileStore interface may be implemented by many kinds of backing data storage systems. For example:

  • The host Operating System file system
  • Any Unix/Posix file system
  • RAID-backed file systems
  • Networked or distributed file systems (NFS, HDFS, etc.)
  • IPFS
  • Databases
  • NAS systems
  • Raw serial or block devices
  • Raw hard drives (hdd sectors, etc)

Implementations SHOULD implement support for the host OS file system. Implementations MAY implement support for other storage systems.

Piece - a part of a file

A Piece is an object that represents a whole or part of a File, and is used by Clients and Miners in Deals. Clients hire Miners to store Pieces.

import ipld "github.com/filecoin-project/specs/libraries/ipld"

// PieceCID is the main reference to pieces in Filecoin. It is the CID
// of the Piece.
type PieceCID ipld.CID

type NumBytes UVarint  // TODO: move into util

// PieceSize is the size of a piece, in bytes
type PieceSize struct {
    PayloadSize   NumBytes
    OverheadSize  NumBytes

    Total()       NumBytes
}

// PieceInfo is an object that describes details about a piece, and allows
// decoupling storage of this information from the piece itself.
type PieceInfo struct {
    ID    PieceID
    Size  PieceSize
    // TODO: store which algorithms were used to construct this piece.
}

// Piece represents the basic unit of tradeable data in Filecoin. Clients
// break files and data up into Pieces, maybe apply some transformations,
// and then hire Miners to store the Pieces.
//
// The kinds of transformations that may occur include erasure coding,
// encryption, and more.
//
// Note: pieces are well formed.
type Piece struct {
    Info       PieceInfo

    // tree is the internal representation of Piece. It is a tree
    // formed according to a sequence of algorithms, which make the
    // piece able to be verified.
    tree       PieceTree

    // Payload is the user's data.
    Payload()  Bytes

    // Data returns the serialized representation of the Piece.
    // It includes the payload data, and intermediate tree objects,
    // formed according to relevant storage algorithms.
    Data()     Bytes
}

// // LocalPieceRef is an object used to refer to pieces in local storage.
// // This is used by subsystems to store and locate pieces.
// type LocalPieceRef struct {
//   ID   PieceID
//   Path file.Path
// }

// PieceTree is a data structure used to form pieces. The algorithms involved
// in the storage proofs determine the shape of PieceTree and how it must be
// constructed.
//
// Usually, a node in PieceTree will include either Children or Data, but not
// both.
//
// TODO: move this into filproofs -- use a tree from there, as that's where
// the algorithms are defined. Or keep this as an interface, met by others.
type PieceTree struct {
    Children  [PieceTree]
    Data      Bytes
}

PieceStore - storing and indexing pieces

A PieceStore is an object that can store and retrieve pieces from some local storage. The PieceStore additionally keeps an index of pieces.

import ipld "github.com/filecoin-project/specs/libraries/ipld"

type PieceID UVarint

// PieceStore is an object that stores pieces into some local storage.
// it is internally backed by an IpldStore.
type PieceStore struct {
    Store              ipld.Store
    Index              {PieceID: Piece}

    Get(i PieceID)     struct {p Piece, e error}
    Put(p Piece)       error
    Delete(i PieceID)  error
}
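A minimal in-memory sketch of the PieceStore above follows; it simplifies the interface (indexing raw bytes by PieceID) and omits the IpldStore backing that a real implementation would use for persistence.

package piece

import "fmt"

type PieceID uint64

// memPieceStore keeps piece data in a map keyed by PieceID. Metadata
// (PieceInfo) handling is elided for brevity.
type memPieceStore struct {
	index map[PieceID][]byte
}

func (s *memPieceStore) Get(id PieceID) ([]byte, error) {
	p, ok := s.index[id]
	if !ok {
		return nil, fmt.Errorf("piece %d not found", id)
	}
	return p, nil
}

func (s *memPieceStore) Put(id PieceID, data []byte) error {
	s.index[id] = data
	return nil
}

func (s *memPieceStore) Delete(id PieceID) error {
	delete(s.index, id)
	return nil
}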

Data Transfer in Filecoin

Data Transfer is a system for transferring all or part of a Piece across the network when a deal is made.

Modules

This diagram shows how Data Transfer and its modules fit into the picture with the Storage and Retrieval Markets. In particular, note how the Data Transfer Request Validators from the markets are plugged into the Data Transfer module, but their code belongs in the Markets system.

Data Transfer - Push Flow

Terminology

  • Push Request: A request to send data to the other party
  • Pull Request: A request to have the other party send data
  • Requestor: The party that initiates the data transfer request (whether Push or Pull)
  • Responder: The party that receives the data transfer request
  • Data Transfer Voucher: A wrapper around storage or retrieval data that can identify and validate the transfer request to the other party
  • Request Validator: The data transfer module only initiates a transfer when the responder can validate that the request is tied directly to either an existing storage deal or retrieval deal. Validation is not performed by the data transfer module itself. Instead, a request validator inspects the data transfer voucher to determine whether to respond to the request.
  • Scheduler: Once a request is negotiated and validated, actual transfer is managed by a scheduler on both sides. The scheduler is part of the data transfer module but is isolated from the negotiation process. It has access to an underlying verifiable transport protocol and uses it to send data and track progress.
  • Subscriber: An external component that monitors progress of a data transfer by subscribing to data transfer events, such as progress or completion.
  • GraphSync: The default underlying transfer protocol used by the Scheduler. The full graphsync specification can be found at https://github.com/ipld/specs/blob/master/block-layer/graphsync/graphsync.md

Request Phases

There are two basic phases to any data transfer:

  1. Negotiation - the requestor and responder agree to the transfer by validating the data transfer voucher
  2. Transfer - once both parties have negotiated and agreed, the data is actually transferred. The default protocol used for the transfer is Graphsync

Note that the Negotiation and Transfer stages can occur in separate round trips, or potentially the same round trip, where the requesting party implicitly agrees by sending the request, and the responding party can agree and immediately send or receive data.

Example Flows

Push Flow
Data Transfer - Push Flow
  1. A requestor initiates a Push transfer when it wants to send data to another party.
  2. The requestor’s data transfer module will send a push request to the responder along with the data transfer voucher. It also puts the data transfer in the scheduler queue, meaning it expects the responder to initiate a transfer once the request is verified
  3. The responder’s data transfer module validates the data transfer request via the Validator provided as a dependency by the responder
  4. The responder’s data transfer module schedules the transfer
  5. The responder makes a GraphSync request for the data
  6. The requestor receives the graphsync request, verifies it’s in the scheduler and begins sending data
  7. The responder receives data and can produce an indication of progress
  8. The responder completes receiving data, and notifies any listeners

The push flow is ideal for storage deals, where the client initiates the push once it verifies that the deal is signed and on chain.
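In terms of the interfaces defined under Data Structures below, the client side of this flow reduces to a single call. The following fragment is a sketch: minerPeer, signedDeal, pieceCID, and wholePieceSelector are assumed inputs, and error handling is elided.

// The client initiates the push once the deal is signed and on chain.
voucher := StorageDealVoucher{deal: signedDeal}
chid := dataTransfers.OpenPushDataChannel(
	minerPeer,          // responder, which will fetch the data via GraphSync
	voucher,            // ties this transfer to the on-chain deal
	pieceCID,           // base CID of the piece being transferred
	wholePieceSelector, // IPLD selector covering the whole piece
)
// Registered Subscribers then receive Open, Progress, and Complete
// events for chid as the responder pulls the data.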

Pull Flow
Data Transfer - Pull Flow
  1. A requestor initiates a Pull transfer when it wants to receive data from another party.
  2. The requestor’s data transfer module will send a pull request to the responder along with the data transfer voucher.
  3. The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder
  4. The responder’s data transfer module schedules the transfer (meaning it is expecting the requestor to initiate the actual transfer)
  5. The responder’s data transfer module sends a response to the requestor saying it has accepted the transfer and is waiting for the requestor to initiate the transfer
  6. The requestor schedules the data transfer
  7. The requestor makes a GraphSync request for the data
  8. The responder receives the graphsync request, verifies it’s in the scheduler and begins sending data
  9. The requestor receives data and can produce an indication of progress
  10. The requestor completes receiving data, and notifies any listeners

The pull flow is ideal for retrieval deals, where the client initiates the pull when the deal is agreed upon.

Alternate Pull Flow - Single Round Trip

Data Transfer - Single Round Trip Pull Flow
  1. A requestor initiates a Pull transfer when it wants to receive data from another party.
  2. The requestor’s data transfer module schedules the data transfer
  3. The requestor makes a Graphsync request to the responder with a data transfer request
  4. The responder receives the graphsync request, and forwards the data transfer request to the data transfer module
  5. The requestor’s data transfer module will send a pull request to the responder along with the data transfer voucher.
  6. The responder’s data transfer module validates the data transfer request via a PullValidator provided as a dependency by the responder
  7. The responder’s data transfer module schedules the transfer
  8. The responder sends a graphsync response with a data transfer acceptance response piggybacked on it
  9. The requestor receives data and can produce an indication of progress
  10. The requestor completes receiving data, and notifies any listeners

Protocol

A data transfer CAN be negotiated over the network via the Data Transfer Protocol, a libp2p protocol.

A Pull request expects a response. The requestor does not initiate the transfer until they know the request is accepted.

The responder should send a response to a push request as well, so that the requestor can release its resources if the request is not accepted. However, if the responder accepts the request, they can immediately initiate the transfer.

Using the Data Transfer Protocol as an independent libp2p communication mechanism is not a hard requirement – as long as both parties have an implementation of the Data Transfer Subsystem that can talk to the other, any transport mechanism (including offline mechanisms) is acceptable.

Data Structures

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"

import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"

type StorageDeal struct {}
type RetrievalDeal struct {}

// A DataTransferVoucher is used to validate
// a data transfer request against the underlying storage or retrieval deal
// that precipitated it
type DataTransferVoucher union {
    StorageDealVoucher
    RetrievalDealVoucher
}

type StorageDealVoucher struct {
    deal StorageDeal
}

type RetrievalDealVoucher struct {
    deal RetrievalDeal
}

type Ongoing struct {}
type Completed struct {}
type Failed struct {}
type ChannelNotFoundError struct {}

type DataTransferStatus union {
    Ongoing
    Completed
    Failed
    ChannelNotFoundError
}

type TransferID UInt

type ChannelID struct {
    to libp2p.PeerID
    id TransferID
}

// All immutable data for a channel
type DataTransferChannel struct {
    // an identifier for this channel shared by requestor and responder, set by the requestor through the protocol
    transferID  TransferID
    // base CID for the piece being transferred
    PieceRef    ipld.CID
    // portion of Piece to return, specified by an IPLD selector
    Selector    ipld.Selector
    // used to verify this channel
    voucher     DataTransferVoucher
    // the party that is sending the data (not who initiated the request)
    sender      libp2p.PeerID
    // the party that is receiving the data (not who initiated the request)
    recipient   libp2p.PeerID
    // expected amount of data to be transferred
    totalSize   UVarint
}

// DataTransferState is immutable channel data plus mutable state
type DataTransferState struct @(mutable) {
    DataTransferChannel
    // total bytes sent from this node (0 if receiver)
    sent                 UVarint
    // total bytes received by this node (0 if sender)
    received             UVarint
}

type Open struct {}
type Progress struct {}
type Error struct {}
type Complete struct {}

type DataTransferEvent union {
    Open
    Progress
    Error
    Complete
}

type DataTransferSubscriber struct {
    OnEvent(event DataTransferEvent, channelState DataTransferState)
}

// RequestValidator is an interface implemented by the client of the data transfer module to validate requests
type RequestValidator struct {
    ValidatePush(
        sender    libp2p.PeerID
        voucher   DataTransferVoucher
        PieceRef  ipld.CID
        Selector  ipld.Selector
    )
    ValidatePull(
        receiver  libp2p.PeerID
        voucher   DataTransferVoucher
        PieceRef  ipld.CID
        Selector  ipld.Selector
    )
}

type DataTransferSubsystem struct @(mutable) {
    host              libp2p.Node
    dataTransfers     {ChannelID: DataTransferState}
    requestValidator  RequestValidator
    pieceStore        piece.PieceStore

    // open a data transfer that will send data to the recipient peer and
    // transfer parts of the piece that match the selector
    OpenPushDataChannel(
        to        libp2p.PeerID
        voucher   DataTransferVoucher
        PieceRef  ipld.CID
        Selector  ipld.Selector
    ) ChannelID

    // open a data transfer that will request data from the sending peer and
    // transfer parts of the piece that match the selector
    OpenPullDataChannel(
        to        libp2p.PeerID
        voucher   DataTransferVoucher
        PieceRef  ipld.CID
        Selector  ipld.Selector
    ) ChannelID

    // close an open channel (effectively a cancel)
    CloseDataTransferChannel(x ChannelID)

    // get status of a transfer
    TransferChannelStatus(x ChannelID) DataTransferStatus

    // get notified when certain types of events happen
    SubscribeToEvents(subscriber DataTransferSubscriber)

    // get all in progress transfers
    InProgressChannels() {ChannelID: DataTransferState}
}

VM - Virtual Machine

import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"

// VM is the object that controls execution.
// It is a stateless, pure function. It uses no local storage.
//
// TODO: make it just a function: VMExec(...) ?
type VM struct {
    // Execute computes and returns outTree, a new StateTree which is the
    // application of msgs to inTree.
    //
    // *Important:* Execute is intended to be a pure function, with no side-effects.
    // however, storage of the new parts of the computed outTree may exist in
    // local storage.
    //
    // *TODO:* define whether this should take 0, 1, or 2 IpldStores:
    // - (): storage of IPLD datastructures is assumed implicit
    // - (store): get and put to same IpldStore
    // - (inStore, outStore): get from inStore, put new structures into outStore
    //
    // This decision impacts callers, and potentially impacts how we reason about
    // local storage, and intermediate storage. It is definitely the case that
    // implementations may want to operate on this differently, depending on
    // how their IpldStores work.
    Execute(inTree st.StateTree, msgs [msg.Message]) union {outTree st.StateTree, err error}
}
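A hedged sketch of how Execute might be structured internally follows, folding each message into the state tree via an ApplyMessage step (per the Interpreter outline in the changelog). The types and the ApplyMessage signature here are stand-ins; gas accounting, receipts, and miner rewards are elided.

package vm

// StateTree and Message stand in for the imported spec types.
type StateTree interface{}
type Message interface{}

// ApplyMessage checks the sender's balance and CallSeqNum, charges gas,
// and invokes the target actor's method. A message whose execution fails
// still charges gas but otherwise leaves state unchanged.
var ApplyMessage func(tree StateTree, m Message) (StateTree, error)

// Execute folds msgs into inTree one at a time, returning the new tree.
func Execute(inTree StateTree, msgs []Message) (StateTree, error) {
	tree := inTree
	for _, m := range msgs {
		next, err := ApplyMessage(tree, m)
		if err != nil {
			// An unprocessable message invalidates the whole batch;
			// return with the original tree unchanged.
			return inTree, err
		}
		tree = next
	}
	return tree, nil
}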

VM Actor Interface

// This contains actor things that are _outside_ of VM exection.
// The VM uses this to execute actors.

import ipld "github.com/filecoin-project/specs/libraries/ipld"

// TokenAmount is an amount of Filecoin tokens. This type is used within
// the VM in message execution, to account movement of tokens, payment
// of VM gas, and more.
type TokenAmount UVarint  // TODO: bigint or varint?

// MethodNum is an integer that represents a particular method
// in an actor's function table. These numbers are used to compress
// invocation of actor code, and to decouple human language concerns
// about method names from the ability to uniquely refer to a particular
// method.
//
// Consider MethodNum numbers to be similar in concerns as for
// offsets in function tables (in programming languages), and for
// tags in ProtocolBuffer fields. Tags in ProtocolBuffers recommend
// assigning a unique tag to a field and never reusing that tag.
// If a field is no longer used, the field name may change but should
// still remain defined in the code to ensure the tag number is not
// reused accidentally. The same should apply to the MethodNum
// associated with methods in Filecoin VM Actors.
type MethodNum UVarint

// MethodParams is an array of objects to pass into a method. This
// is the list of arguments/parameters.
//
// TODO: serialized or deserialized? (serialized for now)
// TODO: force CIDs or by value is fine?
type MethodParam Bytes
type MethodParams [MethodParam]

// CallSeqNum is an invocation (Call) sequence (Seq) number (Num).
// This is a value used for securing against replay attacks:
// each AccountActor (user) invocation must have a unique CallSeqNum
// value. The sequenctiality of the numbers is used to make it
// easy to verify, and to order messages.
//
// Q&A
// - > Does it have to be sequential?
//   No, a random nonce could work against replay attacks, but
//   making it sequential makes it much easier to verify.
// - > Can it be used to order events?
//   Yes, a user may submit N separate messages with increasing
//   sequence number, causing them to execute in order.
//
type CallSeqNum UVarint

// Code is a serialized object that contains the code for an Actor.
// Until we accept external user-provided contracts, this is the
// serialized code for the actor in the Filecoin Specification.
type Code Bytes

// CodeCID represents a CID for a Code object.
type CodeCID ipld.CID

// Actor is a base computation object in the Filecoin VM. Similar
// to Actors in the Actor Model (programming), or Objects in Object-
// Oriented Programming, or Ethereum Contracts in the EVM.
//
// Questions for id language:
// - we should not do inheritance, we should do composition.
//   but we should make including actor state nicer.
//
// TODO: do we need this type? what is the difference between this and
// ActorState
type Actor struct {
    State ActorState
}

// ActorState represents the on-chain storage actors keep.
type ActorState struct {
    // common fields for all actors

    CodeCID
    // use a CID here, load it in interpreter.
    // Alternative is to use ActorState here but tricky w/ the type system
    State       ActorSubstateCID

    Balance     TokenAmount
    CallSeqNum  // FKA Nonce
}

// ActorID is a sequential number assigned to actors in a Filecoin Chain.
// ActorIDs are assigned by the InitActor, when an Actor is introduced into
// the Runtime.
type ActorID UVarint

type StateCID ipld.CID

type ActorSubstateCID ipld.CID

// ActorState represents the on-chain storage actors keep. This type is a
// union of concrete types, for each of the Actors:
// - InitActor
// - CronActor
// - AccountActor
// - PaymentChannelActor
// - StoragePowerActor
// - StorageMinerActor
// - StorageMarketActor
//
// TODO: move this into a directory inside the VM that patches in all
// the actors from across the system. this will be where we declare/mount
// all actors in the VM.
// type ActorState union {
//     Init struct {
//         AddressMap  {addr.Address: ActorID}
//         NextID      ActorID
//     }
// }
package actor

import ipld "github.com/filecoin-project/specs/libraries/ipld"

const (
	MethodSend        = MethodNum(0)
	MethodConstructor = MethodNum(1)
	MethodCron        = MethodNum(2)
)

func (st *ActorState_I) CID() ipld.CID {
	panic("TODO")
}

Address

// Address is defined here because this is where addresses start to make sense.
// Addresses refer to actors defined in the StateTree, so Addresses are defined
// on top of the StateTree.
//
// TODO: potentially move into a library, or its own directory.
type Address struct {
    NetworkID enum {
        Testnet
        Mainnet
    }

    Type enum {
        ID
        Secp256k1
        Actor
        BLS
    }

    VerifySyntax(addrType Address_Type) bool
    String() AddressString
    IsKeyType() bool
}

type AddressString string

State Tree

The State Tree is the output of applying operations on the Filecoin Blockchain.

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"

// Epoch is an epoch in time in the StateTree.
// It corresponds to rounds in the blockchain, but it
// is defined here because actors need a notion of time.
type Epoch UVarint

// TODO: move this into a directory w/ all the actors + states
type ActorName enum {
    StoragePowerActor
    StorageMarketActor
    StorageMinerActor
    PaymentChannelActor
    InitActor
    AccountActor
    CronActor
}

type StateTree struct {
    SystemActors              {ActorName: addr.Address}
    ActorStates               {addr.Address: actor.ActorState}

    // TODO: API ConvenienceAPI

    // TODO: rename to GetActorState and change return type?
    GetActor(a addr.Address)  actor.Actor

    // Balance returns the balance of a given actor
    Balance(a addr.Address)   actor.TokenAmount
}

TODO

  • Add ConvenienceAPI state to provide more user-friendly views.
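For intuition, here is a minimal, hypothetical map-backed sketch of the two lookups above (using the addr and actor imports of this file). The method names follow the TODO above; real implementations are expected to back ActorStates with an IPLD store rather than a plain map.

// Hypothetical map-backed sketch of the StateTree lookups above.
type exampleStateTree struct {
	ActorStates map[addr.Address]actor.ActorState
}

// GetActorState returns the state recorded for an actor's address.
func (t *exampleStateTree) GetActorState(a addr.Address) actor.ActorState {
	return t.ActorStates[a]
}

// Balance returns the balance of a given actor.
func (t *exampleStateTree) Balance(a addr.Address) actor.TokenAmount {
	return t.ActorStates[a].Balance()
}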

VM Message - Actor Method Invocation

import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"

// GasAmount is a quantity of gas.
type GasAmount UVarint

// GasPrice is the Gas-to-FIL exchange rate: the cost, in FIL, of one unit of gas.
type GasPrice actor.TokenAmount

type Message union {
    UnsignedMessage
    SignedMessage
}  // representation keyed

type UnsignedMessage struct {
    From        addr.Address
    To          addr.Address

    Method      actor.MethodNum
    Params      actor.MethodParams  // Serialized parameters to the method.

    // When receiving a message from a user account the nonce in the message must match
    // the expected nonce in the "from" actor. This prevents replay attacks.
    CallSeqNum  actor.CallSeqNum
    Value       actor.TokenAmount

    GasPrice
    GasLimit    GasAmount
}  // representation tuple

type SignedMessage struct {
    Message    UnsignedMessage
    Signature  filcrypto.Signature
}  // representation tuple

type InvocInput struct {
    To      addr.Address
    Method  actor.MethodNum
    Params  actor.MethodParams
    Value   actor.TokenAmount
}

type InvocOutput struct {
    ExitCode     exitcode.ExitCode
    ReturnValue  Bytes
}

type MessageReceipt struct {
    ExitCode     exitcode.ExitCode
    ReturnValue  Bytes
    GasUsed      GasAmount
}  // representation tuple
package message

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import util "github.com/filecoin-project/specs/util"

func MessageReceipt_Make(output InvocOutput, gasUsed GasAmount) MessageReceipt {
	return &MessageReceipt_I{
		ExitCode_:    output.ExitCode(),
		ReturnValue_: output.ReturnValue(),
		GasUsed_:     gasUsed,
	}
}

func (x GasAmount) Add(y GasAmount) GasAmount {
	panic("TODO")
}

func (x GasAmount) Subtract(y GasAmount) GasAmount {
	panic("TODO")
}

func (x GasAmount) LessThan(y GasAmount) bool {
	panic("TODO")
}

func GasAmount_Zero() GasAmount {
	panic("TODO")
}

func InvocInput_Make(to addr.Address, method actor.MethodNum, params actor.MethodParams, value actor.TokenAmount) InvocInput {
	return &InvocInput_I{
		To_:     to,
		Method_: method,
		Params_: params,
		Value_:  value,
	}
}

func InvocOutput_Make(exitCode exitcode.ExitCode, returnValue util.Bytes) InvocOutput {
	return &InvocOutput_I{
		ExitCode_:    exitCode,
		ReturnValue_: returnValue,
	}
}

func MessageReceipt_MakeSystemError(errCode exitcode.SystemErrorCode, gasUsed GasAmount) MessageReceipt {
	return MessageReceipt_Make(
		InvocOutput_Make(exitcode.SystemError(errCode), nil),
		gasUsed,
	)
}

VM Runtime Environment (Inside the VM)

vm/runtime interface

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

// Randomness is a string of random bytes
type Randomness Bytes

// Runtime is the VM's internal runtime object.
// this is everything that is accessible to actors, beyond parameters.
// Formerly known as vm.Context
type Runtime interface {
    CurrEpoch() block.ChainEpoch

    // Randomness returns a (pseudo)random stream (indexed by offset) for the current epoch.
    Randomness(e block.ChainEpoch, offset UInt) Randomness

    Caller() addr.Address
    ValidateCallerIs(caller addr.Address)
    ValidateCallerMatches(CallerPattern)

    AcquireState()      ActorStateHandle

    SuccessReturn()     msg.InvocOutput
    ValueReturn(Bytes)  msg.InvocOutput
    ErrorReturn(exitCode exitcode.ExitCode) msg.InvocOutput

    // Throw an error indicating a failure condition has occurred, from which the given actor
    // code is unable to recover. If an error is thrown in actor code, and not handled by any
    // of its callers, then the VM will not apply the state transition.
    //
    // Note: this should only be used for exceptional conditions, such as inconsistent state
    // values or precondition violations. Operations that may fail during normal execution
    // should use error return values, not call this method.
    Abort(string)

    // Check that the given condition is true (and call Abort if not).
    Assert(bool)

    CurrentBalance()  actor.TokenAmount
    ValueSupplied()   actor.TokenAmount

    // Send allows the current execution context to invoke methods on other actors in the system.
    SendPropagatingErrors(input msg.InvocInput) msg.InvocOutput
    SendCatchingErrors(input msg.InvocInput) msg.InvocOutput

    // Create an actor in the state tree. May only be called by InitActor.
    CreateActor(
        cid                actor.StateCID
        a                  addr.Address
        constructorParams  actor.MethodParams
    )

    IpldGet(c ipld.CID) union {Bytes, error}
    IpldPut(x ipld.Object) ipld.CID
}
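For orientation, here is a hypothetical actor method written against this interface (the actor type and its behavior are illustrative only, not part of the spec's actor set). Note that it validates its caller exactly once, as the runtime enforces.

// Illustrative only: a trivial actor method using the Runtime interface.
type ExampleActorCode struct{}

func (a *ExampleActorCode) Greet(rt Runtime) msg.InvocOutput {
	// Every method must validate its caller exactly once.
	rt.ValidateCallerMatches(CallerPattern{Matches: func(addr.Address) bool { return true }})
	return rt.ValueReturn([]byte("hello"))
}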

vm/runtime implementation

package runtime

import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import util "github.com/filecoin-project/specs/util"

type ActorSubstateCID = actor.ActorSubstateCID
type InvocInput = msg.InvocInput
type InvocOutput = msg.InvocOutput
type ExitCode = exitcode.ExitCode
type RuntimeError = exitcode.RuntimeError

var EnsureErrorCode = exitcode.EnsureErrorCode
var SystemError = exitcode.SystemError
var TODO = util.TODO

func ActorSubstateCID_Equals(x, y ActorSubstateCID) bool {
	panic("TODO")
}

// ActorCode is the interface that all actor code types should satisfy.
// It is merely a method dispatch interface.
type ActorCode interface {
	InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput
}

type ActorStateHandle struct {
	_initValue *ActorSubstateCID
	_rt        *VMContext
}

func (h *ActorStateHandle) UpdateRelease(newStateCID ActorSubstateCID) {
	h._rt._updateReleaseActorState(newStateCID)
}

func (h *ActorStateHandle) Release(checkStateCID ActorSubstateCID) {
	h._rt._releaseActorState(checkStateCID)
}

func (h *ActorStateHandle) Take() ActorSubstateCID {
	if h._initValue == nil {
		h._rt._apiError("Must call Take() only once on actor substate object")
	}
	ret := *h._initValue
	h._initValue = nil
	return ret
}
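The intended calling discipline for this handle, shown as a hedged sketch (the load/mutate step is elided): Take the substate CID exactly once, then finish with either Release (state unchanged, checked against the current CID) or UpdateRelease (commit a new CID).

// Illustrative sketch of the ActorStateHandle discipline.
func exampleStateUpdate(rt *VMContext) {
	h := rt.AcquireState()
	stateCID := h.Take() // must be called exactly once per acquisition
	// ... load the substate via IpldGet, mutate it, write it via IpldPut ...
	newStateCID := stateCID // stand-in for the CID of the mutated substate
	h.UpdateRelease(newStateCID) // or: h.Release(stateCID) if nothing changed
}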

// Concrete instantiation of the Runtime interface. This should be instantiated by the
// interpreter once per actor method invocation, and responds to that method's Runtime
// API calls.
type VMContext struct {
	_globalStateInit        st.StateTree
	_globalStatePending     st.StateTree
	_running                bool
	_actorAddress           addr.Address
	_actorStateAcquired     bool
	_actorStateAcquiredInit actor.ActorSubstateCID

	_valueSupplied    actor.TokenAmount
	_gasRemaining     msg.GasAmount
	_numValidateCalls int
	_output           msg.InvocOutput
}

func VMContext_Make(
	globalState st.StateTree,
	actorAddress addr.Address,
	valueSupplied actor.TokenAmount,
	gasRemaining msg.GasAmount) *VMContext {

	actorStateInit := globalState.GetActor(actorAddress).State()

	return &VMContext{
		_globalStateInit:        globalState,
		_globalStatePending:     globalState,
		_running:                false,
		_actorAddress:           actorAddress,
		_actorStateAcquired:     false,
		_actorStateAcquiredInit: actorStateInit.State(),

		_valueSupplied:    valueSupplied,
		_gasRemaining:     gasRemaining,
		_numValidateCalls: 0,
		_output:           nil,
	}
}

func _generateActorAddress(creator addr.Address, nonce actor.CallSeqNum) addr.Address {
	// _generateActorAddress computes the address of the contract,
	// based on the creator (invoking address) and nonce given.
	// TODO: why is this needed? -- InitActor
	// TODO: this has to be the origin call. and it's broken: could yield the same address
	//       need a truly unique way to assign an address.
	panic("TODO")
}

func (rt *VMContext) CreateActor(stateCID actor.StateCID, address addr.Address, constructorParams actor.MethodParams) Runtime_CreateActor_FunRet {
	rt.ValidateCallerIs(addr.InitActorAddr)

	// TODO: set actor state in global states
	// rt._globalStatePending.ActorStates()[address] = stateCID

	// TODO: call constructor
	// TODO: can constructors fail?
	// TODO: maybe do this directly form InitActor, and only do the StateTree.ActorStates() updating here?
	rt.SendPropagatingErrors(&msg.InvocInput_I{
		To_:     address,
		Method_: actor.MethodConstructor,
		Params_: constructorParams,
		Value_:  rt.ValueSupplied(),
	})

	// TODO: finish
	panic("TODO")
}

func (rt *VMContext) _updateReleaseActorState(newStateCID ActorSubstateCID) {
	rt._checkRunning()
	rt._checkActorStateAcquired()
	newGlobalStatePending, err := rt._globalStatePending.Impl().WithActorState(rt._actorAddress, newStateCID)
	if err != nil {
		panic("Error in runtime implementation: failed to update actor state")
	}
	rt._globalStatePending = newGlobalStatePending
	rt._actorStateAcquired = false
}

func (rt *VMContext) _releaseActorState(checkStateCID ActorSubstateCID) {
	rt._checkRunning()
	rt._checkActorStateAcquired()

	prevState := rt._globalStatePending.GetActor(rt._actorAddress).State()
	prevStateCID := prevState.State()
	if !ActorSubstateCID_Equals(prevStateCID, checkStateCID) {
		rt.Abort("State CID differs upon release call")
	}

	rt._actorStateAcquired = false
}

func (rt *VMContext) Assert(cond bool) Runtime_Assert_FunRet {
	if !cond {
		rt.Abort("Runtime check failed")
	}
	return &Runtime_Assert_FunRet_I{}
}

func (rt *VMContext) _checkActorStateAcquired() {
	if !rt._running {
		panic("Error in runtime implementation: actor interface invoked without running actor")
	}

	if !rt._actorStateAcquired {
		rt.Abort("Actor state not acquired")
	}
}

func (rt *VMContext) Abort(errMsg string) Runtime_Abort_FunRet {
	rt._throwErrorFull(exitcode.SystemError(exitcode.MethodAbort), errMsg)
	return &Runtime_Abort_FunRet_I{}
}

func (rt *VMContext) Caller() addr.Address {
	panic("TODO")
}

func (rt *VMContext) ValidateCallerMatches(callerExpectedPattern CallerPattern) Runtime_ValidateCallerMatches_FunRet {
	rt._checkRunning()
	rt._checkNumValidateCalls(0)
	caller := rt.Caller()
	if !callerExpectedPattern.Matches(caller) {
		rt.Abort("Method invoked by incorrect caller")
	}
	rt._numValidateCalls += 1
	return &Runtime_ValidateCallerMatches_FunRet_I{}
}

type CallerPattern struct {
	Matches func(addr.Address) bool
}

func CallerPattern_MakeSingleton(x addr.Address) CallerPattern {
	return CallerPattern{
		Matches: func(y addr.Address) bool {
			return x == y
		},
	}
}

func (rt *VMContext) ValidateCallerIs(callerExpected addr.Address) Runtime_ValidateCallerIs_FunRet {
	rt.ValidateCallerMatches(CallerPattern_MakeSingleton(callerExpected))
	return &Runtime_ValidateCallerIs_FunRet_I{}
}

func (rt *VMContext) _checkNumValidateCalls(x int) {
	if rt._numValidateCalls != x {
		rt.Abort("Method must validate caller identity exactly once")
	}
}

func (rt *VMContext) _checkRunning() {
	if !rt._running {
		panic("Internal runtime error: actor API called with no actor code running")
	}
}
func (rt *VMContext) SuccessReturn() InvocOutput {
	return msg.InvocOutput_Make(exitcode.OK(), nil)
}

func (rt *VMContext) ValueReturn(value util.Bytes) InvocOutput {
	return msg.InvocOutput_Make(exitcode.OK(), value)
}

func (rt *VMContext) ErrorReturn(exitCode ExitCode) InvocOutput {
	exitCode = exitcode.EnsureErrorCode(exitCode)
	return msg.InvocOutput_Make(exitCode, nil)
}

func (rt *VMContext) _throwError(exitCode ExitCode) {
	rt._throwErrorFull(exitCode, "")
}

func (rt *VMContext) _throwErrorFull(exitCode ExitCode, errMsg string) {
	panic(exitcode.RuntimeError_Make(exitCode, errMsg))
}

func (rt *VMContext) _apiError(errMsg string) {
	rt._throwErrorFull(exitcode.SystemError(exitcode.RuntimeAPIError), errMsg)
}

func (rt *VMContext) _checkStateLock(expected bool) {
	if rt._actorStateAcquired != expected {
		rt._apiError("State update and message send blocks must be disjoint")
	}
}

func (rt *VMContext) _checkGasRemaining() {
	if rt._gasRemaining.LessThan(msg.GasAmount_Zero()) {
		rt._throwError(exitcode.SystemError(exitcode.OutOfGas))
	}
}

func (rt *VMContext) _deductGasRemaining(x msg.GasAmount) {
	// TODO: check x >= 0
	rt._checkGasRemaining()
	rt._gasRemaining = rt._gasRemaining.Subtract(x)
	rt._checkGasRemaining()
}

func (rt *VMContext) _refundGasRemaining(x msg.GasAmount) {
	// TODO: check x >= 0
	rt._checkGasRemaining()
	rt._gasRemaining = rt._gasRemaining.Add(x)
	rt._checkGasRemaining()
}

func (rt *VMContext) _transferFunds(from addr.Address, to addr.Address, amount actor.TokenAmount) error {
	rt._checkRunning()
	rt._checkStateLock(false)

	newGlobalStatePending, err := rt._globalStatePending.Impl().WithFundsTransfer(from, to, amount)
	if err != nil {
		return err
	}

	rt._globalStatePending = newGlobalStatePending
	return nil
}

// TODO: This function should be private (not intended to be exposed to actors).
// (merging runtime and interpreter packages should solve this)
func (rt *VMContext) SendToplevelFromInterpreter(input InvocInput, catchErrors bool) (
	msg.MessageReceipt, st.StateTree) {

	rt._running = true
	ret := rt._sendInternal(input, catchErrors)
	rt._running = false
	return ret, rt._globalStatePending
}

func _catchRuntimeErrors(f func() msg.InvocOutput) (output msg.InvocOutput) {
	defer func() {
		if r := recover(); r != nil {
			switch r.(type) {
			case *RuntimeError:
				output = msg.InvocOutput_Make(EnsureErrorCode(r.(*RuntimeError).ExitCode), nil)
			default:
				output = msg.InvocOutput_Make(SystemError(exitcode.MethodPanic), nil)
			}
		}
	}()

	output = f()
	return
}

func _invokeMethodInternal(
	rt *VMContext,
	actorCode ActorCode,
	method actor.MethodNum,
	params actor.MethodParams) (ret InvocOutput, gasUsed msg.GasAmount) {

	if method == actor.MethodSend {
		ret = msg.InvocOutput_Make(exitcode.OK(), nil)
		gasUsed = msg.GasAmount_Zero() // TODO: verify
		return
	}

	rt._running = true
	ret = _catchRuntimeErrors(func() InvocOutput {
		methodOutput := actorCode.InvokeMethod(rt, method, params)
		rt._checkStateLock(false)
		rt._checkNumValidateCalls(1)
		return methodOutput
	})
	rt._running = false

	// TODO: Update gasUsed
	TODO()

	return
}

func (rtOuter *VMContext) _sendInternal(input InvocInput, catchErrors bool) msg.MessageReceipt {
	rtOuter._checkRunning()
	rtOuter._checkStateLock(false)

	toActor := rtOuter._globalStatePending.GetActor(input.To()).State()

	toActorCode, err := loadActorCode(toActor.CodeCID())
	if err != nil {
		rtOuter._throwError(exitcode.SystemError(exitcode.ActorCodeNotFound))
	}

	var toActorMethodGasBound msg.GasAmount
	TODO() // TODO: obtain from actor registry
	rtOuter._deductGasRemaining(toActorMethodGasBound)
	// TODO: gasUsed may be larger than toActorMethodGasBound if toActor itself makes sub-calls.
	// To prevent this, we would need to calculate the gas bounds recursively.

	err = rtOuter._transferFunds(rtOuter._actorAddress, input.To(), input.Value())
	if err != nil {
		rtOuter._throwError(exitcode.SystemError(exitcode.InsufficientFunds))
	}

	rtInner := VMContext_Make(
		rtOuter._globalStatePending,
		input.To(),
		input.Value(),
		rtOuter._gasRemaining,
	)

	invocOutput, gasUsed := _invokeMethodInternal(
		rtInner,
		toActorCode,
		input.Method(),
		input.Params(),
	)

	rtOuter._refundGasRemaining(toActorMethodGasBound)
	rtOuter._deductGasRemaining(gasUsed)

	if !catchErrors && invocOutput.ExitCode().IsError() {
		rtOuter._throwError(exitcode.SystemError(exitcode.MethodSubcallError))
	}

	if invocOutput.ExitCode().AllowsStateUpdate() {
		rtOuter._globalStatePending = rtInner._globalStatePending
	}

	return msg.MessageReceipt_Make(invocOutput, gasUsed)
}

func (rtOuter *VMContext) _sendInternalOutputOnly(input InvocInput, catchErrors bool) msg.InvocOutput {
	ret := rtOuter._sendInternal(input, catchErrors)
	return &msg.InvocOutput_I{
		ExitCode_:    ret.ExitCode(),
		ReturnValue_: ret.ReturnValue(),
	}
}

func (rt *VMContext) SendPropagatingErrors(input InvocInput) msg.InvocOutput {
	return rt._sendInternalOutputOnly(input, false)
}

func (rt *VMContext) SendCatchingErrors(input InvocInput) msg.InvocOutput {
	return rt._sendInternalOutputOnly(input, true)
}

func (rt *VMContext) CurrentBalance() actor.TokenAmount {
	panic("TODO")
}

func (rt *VMContext) ValueSupplied() actor.TokenAmount {
	return rt._valueSupplied
}

func (rt *VMContext) Randomness(e block.ChainEpoch, offset uint64) Randomness {
	// TODO: validate CurrEpoch() - K <= e <= CurrEpoch()?
	// TODO: finish
	panic("TODO")
}

func (rt *VMContext) IpldPut(x ipld.Object) ipld.CID {
	panic("TODO")
}

func (rt *VMContext) IpldGet(c ipld.CID) Runtime_IpldGet_FunRet {
	panic("TODO")
}

func (rt *VMContext) CurrEpoch() block.ChainEpoch {
	panic("TODO")
}

func (rt *VMContext) AcquireState() ActorStateHandle {
	panic("TODO")
}

Code Loading

package runtime

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"

func loadActorCode(codeCID actor.CodeCID) (ActorCode, error) {

	panic("TODO")
	// TODO: resolve circular dependency

	// // load the code from StateTree.
	// // TODO: this is going to be enabled in the future.
	// // code, err := loadCodeFromStateTree(input.InTree, codeCID)
	// return staticActorCodeRegistry.LoadActor(codeCID)
}

VM Exit Code Constants

type ExitCode union {
    IsSuccess()          bool
    IsError()            bool
    AllowsStateUpdate()  bool

    Success              struct {}
    SystemError          SystemErrorCode
    UserDefinedError     UVarint
}
package exitcode

import util "github.com/filecoin-project/specs/util"

import (
	"fmt"
)

type SystemErrorCode util.UVarint

// TODO: assign all of these.
var (
	// // OK is the success return value, similar to unix exit code 0.
	// OK = SystemErrorCode(0)

	// ActorNotFound represents a failure to find an actor.
	ActorNotFound = SystemErrorCode(1)

	// ActorCodeNotFound represents a failure to find the code for a
	// particular actor in the VM registry.
	ActorCodeNotFound = SystemErrorCode(2)

	// InvalidMethod represents a failure to find a method in
	// an actor
	InvalidMethod = SystemErrorCode(3)

	// InsufficientFunds represents a failure to apply a message, as
	// it did not carry sufficient funds for its application.
	InsufficientFunds = SystemErrorCode(4)

	// InvalidCallSeqNum represents a message invocation out of sequence.
	// This happens when message.CallSeqNum is not exactly actor.CallSeqNum + 1
	InvalidCallSeqNum = SystemErrorCode(5)

	// OutOfGas is returned when the execution of an actor method
	// (including its subcalls) uses more gas than initially allocated.
	OutOfGas = SystemErrorCode(6)

	// RuntimeAPIError is returned when an actor method invocation makes a call
	// to the runtime that does not satisfy its preconditions.
	RuntimeAPIError = SystemErrorCode(7)

	// MethodAbort is returned when an actor method invocation calls rt.Abort.
	MethodAbort = SystemErrorCode(8)

	// MethodPanic is returned when the runtime intercepts a panic within
	// an actor method invocation (not via rt.Abort).
	MethodPanic = SystemErrorCode(9)

	// MethodSubcallError is returned when an actor method's Send call has
	// returned with a failure error code (and the Send call did not specify
	// to ignore errors).
	MethodSubcallError = SystemErrorCode(10)
)

func OK() ExitCode {
	return ExitCode_Make_Success(&ExitCode_Success_I{})
}

func SystemError(x SystemErrorCode) ExitCode {
	return ExitCode_Make_SystemError(ExitCode_SystemError(x))
}

func (x *ExitCode_I) IsSuccess() bool {
	return x.Which() == ExitCode_Case_Success
}

func (x *ExitCode_I) IsError() bool {
	return !x.IsSuccess()
}

func (x *ExitCode_I) AllowsStateUpdate() bool {
	// TODO: Confirm whether this is the desired behavior

	// return x.IsSuccess() || x.Which() == ExitCode_Case_UserDefinedError
	return x.IsSuccess()
}

func EnsureErrorCode(x ExitCode) ExitCode {
	if !x.IsError() {
		// Throwing an error with a non-error exit code is itself an error
		x = SystemError(RuntimeAPIError)
	}
	return x
}

type RuntimeError struct {
	ExitCode ExitCode
	ErrMsg   string
}

func (x *RuntimeError) String() string {
	ret := fmt.Sprintf("Runtime error: %v", x.ExitCode)
	if x.ErrMsg != "" {
		ret += fmt.Sprintf(" (\"%v\")", x.ErrMsg)
	}
	return ret
}

func RuntimeError_Make(exitCode ExitCode, errMsg string) *RuntimeError {
	exitCode = EnsureErrorCode(exitCode)
	return &RuntimeError{
		ExitCode: exitCode,
		ErrMsg:   errMsg,
	}
}

func UserDefinedError(e util.UVarint) ExitCode {
	return ExitCode_Make_UserDefinedError(ExitCode_UserDefinedError(e))
}

VM Gas Cost Constants

package runtime

import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

// TODO: assign all of these.
var (
	// SimpleValueSend is the amount of gas charged for sending value from one
	// contract to another, without executing any other code.
	SimpleValueSend = msg.GasAmount(1)

	// // ActorLookupFail is the amount of gas charged for a failure to lookup
	// // an actor
	// ActorLookupFail = msg.GasAmount(1)

	// CodeLookupFail is the amount of gas charged for a failure to lookup
	// code in the VM's code registry.
	CodeLookupFail = msg.GasAmount(1)

	// ApplyMessageFail represents the gas cost for failures to apply a message.
	// These failures are basic failures encountered at first application.
	ApplyMessageFail = msg.GasAmount(1)
)

System Actors

  • There are two system actors required for VM processing: the InitActor and the CronActor.
  • There is one more VM-level actor: the AccountActor.

InitActor

(You can see the old InitActor here.)

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

type InitActorState struct {
    // responsible for creating new actors
    AddressMap       {addr.Address: actor.ActorID}
    IDMap            {actor.ActorID: addr.Address}
    NextID           actor.ActorID

    _assignNextID()  actor.ActorID
}

type InitActorCode struct {
    Constructor(r vmr.Runtime)
    Exec(r vmr.Runtime, code actor.CodeCID, params actor.MethodParams) addr.Address
    GetActorIDForAddress(r vmr.Runtime, address addr.Address) actor.ActorID
}
package sysactors

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import util "github.com/filecoin-project/specs/util"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import ipld "github.com/filecoin-project/specs/libraries/ipld"

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////
type InvocOutput = msg.InvocOutput
type Runtime = vmr.Runtime
type Bytes = util.Bytes

func (a *InitActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, InitActorState) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.Abort("IPLD lookup error")
	}
	state := DeserializeState(stateBytes.As_Bytes())
	return h, state
}
func Release(rt Runtime, h vmr.ActorStateHandle, st InitActorState) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st InitActorState) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *InitActorState_I) CID() ipld.CID {
	panic("TODO")
}
func DeserializeState(x Bytes) InitActorState {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

func (a *InitActorCode_I) Constructor(rt Runtime) InvocOutput {
	panic("TODO")
}

func (a *InitActorCode_I) Exec(rt Runtime, codeCID actor.CodeCID, constructorParams actor.MethodParams) InvocOutput {

	// TODO: update

	// Make sure that only the actors defined in the spec can be launched.
	if !a._isBuiltinActor(codeCID) {
		rt.Abort("cannot launch actor instance that is not a builtin actor")
	}

	// Ensure that singletons can only be launched once.
	// TODO: do we want to enforce this? yes
	//       If so how should actors be marked as such? in the method below (statically)
	if a._isSingletonActor(codeCID) {
		rt.Abort("cannot launch another actor of this type")
	}

	// Get the actor ID for this actor.
	h, st := a.State(rt)
	actorID := st._assignNextID()

	// This generates a unique address for this actor that is stable across message
	// reordering
	// TODO: where do `creator` and `nonce` come from?
	// TODO: CallSeqNum is not related to From -- it's related to Origin
	// addr := rt.ComputeActorAddress(rt.Invocation().FromActor(), rt.Invocation().CallSeqNum())
	addr := a._computeNewAddress(rt, actorID)

	initBalance := rt.ValueSupplied()

	// Set up the actor itself
	actorState := &actor.ActorState_I{
		CodeCID_: codeCID,
		// State_:   nil, // TODO: do we need to init the state? probably not
		Balance_:    initBalance,
		CallSeqNum_: 0,
	}

	stateCid := actor.StateCID(rt.IpldPut(actorState))

	// runtime.State().Storage().Set(actorID, actor)

	// Store the mappings of address to actor ID.
	st.AddressMap()[addr] = actorID
	st.IDMap()[actorID] = addr

	// TODO: adjust this to be proper state setting.
	UpdateRelease(rt, h, st)

	// TODO: can this fail?
	rt.CreateActor(stateCid, addr, constructorParams)

	return rt.ValueReturn([]byte(addr.String()))
}

func (s *InitActorState_I) _assignNextID() actor.ActorID {
	actorID := s.NextID_
	s.NextID_++
	return actorID
}

func (_ *InitActorCode_I) _computeNewAddress(rt Runtime, id actor.ActorID) addr.Address {
	// assign an address based on some randomness
	// we use the current epoch, and the actor id. this should be a unique identifier,
	// stable across reorgs.
	//
	// TODO: do we really need this? it's pretty confusing...
	r := rt.Randomness(rt.CurrEpoch(), uint64(id))

	_ = r // TODO: use r in a
	// a := &addr.Address_Type_Actor_I{}
	// n := &addr.Address_NetworkID_Testnet_I{}
	// return addr.MakeAddress(n, a)
	panic("TODO")
	return nil
}

func (a *InitActorCode_I) GetActorIDForAddress(rt Runtime, address addr.Address) InvocOutput {
	h, st := a.State(rt)
	s := st.AddressMap()[address]
	Release(rt, h, st)
	// return rt.ValueReturn(s)
	// TODO
	_ = s
	return rt.ValueReturn(nil)
}

// TODO: derive this OR from a union type
func (_ *InitActorCode_I) _isSingletonActor(codeCID actor.CodeCID) bool {
	return true
	// TODO: uncomment this
	// return codeCID == StorageMarketActor ||
	// 	codeCID == StoragePowerActor ||
	// 	codeCID == CronActor ||
	// 	codeCID == InitActor
}

// TODO: derive this OR from a union type
func (_ *InitActorCode_I) _isBuiltinActor(codeCID actor.CodeCID) bool {
	return true
	// TODO: uncomment this
	// return codeCID == StorageMarketActor ||
	// 	codeCID == StoragePowerActor ||
	// 	codeCID == CronActor ||
	// 	codeCID == InitActor ||
	// 	codeCID == StorageMinerActor ||
	// 	codeCID == PaymentChannelActor
}

// TODO
func (a *InitActorCode_I) InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	// TODO: load state
	// var state InitActorState
	// storage := input.Runtime().State().Storage()
	// err := loadActorState(storage, input.ToActor().State(), &state)

	switch method {
	// case 0: -- disable: value send
	case 1:
		return a.Constructor(rt)
	// case 2: -- disable: cron. init has no cron action
	case 3:
		var codeid actor.CodeCID // TODO: cast params[0]
		params = params[1:]
		return a.Exec(rt, codeid, params)
	case 4:
		var address addr.Address // TODO: cast params[0]
		return a.GetActorIDForAddress(rt, address)
	default:
		return rt.ErrorReturn(exitcode.SystemError(exitcode.InvalidMethod))
	}
}

CronActor

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

type CronActorState struct {
    // Cron has no internal state
}

type CronActorCode struct {
    // actors is a set of actors to call during EpochTick.
    // This can be done a bunch of ways. We do it this way here to make it easy to add
    // a handler to Cron elsewhere in the spec code. How to do this is implementation
    // specific.
    Actors [addr.Address]

    // EpochTick executes built-in periodic actions, run at every Epoch.
    // EpochTick(r) is called after all other messages in the epoch have been applied.
    // This can be seen as an implicit last message.
    EpochTick(r vmr.Runtime)
}
package sysactors

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

func (a *CronActorCode_I) Constructor(rt vmr.Runtime) {
	// Nothing. intentionally left blank.
}

func (a *CronActorCode_I) EpochTick(rt vmr.Runtime) InvocOutput {
	// Hook period actions in here.

	// a.actors is basically a static registry for now, loaded
	// in the interpreter static registry.
	for _, a := range a.Actors() {
		rt.SendCatchingErrors(&msg.InvocInput_I{
			To_:     a,
			Method_: actor.MethodCron,
			Params_: []actor.MethodParam{},
			Value_:  actor.TokenAmount(0),
		})
	}

	return rt.SuccessReturn()
}

func (a *CronActorCode_I) InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	switch method {
	case actor.MethodCron:
		rt.Assert(len(params) == 0)
		return a.EpochTick(rt)
	default:
		return rt.ErrorReturn(exitcode.SystemError(exitcode.InvalidMethod))
	}
}

AccountActor

(You can see the old AccountActor here.)

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

type AccountActorCode struct {
    VerifySignature(rt vmr.Runtime, sig filcrypto.Signature) InvocOutput
}

type AccountActorState struct {
    // normal keypair backed accounts
    Address addr.Address
}
package sysactors

import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
import ipld "github.com/filecoin-project/specs/libraries/ipld"

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////

func (a *AccountActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, AccountActorState) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.Abort("IPLD lookup error")
	}
	state := AccDeserializeState(stateBytes.As_Bytes())
	return h, state
}

func AccRelease(rt Runtime, h vmr.ActorStateHandle, st AccountActorState) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func AccUpdateRelease(rt Runtime, h vmr.ActorStateHandle, st AccountActorState) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *AccountActorState_I) CID() ipld.CID {
	panic("TODO")
}
func AccDeserializeState(x Bytes) AccountActorState {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

func (a *AccountActorCode_I) Constructor(rt vmr.Runtime) {
	// Nothing. intentionally left blank.
}

func (a *AccountActorCode_I) VerifySignature(rt vmr.Runtime, sig filcrypto.Signature) InvocOutput {
	panic("TODO")
}

func (a *AccountActorCode_I) InvokeMethod(rt vmr.Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	switch method {
	case 3:
		var sig filcrypto.Signature // TODO: params[0]
		return a.VerifySignature(rt, sig)
	default:
		return rt.ErrorReturn(exitcode.SystemError(exitcode.InvalidMethod))
	}
}

VM Interpreter - Message Invocation (Outside VM)

(You can see the old VM interpreter here.)

vm/interpreter interface

import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type MessageRef struct {
    Message  msg.UnsignedMessage
    Miner    addr.Address
}

type VMInterpreter struct {
    ApplyMessageBatch(inTree st.StateTree, msgs [MessageRef]) struct {outTree st.StateTree, ret [msg.MessageReceipt]}
    ApplyMessage(inTree st.StateTree, msg msg.Message, minerAddr addr.Address) struct {outTree st.StateTree, ret msg.MessageReceipt}
}

vm/interpreter implementation

package interpreter

import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
import gascost "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/gascost"
import util "github.com/filecoin-project/specs/util"

var TODO = util.TODO

func (vmi *VMInterpreter_I) ApplyMessageBatch(inTree st.StateTree, msgs []MessageRef) (outTree st.StateTree, ret []msg.MessageReceipt) {
	compTree := inTree
	for _, m := range msgs {
		oT, r := vmi.ApplyMessage(compTree, m.Message(), m.Miner())
		compTree = oT        // assign the current tree. (this call always succeeds)
		ret = append(ret, r) // add message receipt
	}
	return compTree, ret
}

func _applyError(errCode exitcode.SystemErrorCode) msg.MessageReceipt {
	// TODO: should this gasUsed value be zero?
	// If nonzero, there is not guaranteed to be a nonzero gas balance from which to deduct it.
	gasUsed := gascost.ApplyMessageFail
	TODO()
	return msg.MessageReceipt_MakeSystemError(errCode, gasUsed)
}

func _withTransferFundsAssert(tree st.StateTree, from addr.Address, to addr.Address, amount actor.TokenAmount) st.StateTree {
	// TODO: assert amount nonnegative
	retTree, err := tree.Impl().WithFundsTransfer(from, to, amount)
	if err != nil {
		panic("Interpreter error: insufficient funds (or transfer error) despite checks")
	} else {
		return retTree
	}
}

func (vmi *VMInterpreter_I) ApplyMessage(inTree st.StateTree, message msg.UnsignedMessage, minerAddr addr.Address) (
	st.StateTree, msg.MessageReceipt) {

	compTree := inTree
	var outTree st.StateTree
	var toActor actor.Actor
	var err error

	fromActor := compTree.GetActor(message.From())
	if fromActor == nil {
		// TODO: This was originally exitcode.InvalidMethod; which is correct?
		return inTree, _applyError(exitcode.ActorNotFound)
	}

	// make sure fromActor has enough money to run the max invocation
	maxGasCost := gasToFIL(message.GasLimit(), message.GasPrice())
	totalCost := message.Value() + actor.TokenAmount(maxGasCost)
	if fromActor.State().Balance() < totalCost {
		return inTree, _applyError(exitcode.InsufficientFunds)
	}

	// make sure this is the right message order for fromActor
	// (this is protection against replay attacks, and useful sequencing)
	if message.CallSeqNum() != fromActor.State().CallSeqNum()+1 {
		return inTree, _applyError(exitcode.InvalidCallSeqNum)
	}

	// WithActorForAddress may create new account actors
	compTree, toActor = compTree.Impl().WithActorForAddress(message.To())
	if toActor == nil {
		return inTree, _applyError(exitcode.ActorNotFound)
	}

	// deduct maximum expenditure gas funds first
	compTree = _withTransferFundsAssert(compTree, message.From(), addr.BurntFundsActorAddr, maxGasCost)

	rt := vmr.VMContext_Make(
		compTree,
		message.From(),
		actor.TokenAmount(0),
		message.GasLimit(),
	)

	sendRet, sendRetStateTree := rt.SendToplevelFromInterpreter(
		msg.InvocInput_Make(
			message.To(),
			message.Method(),
			message.Params(),
			message.Value(),
		),
		false,
	)

	if !sendRet.ExitCode().AllowsStateUpdate() {
		// error -- revert all state changes -- ie drop updates. burn used gas.
		outTree = inTree
		outTree = _withTransferFundsAssert(
			outTree,
			message.From(),
			addr.BurntFundsActorAddr,
			gasToFIL(sendRet.GasUsed(), message.GasPrice()),
		)
	} else {
		// success -- refund unused gas
		outTree = sendRetStateTree
		refundGas := message.GasLimit() - sendRet.GasUsed()
		TODO() // TODO: assert refundGas is nonnegative
		outTree = _withTransferFundsAssert(
			outTree,
			addr.BurntFundsActorAddr,
			message.From(),
			gasToFIL(refundGas, message.GasPrice()),
		)
	}

	// increment the sender's CallSeqNum (the nonce checked above)
	outTree, err = outTree.Impl().WithIncrementedCallSeqNum(message.From())
	if err != nil {
		// TODO: if actor deletion is possible at some point, may need to allow this case
		panic("Internal interpreter error: failed to increment call sequence number")
	}

	// reward miner gas fees
	outTree = _withTransferFundsAssert(
		outTree,
		addr.BurntFundsActorAddr,
		minerAddr, // TODO: may not exist
		gasToFIL(sendRet.GasUsed(), message.GasPrice()),
	)

	return outTree, sendRet
}

func gasToFIL(gas msg.GasAmount, price msg.GasPrice) actor.TokenAmount {
	return actor.TokenAmount(util.UVarint(gas) * util.UVarint(price))
}
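As a worked example of the accounting above (illustrative numbers, not normative): with GasLimit = 1000 and GasPrice = 2, maxGasCost = gasToFIL(1000, 2) = 2000 FIL is moved from the sender to the BurntFundsActor before execution. If the send succeeds with GasUsed = 600, the unused 400 gas is refunded to the sender as 800 FIL, and 1200 FIL is then paid from the BurntFundsActor to the miner, leaving the burnt-funds balance unchanged. If the send fails, all state changes are dropped, and only gasToFIL(GasUsed, GasPrice) moves from the sender to the BurntFundsActor, from which the miner is then paid.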

vm/interpreter/registry

package interpreter

import "errors"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import market "github.com/filecoin-project/specs/systems/filecoin_markets/storage_market"
import spc "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"
import sysactors "github.com/filecoin-project/specs/systems/filecoin_vm/sysactors"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

var (
	ErrActorNotFound = errors.New("Actor Not Found")
)

// CodeCIDs for system actors
var (
	InitActorCodeCID           = actor.CodeCID("filecoin/1.0/InitActor")
	CronActorCodeCID           = actor.CodeCID("filecoin/1.0/CronActor")
	AccountActorCodeCID        = actor.CodeCID("filecoin/1.0/AccountActor")
	StoragePowerActorCodeCID   = actor.CodeCID("filecoin/1.0/StoragePowerActor")
	StorageMinerActorCodeCID   = actor.CodeCID("filecoin/1.0/StorageMinerActor")
	StorageMarketActorCodeCID  = actor.CodeCID("filecoin/1.0/StorageMarketActor")
	PaymentChannelActorCodeCID = actor.CodeCID("filecoin/1.0/PaymentChannelActor")
)

var staticActorCodeRegistry = &actorCodeRegistry{}

type actorCodeRegistry struct {
	code map[actor.CodeCID]vmr.ActorCode
}

func (r *actorCodeRegistry) _registerActor(cid actor.CodeCID, actor vmr.ActorCode) {
	r.code[cid] = actor
}

func (r *actorCodeRegistry) _loadActor(cid actor.CodeCID) (vmr.ActorCode, error) {
	a, ok := r.code[cid]
	if !ok {
		return nil, ErrActorNotFound
	}
	return a, nil
}

func RegisterActor(cid actor.CodeCID, actor vmr.ActorCode) {
	staticActorCodeRegistry._registerActor(cid, actor)
}

func LoadActor(cid actor.CodeCID) (vmr.ActorCode, error) {
	return staticActorCodeRegistry._loadActor(cid)
}

// init is called in Go during initialization of a program.
// this is an idiomatic way to do this. Implementations should approach this
// however they wish. The point is to initialize a static registry with
// built-in pure types that have the code for each actor. Once we have
// a way to load code from the StateTree, use that instead.
func init() {
	_registerBuiltinActors()
}

func _registerBuiltinActors() {
	// TODO

	cron := &sysactors.CronActorCode_I{}

	RegisterActor(InitActorCodeCID, &sysactors.InitActorCode_I{})
	RegisterActor(CronActorCodeCID, cron)
	RegisterActor(AccountActorCodeCID, &sysactors.AccountActorCode_I{})
	RegisterActor(StoragePowerActorCodeCID, &spc.StoragePowerActorCode_I{})
	RegisterActor(StorageMarketActorCodeCID, &market.StorageMarketActorCode_I{})

	// wire in CRON actions.
	// TODO: there's probably a better place to put this, but for now, do it here.
	cron.Actors_ = append(cron.Actors_, addr.StoragePowerActorAddr)
	cron.Actors_ = append(cron.Actors_, addr.StorageMarketActorAddr)
}

Blockchain

The Filecoin Blockchain is a distributed virtual machine that achieves consensus, processes messages, accounts for storage, and maintains security in the Filecoin Protocol.

It includes:

  • A Message Pool subsystem that nodes use to track and propagate messages related to the storage market throughout a gossip network.
  • A subsystem that tracks and propagates validated message blocks, assembling them into subchains corresponding to versions of the system state.
  • A VM - Virtual Machine subsystem used to interpret and execute messages in order to update system state.
  • A subsystem which manages the creation and maintenance of state trees (the system state) deterministically generated by the vm from a given subchain.
  • A Storage Power Consensus subsystem which tracks storage state for a given chain and helps the blockchain system choose subchains to extend and blocks to include in them.

At a high-level, the Filecoin blockchain grows through successive rounds of leader election in which a number of miners are elected to generate a block, whose inclusion in the chain will earn them block rewards.

Most of the functions of the Filecoin blockchain system are detailed in the code below. We focus here on particular points of interest.

Storage Power

Filecoin’s blockchain runs on storage power. That is, its consensus algorithm by which miners agree on which subchain to mine is predicated on the amount of storage backing that subchain. At a high level, the Storage Power Consensus subsystem maintains a Power Table that tracks the amount of storage that storage miner actors have contributed to the network through Sector commitments and Proofs of Spacetime.
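A minimal sketch of the Power Table shape described above, in the id-language style used elsewhere in this document (the type names are illustrative; the real table is maintained by the Storage Power Consensus subsystem and updated on Sector commitments and Proofs of Spacetime):

// Illustrative only: per-miner proven storage, as tracked by the power table.
type StoragePower UVarint
type PowerTable {addr.Address: StoragePower}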

Leader Election and Expected Consensus

The leader election process is likewise handled by the Storage Power Consensus subsystem (SPC). Using the Power Table it maintains, this subsystem runs a Nakamoto-style leader election process called Expected Consensus at every round to elect miners who can extend the blockchain by generating new blocks.

Beyond participation in the Storage Market (see the Storage Market spec), participation in Filecoin’s consensus is the other way storage miners can earn Filecoin tokens.

Expected Consensus has two main components: a leader election process and a chain selection algorithm dependent on a weight function.

Tipsets

EC can elect multiple leaders in a given round, meaning Filecoin chains can contain multiple blocks at each height (one per winning miner). This greatly increases chain throughput by allowing blocks to propagate through the network of nodes more efficiently, but it also means miners should coordinate how they select messages for inclusion in their blocks in order to avoid duplicates and maximize their earnings from transaction fees (see Message Pool).

Accordingly, blocks from a given round are assembled into Tipsets according to certain rules (they must share the same parents and have been mined at the same height). The Filecoin state tree is modified by the execution of all messages in a given Tipset. Different miners may mine on different Tipsets because of network propagation delay.

Due to this fact, adding new blocks to the chain is what actually validates those blocks’ parent Tipsets: when producing a new block, a miner cannot know exactly what state tree executing that block’s messages will yield, since that state tree is only known once all messages in the block’s Tipset have been executed. Accordingly, it is in the next round (and based on the number of blocks mined on a given Tipset) that a miner will be able to choose which state tree to extend.

Tipsets

All valid blocks generated in a round form a Tipset that participants will attempt to mine off of in the subsequent round (see above). Tipsets are valid so long as:

  • All blocks in a Tipset have the same parent Tipset
  • All blocks in a Tipset have the same number of tickets in their Tickets array

These conditions imply that all blocks in a Tipset were mined at the same height. This rule is key to helping ensure that EC converges over time. While multiple new blocks can be mined in a round, subsequent blocks all mine off of a Tipset, bringing these blocks together. The second rule means all blocks in a Tipset were mined in the same round.

The blocks in a tipset have no defined order in representation. During state computation, blocks in a tipset are processed in order of block ticket, breaking ties with the block CID bytes.
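A hedged Go sketch of that ordering rule follows; the pairing of each header with its CID bytes, and the use of the standard library's byte comparison, are assumptions of this sketch (the block package's Ticket accessors are as shown later in this section).

import (
	"bytes"
	"sort"
)

// blockWithCID pairs a header with its block CID bytes (illustrative).
type blockWithCID struct {
	header   BlockHeader
	cidBytes []byte
}

// orderForExecution sorts blocks by ticket output, tie-breaking on CID bytes.
func orderForExecution(bs []blockWithCID) {
	sort.Slice(bs, func(i, j int) bool {
		ti := bs[i].header.Ticket().Output()
		tj := bs[j].header.Ticket().Output()
		if c := bytes.Compare(ti, tj); c != 0 {
			return c < 0
		}
		return bytes.Compare(bs[i].cidBytes, bs[j].cidBytes) < 0
	})
}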

Due to network propagation delay, it is possible for a miner in round N+1 to omit valid blocks mined at round N from their Tipset. This does not make the newly generated block invalid, it does however reduce its weight and chances of being part of the canonical chain in the protocol as defined by EC’s Chain Selection function.

TODO – reorder this

The Filecoin blockchain is the main interface linking various actors in the Filecoin system. It ensures that the system’s state is verifiably updated over time, and dictates how nodes extend the network through block reception and validation, and through block propagation.

Its components include the:

  • Chain Syncer – which receives and propagates blocks, maintaining sets of candidate chains on which the miner may mine and running syntactic validation on incoming blocks.
  • Chain Manager – which maintains a given chain’s state, providing facilities to other blockchain subsystems which will query state about the latest chain in order to run, and ensuring incoming blocks are semantically validated before inclusion into the chain.
  • Block Producer – which is called in the event of a successful leader election in order to produce a new block that will extend the current heaviest chain before forwarding it to the syncer for propagation.

Blocks

Block

The Block is a unit of the Filecoin blockchain.

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"

import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

type BlockCID ipld.CID
type MessageRoot ipld.CID
type ReceiptRoot ipld.CID
type ChainWeight UVarint
type ChainEpoch UVarint

type BlockHeader struct {
    // Chain linking
    Parents          Tipset
    Weight           ChainWeight
    Epoch            ChainEpoch

    // Miner Info
    MinerAddress     addr.Address

    // State
    StateTree        st.StateTree
    Messages         MessageRoot
    MessageReceipts  ReceiptRoot

    // Consensus things
    Timestamp        clock.Time
    Ticket
    ElectionProof

    // Signatures
    BlockSig         filcrypto.Signature
    BLSAggregateSig  filcrypto.Signature

    //	SerializeSigned()            []byte
    //	ComputeUnsignedFingerprint() []
}

// TODO: remove this. header is signed
type SignedBlock struct {
    BlockCID   &Block
    MinerAddr  addr.Address
    Signature  filcrypto.Signature
}

type Block struct {
    Header    BlockHeader
    Messages  [msg.Message]
    Receipts  [msg.MessageReceipt]
}

type Chain struct {
    HeadEpoch()         ChainEpoch
    HeadTipset()        Tipset
    LatestCheckpoint()  ChainEpoch

    TipsetAtEpoch(epoch ChainEpoch) Tipset
    TicketAtEpoch(epoch ChainEpoch) Ticket
}

// Checkpoint represents a particular block to use as a trust anchor
// in Consensus and ChainSync
//
// Note: a Block uniquely identifies a tipset (the parents)
// from here, we may consider many tipsets that _include_ Block
// but we must indeed include Block, and not consider tipsets that
// fork from Block.Parents but do not include Block.
type Checkpoint BlockCID

// SoftCheckpoint is a checkpoint that Filecoin nodes may use as they
// gain confidence in the blockchain. It is a unilateral checkpoint,
// and derived algorithmically from notions of probabilistic consensus
// and finality.
type SoftCheckpoint Checkpoint

// TrustedCheckpoint is a Checkpoint that is trusted by the broader
// Filecoin Network. These TrustedCheckpoints are arrived at through
// the higher level economic consensus that surrounds Filecoin.
// TrustedCheckpoints:
// - MUST be at least 200,000 blocks old (>1mo)
// - MUST be at least
// - MUST be widely known and accepted
// - MAY ship with Filecoin software implementations
// - MAY be propagated through other side-channel systems
// For more, see the Checkpoints section.
// TODO: consider renaming as EconomicCheckpoint
type TrustedCheckpoint Checkpoint
package block

import (
	util "github.com/filecoin-project/specs/util"
)

func SmallerBytes(a, b util.Bytes) util.Bytes {
	if util.CompareBytesStrict(a, b) > 0 {
		return b
	}
	return a
}

func (chain *Chain_I) TipsetAtEpoch(epoch ChainEpoch) Tipset {

	dist := chain.HeadEpoch() - epoch
	current := chain.HeadTipset()
	parents := current.Parents()
	for i := 0; i < int(dist); i++ {
		current = parents
		parents = current.Parents()
	}

	return current
}

func (chain *Chain_I) TicketAtEpoch(epoch ChainEpoch) Ticket {
	ts := chain.TipsetAtEpoch(epoch)
	return ts.MinTicket()
}

func (chain *Chain_I) HeadEpoch() ChainEpoch {
	panic("")
}

func (chain *Chain_I) HeadTipset() Tipset {
	panic("")
}

// TipsetAtEpoch should return the Tipset at the closest epoch less
// than or equal to the requested epoch.
func (bl *Block_I) TipsetAtEpoch(epoch ChainEpoch) Tipset {

	current := bl.Header_.Parents()
	parents := current.Parents()
	for current.Epoch() > epoch {
		current = parents
		parents = current.Parents()
	}
	return current
}

// TicketAtEpoch should return the min ticket from the Tipset at the nearest epoch less than or equal to epoch
func (bl *Block_I) TicketAtEpoch(epoch ChainEpoch) Ticket {
	ts := bl.TipsetAtEpoch(epoch)
	return ts.MinTicket()
}
Tipset

A Tipset is a group of blocks from the same exact round that all share the exact same parents.

import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"

type Tipset struct {
    BlockCIDs          [BlockCID]
    Blocks             [BlockHeader]

    Has(block Block)   bool           @(cached)
    Parents            Tipset         @(cached)
    StateTree          st.StateTree   @(cached)
    Weight             ChainWeight    @(cached)
    Epoch              ChainEpoch     @(cached)

    ValidateSyntax()   bool           @(cached)
    LatestTimestamp()  clock.Time     @(cached)
    MinTicket()        Ticket         @(cached)
}
package block

import (
	"bytes"

	clock "github.com/filecoin-project/specs/systems/filecoin_nodes/clock"
)

func (ts *Tipset_I) MinTicket() Ticket {
	var ret Ticket

	for _, currBlock := range ts.Blocks() {
		tix := currBlock.Ticket()
		if ret == nil {
			ret = tix
		} else {
			smaller := SmallerBytes(tix.Output(), ret.Output())
			if bytes.Equal(smaller, tix.Output()) {
				ret = tix
			}
		}
	}

	return ret
}

func (ts *Tipset_I) ValidateSyntax() bool {

	if len(ts.Blocks_) <= 0 {
		return false
	}

	panic("TODO")

	// parents := ts.Parents_
	// grandparent := parents[0].Parents_
	// for i := 1; i < len(parents); i++ {
	// 	if grandparent != parents[i].Parents_ {
	// 		return false
	// 	}
	// }

	// numTickets := len(ts.Blocks_[0].Tickets_)
	// for i := 1; i < len(ts.Blocks_); i++ {
	// 	if numTickets != len(ts.Blocks_[i].Tickets_) {
	// 		return false
	// 	}
	// }

	return true
}

func (ts *Tipset_I) LatestTimestamp() clock.Time {
	var latest clock.Time
	panic("TODO")
	// for _, blk := range ts.Blocks_ {
	// 	if blk.Timestamp().After(latest) || latest.IsZero() {
	// 		latest = blk.Timestamp()
	// 	}
	// }
	return latest
}

// func (tipset *Tipset_I) StateTree() stateTree.StateTree {
// 	var currTree stateTree.StateTree = nil
// 	for _, block := range tipset.Blocks() {
// 		if currTree == nil {
// 			currTree = block.StateTree()
// 		} else {
// 			Assert(block.StateTree().CID().Equals(currTree.CID()))
// 		}
// 	}
// 	Assert(currTree != nil)
// 	for _, block := range tipset.Blocks() {
// 		currTree = UpdateStateTree(currTree, block)
// 	}
// 	return currTree
// }

Chain

A Chain is a sequence of tipsets, linked together. It is a single history of execution in the Filecoin blockchain.

Something's not right. The chain.id file was not found.

Something's not right. The chain.go file was not found.

Chain Manager

The Chain Manager is a central component in the blockchain system. It tracks and updates competing subchains received by a given node in order to select the appropriate blockchain head: the latest block of the heaviest subchain it is aware of in the system.

In so doing, the chain manager is the central subsystem that handles bookkeeping for numerous other systems in a Filecoin node and exposes convenience methods for use by those systems, enabling systems to sample randomness from the chain for instance, or to see which block has been finalized most recently.

The chain manager interfaces and functions are included here, but we expand on important details below for clarity.

Chain Expansion
Incoming blocks and semantic validation

Once a block has been received and syntactically validated, it must be semantically validated by the chain manager for inclusion on a given chain.

A semantically valid block:

  • must be from a valid miner with power on the chain
  • must only have valid parents in the tipset, meaning
    • that each parent itself must be a valid block
    • that they all have the same parents themselves
    • that they are all at the same height (i.e. include the same number of tickets)
  • must have a valid ticket generated from the minTicket in its parent tipset.
  • must only have valid state transitions:
    • all messages in the block must be valid
    • the execution of each message, in the order they are in the block, must produce a receipt matching the corresponding one in the receipt set of the block.
  • the resulting state root after all messages are applied, must match the one in the block
Once the block passes validation, it must be added to the local datastore, regardless of whether it is understood as the best tip at this point. Future blocks from other miners may be mined on top of it, and in that case we will want to have it around to avoid refetching.
To make certain validation checks simpler, blocks should be indexed by height and by parent set. That way, sets of blocks with a given height and common parents may be quickly queried. It may also be useful to compute and cache the resultant aggregate state of blocks in these sets; this saves extra state computation when checking which state root to start a block at when it has multiple parents.

The following checks require having and processing (executing) the messages:

  • Messages can be checked by verifying that they hash correctly to the value in the block header.
  • MessageAggregateSig can be checked by verifying that the aggregate signature over the messages is correct.
  • MessageReceipts can only be checked by executing the messages.
  • StateRoot is the result of the execution of the messages, and can only be verified by executing them.
Block reception algorithm

Chain selection is a crucial component of how the Filecoin blockchain works. Every chain has an associated weight accounting for the number of blocks mined on it, and thus the power (storage) it tracks. It is always preferable to mine atop a heavier Tipset rather than a lighter one. A miner who mines atop a lighter chain is likely to forfeit any block rewards earned there, as that chain will probably be abandoned when miners converge on a final chain. For more on this, see chain selection in the Expected Consensus spec.

However, ahead of finality, a given subchain may be abandoned in favor of another, heavier one mined in a given round. In order to rapidly adapt to this, the chain manager must maintain and update all subchains being considered up to finality.

That is, for every incoming block, even if the incoming block is not added to the current heaviest tipset, the chain manager should add it to the appropriate subchain it is tracking, or keep track of it independently until either:

  • it is able to do so, through the reception of another block in that subchain, or
  • it is able to discard it, as that block was mined before finality.

We give an example of how this could work in the block reception algorithm.
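
For illustration only, here is a minimal sketch of such a reception algorithm in Go. The ChainManager type and every helper on it (SyntacticallyValid, SemanticallyValid, Store.Put, SubchainFor, IsBeforeFinality, TrackOrphan, UpdateHeaviestTipset) are hypothetical names, not interfaces defined by this spec.

func (cm *ChainManager) OnBlockReceived(blk Block) {
	// drop invalid blocks without forwarding them
	if !SyntacticallyValid(blk) || !SemanticallyValid(blk) {
		return
	}

	// always persist valid blocks, even off the heaviest tipset, so that
	// future blocks mined on top of them need not be refetched
	cm.Store.Put(blk)

	if sub, ok := cm.SubchainFor(blk); ok {
		// the block extends a subchain we are already tracking
		sub.Add(blk)
	} else if !cm.IsBeforeFinality(blk) {
		// track independently until a connecting block arrives
		cm.TrackOrphan(blk)
	}

	cm.UpdateHeaviestTipset()
}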

ChainTipsManager

The Chain Tips Manager is a subcomponent of Filecoin consensus that is technically up to the implementer, but since the pseudocode in previous sections reference it, it is documented here for clarity.

The Chain Tips Manager is responsible for tracking all live tips of the Filecoin blockchain, and tracking what the current ‘best’ tipset is.

// Returns the ticket that is at round 'r' in the chain behind 'head'
func TicketFromRound(head Tipset, r Round) Ticket

// Returns the tipset that contains round r (Note: multiple rounds' worth of tickets may exist within a single block due to losing tickets being added to the eventually successfully generated block)
func TipsetFromRound(head Tipset, r Round) Tipset

// GetBestTipset returns the best known tipset. If the 'best' tipset hasn't changed, then this
// will return the previous best tipset.
func GetBestTipset() Tipset

// Adds the losing ticket to the chaintips manager so that blocks can be mined on top of it
func AddLosingTicket(parent Tipset, t Ticket)
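
For illustration, TipsetFromRound could be implemented by walking back from head, reusing the Tipset accessors defined earlier in this document. This is a sketch: because of null rounds, it returns the tipset from the closest epoch less than or equal to r.

func TipsetFromRound(head Tipset, r Round) Tipset {
	current := head
	// walk parent links until we reach a tipset at or before round r
	for current.Epoch() > ChainEpoch(r) {
		current = current.Parents()
	}
	return current
}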

Block Producer

Mining Blocks

Having registered as a miner, it’s time to start making and checking tickets. At this point, the miner should already be running chain validation, which includes keeping track of the latest tipsets seen on the network.

For additional details around how consensus works in Filecoin, see Expected Consensus. For the purposes of this section, there is a consensus protocol (Expected Consensus) that guarantees a fair process for determining what blocks have been generated in a round, whether a miner is eligible to mine a block itself, and other rules pertaining to the production of some artifacts required of valid blocks (e.g. Tickets, ElectionProofs).

Mining Cycle

At any height H, there are three possible situations:

  • The miner is eligible to mine a block: they produce their block and propagate it. They then resume mining at the next height H+1.
  • The miner is not eligible to mine a block but has received blocks: they form a Tipset with them and resume mining at the next height H+1.
  • The miner is not eligible to mine a block and has received no blocks: prompted by their clock they run leader election again, incrementing the epoch number.

This process is repeated until either a winning ticket is found (and block published) or a new valid Tipset comes in from the network.
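
Sketched as a loop in the Go style used throughout this spec (RunLeaderElection, ProduceBlock, ReceivedBlocks, FormTipset, and WaitNextEpoch are illustrative helper names, not spec interfaces):

func (m *Miner) MiningCycle() {
	epoch := m.CurrentEpoch()
	for {
		if proof, won := m.RunLeaderElection(epoch); won {
			// eligible: produce a block and propagate it
			m.Propagate(m.ProduceBlock(epoch, proof))
		} else if blks := m.ReceivedBlocks(epoch); len(blks) > 0 {
			// not eligible, but blocks arrived: form a Tipset from them
			m.FormTipset(blks)
		}
		// in all three cases, the clock prompts the move to the next epoch
		epoch = m.WaitNextEpoch()
	}
}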

Let’s illustrate this with an example.

Miner M is mining at Height H. Heaviest tipset at H-1 is {B0}

  • New Round:
    • M produces a ticket at H, from B0’s ticket (the min ticket at H-1)
    • M draws the ticket from height H-K to generate an ElectionProof
    • That ElectionProof is invalid
    • M has not heard about other blocks on the network.
  • New Round:
    • Epoch/Height is incremented to H + 1.
    • M generates a new ElectionProof with this new epoch number.
    • That ElectionProof is valid
    • M generates a block B1 using the new ElectionProof and the ticket drawn last round.
    • M has received blocks B2, B3 from the network with the same parents and same height.
    • M forms a tipset {B1, B2, B3}

Anytime a miner receives new blocks, it should evaluate what is the heaviest Tipset it knows about and mine atop it.

Block Creation

Having scratched a winning ticket, and armed with a valid ElectionProof, a miner can now publish a new block!

To create a block, the eligible miner must compute a few fields:

  • Ticket - new ticket generated from that in the prior epoch (see Ticket Generation).
  • ElectionProof - A specific signature over the min_ticket from randomness_lookback epochs back (see Secret Leader Election).
  • ParentWeight - The parent chain’s weight (see Chain Selection).
  • Parents - the CIDs of the parent blocks.
  • ParentState - Note that it will not end up in the newly generated block, but is necessary to compute to generate other fields. To compute this:
    • Take the ParentState of one of the blocks in the chosen parent set (invariant: this is the same value for all blocks in a given parent set).
    • For each block in the parent set, ordered by their tickets:
    • Apply each message in the block to the parent state, in order. If a message was already applied in a previous block, skip it.
    • Transaction fees are given to the miner of the block in which the first occurrence of the message is included. If there are two blocks in the parent set, and they both contain the exact same set of messages, the second one will receive no fees.
    • It is valid for messages in two different blocks of the parent set to conflict; a conflicting message from the combined set of messages will always error. Regardless of conflicts, all messages are applied to the state.
    • TODO: define message conflicts in the state-machine doc, and link to it from here
  • MsgRoot - To compute this:
    • Select a set of messages from the mempool to include in the block.
    • Separate the messages into BLS signed messages and secpk signed messages
    • For the BLS messages:
    • Strip the signatures off of the messages, and insert all the bare Messages for them into a sharray.
    • Aggregate all of the bls signatures into a single signature and use this to fill out the BLSAggregate field
    • For the secpk messages:
    • Insert each of the secpk SignedMessages into a sharray
    • Create a TxMeta object and fill each of its fields as follows:
    • blsMessages: the root cid of the bls messages sharray
    • secpkMessages: the root cid of the secp messages sharray
    • The cid of this TxMeta object should be used to fill the MsgRoot field of the block header.
  • BLSAggregate - The aggregated signatures of all messages in the block that used BLS signing.
  • StateRoot - Apply each chosen message to the ParentState to get this.
    • Note: first apply bls messages in the order that they appear in the blsMsgs sharray, then apply secpk messages in the order that they appear in the secpkMessages sharray.
  • ReceiptsRoot - To compute this:
    • Apply the set of messages to the parent state as described above, collecting invocation receipts as this happens.
    • Insert them into a sharray and take its root.
  • Timestamp - A Unix Timestamp generated at block creation. We use an unsigned integer to represent a UTC timestamp (in seconds). The Timestamp in the newly created block must satisfy the following conditions:
    • the timestamp on the block corresponds to the current epoch (it is neither in the past nor in the future) as defined by the clock subsystem.
  • BlockSig - A signature with the miner’s private key (must also match the ticket signature) over the entire block. This is to ensure that nobody tampers with the block after it propagates to the network, since unlike normal PoW blockchains, a winning ticket is found independently of block generation.

An eligible miner can start by filling out Parents, Tickets and ElectionProof with values from the ticket checking process.

Next, they compute the aggregate state of their selected parent blocks, the ParentState. This is done by taking the aggregate parent state of the blocks’ parent Tipset, sorting the parent blocks by their tickets, and applying each message in each block to that state. Any message whose nonce is already used (duplicate message) in an earlier block should be skipped (application of this message should fail anyway). Note that re-applied messages may result in different receipts than they produced in their original blocks; an open question is how to represent the receipt trie of this tipset’s messages (one can think of a tipset as a ‘virtual block’ of sorts).

Once the miner has the aggregate ParentState, they must apply the block reward. This is done by adding the correct block reward amount to the miner owner’s account balance in the state tree. The reward will be spendable immediately in this block.

Now, a set of messages is selected to put into the block. For each message, the miner subtracts msg.GasPrice * msg.GasLimit from the sender’s account balance, returning a fatal processing error if the sender does not have enough funds (this message should not be included in the chain).

They then apply the message’s state transition, and generate a receipt for it containing the total gas actually used by the execution, the execution’s exit code, and the return value. Then, they refund the sender in the amount of (msg.GasLimit - GasUsed) * msg.GasPrice. In the event of a message processing error, the remaining gas is refunded to the user, and all other state changes are reverted. (Note: this is a divergence from the way things are done in Ethereum)
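
As a worked sketch of this charge-then-refund flow (every helper name here is illustrative, not a spec interface):

func ApplyMessageWithGas(st StateTree, msg Message) Receipt {
	// charge the worst-case gas cost up front
	maxCost := msg.GasPrice * msg.GasLimit
	if SenderBalance(st, msg.From) < maxCost {
		Fatal("sender cannot cover gas; do not include this message")
	}
	DeductBalance(st, msg.From, maxCost)

	// apply the transition; on error, all of its state changes are reverted
	rcpt := ApplyStateTransition(st, msg)

	// refund whatever gas was not actually used
	refund := (msg.GasLimit - rcpt.GasUsed) * msg.GasPrice
	CreditBalance(st, msg.From, refund)
	return rcpt
}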

Each message should be applied on the resultant state of the previous message execution, unless that message execution failed, in which case all state changes caused by that message are thrown out. The final state tree after this process will be the block’s StateRoot.

The miner merklizes the set of messages selected, and puts the root in MsgRoot. They gather the receipts from each execution into a set, merklize them, and put that root in ReceiptsRoot. Finally, they set the StateRoot field with the resultant state.
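
A sketch of the MsgRoot computation described above (the sharray-root and CID helpers are assumed names standing in for machinery defined elsewhere in this spec):

func ComputeMsgRoot(blsMsgs []Message, secpkMsgs []SignedMessage) Cid {
	blsRoot := MessagesSharrayRoot(blsMsgs)           // bare messages, signatures stripped
	secpkRoot := SignedMessagesSharrayRoot(secpkMsgs) // full signed messages
	meta := TxMeta{
		BLSMessages:   blsRoot,
		SecpkMessages: secpkRoot,
	}
	return CidOf(meta) // this CID fills the MsgRoot header field
}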

Note that the ParentState field from the expected consensus document is left out; this is to help minimize the size of the block header. The parent state for any given parent set should be computed by the client and cached locally.

Finally, the miner can generate a Unix Timestamp to add to their block, to show that the block generation was appropriately delayed.

The miner will wait until BLOCK_DELAY has passed since the latest block in the parent set was generated to timestamp and send out their block. We recommend using NTP or another clock synchronization protocol to ensure that the timestamp is correctly generated (lest the block be rejected). While this timestamp does not provide a hard proof that the block was delayed (we rely on the VDF in the ticket-chain to do so), it provides some softer form of block delay by ensuring that honest miners will reject undelayed blocks.

Now the block is complete; all that’s left is to sign it. The miner serializes the block (without the signature field), takes the sha256 hash of it, and signs that hash. They place the resultant signature in the BlockSig field.
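
The signing step itself is small; a sketch (SerializeWithoutSig and MinerKey are illustrative names):

import "crypto/sha256"

func SignBlock(blk BlockHeader, key MinerKey) Signature {
	unsigned := SerializeWithoutSig(blk) // every field except BlockSig
	digest := sha256.Sum256(unsigned)
	return key.Sign(digest[:]) // placed in the BlockSig field
}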

Block Broadcast

An eligible miner broadcasts the completed block to the network and assuming everything was done correctly, the network will accept it and other miners will mine on top of it, earning the miner a block reward!

Block Rewards

Over the entire lifetime of the protocol, 1,400,000,000 FIL (TotalIssuance) will be given out to miners. The rate at which the funds are given out is set to halve every six years, smoothly (not a fixed jump like in Bitcoin). These funds are initially held by the network account actor, and are transferred to miners in blocks that they mine. Over time, the reward will eventually become close to zero as the fractional amount given out at each step shrinks the network account’s balance to 0.

The equation for the current block reward is of the form:

Reward = (IV * RemainingInNetworkActor) / TotalIssuance

IV is the initial value, and is set to:

IV = 153856861913558700202 attoFIL // 153.85 FIL

IV was derived from:

// Given one block every 30 seconds, this is how many blocks are in six years
HalvingPeriodBlocks = 6 * 365 * 24 * 60 * 2 = 6,307,200 blocks
Ξ» = ln(2) / HalvingPeriodBlocks
IV = TotalIssuance * (1-e^(-λ)) // Converted to attoFIL (1 FIL = 1e18 attoFIL)
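
For concreteness, a sketch of these calculations in Go (big integers avoid overflow at attoFIL precision; the constants are those defined above):

import "math/big"

// BlockReward computes Reward = (IV * RemainingInNetworkActor) / TotalIssuance.
func BlockReward(remaining *big.Int) *big.Int {
	iv, _ := new(big.Int).SetString("153856861913558700202", 10) // attoFIL
	attoPerFIL := new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil)
	total := new(big.Int).Mul(big.NewInt(1400000000), attoPerFIL)
	reward := new(big.Int).Mul(iv, remaining)
	return reward.Div(reward, total)
}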

Note: Due to jitter in EC, and the gregorian calendar, there may be some error in the issuance schedule over time. This is expected to be small enough that it’s not worth correcting for. Additionally, since the payout mechanism is transferring from the network account to the miner, there is no risk of minting too much FIL.

TODO: Ensure that if a miner earns a block reward while undercollateralized, then min(blockReward, requiredCollateral-availableBalance) is garnished (transferred to the miner actor instead of the owner).

Notes on Block Reward Application

As mentioned above, every round, a miner checks to see if they have been selected as the leader for that particular round. It is therefore possible for multiple miners to be selected as winners in a given round, and hence for there to be multiple blocks with the same parents produced at the same block height (forming a Tipset). Each of the winning miners will apply the block reward directly to their actor’s state in their state tree.

Other nodes will receive these blocks and form a Tipset out of the eligible blocks (those that have the same parents and are at the same block height). These nodes will then validate the Tipset. To validate Tipset state, the validating node will, for each block in the Tipset, first apply the block reward value directly to the mining node’s account and then apply the messages contained in the block.

Thus, each of the miners who produced a block in the Tipset will receive a block reward. There will be no lockup. These rewards can be spent immediately.

Messages in Filecoin also have an associated transaction fee (based on the gas costs of executing the message). In the case where multiple winning miners included the same message in their blocks, only the first miner will be paid this transaction fee. The first miner is the miner with the lowest ticket value (sorted lexicographically).

Open Questions
  • How should receipts for tipsets be referenced? It is common for applications to provide the merkleproof of a receipt to prove that a transaction was successfully executed.

Message Pool

The Message Pool is a subsystem in the Filecoin blockchain system. The message pool acts as the interface between Filecoin nodes and a peer-to-peer network used for off-chain message transmission. It is used by nodes to maintain a set of messages to transmit to the Filecoin VM (for “on-chain” execution).

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

type MessagePoolSubsystem struct {
    // needs access to:
    // - BlockchainSubsystem
    //   - needs access to StateTree
    //   - needs access to Messages mined into blocks (probably past finality)
    //     to remove from the MessagePool
    // - NetworkSubsystem
    //   - needs access to MessagePubsub
    //
    // Important remaining questions:
    // - how does BlockchainSubsystem.BlockReceiver handle asking for messages?
    // - how do we note messages are now part of the blockchain
    //   - how are they cleared from the mempool
    // - do we need to have some sort of purge?

    // AddNewMessage is called to add messages created at this node,
    // or to be propagated by this node. All messages enter the network
    // through one of these calls, in at least one filecoin node. They
    // are then propagated to other filecoin nodes via the MessagePool
    // subsystem. Other nodes receive and propagate Messages via their
    // own MessagePools.
    AddNewMessage(m msg.Message)

    // Stats returns information about the MessagePool contents.
    Stats() MessagePoolStats

    // FindMessage receives a descriptor query q, and returns a set of
    // messages currently in the mempool that match the Query constraints.
    // q may have all, any, or no constraints specified.
    // FindMessage(q MessageQuery) union {
    //  [base.Message],
    //  Error
    // }

    // MostProfitableMessages returns messages that are most profitable
    // to mine for this miner.
    //
    // Note: This is where algorithms about choosing best messages given
    //       many leaders should go.
    GetMostProfitableMessages(miner addr.Address) [msg.Message]
}

type MessagePoolStats struct {
    // Size is the amount of messages in the MessagePool
    Size UInt
}

// MessageQuery is a descriptor used to find messages matching one or more
// of the constraints specified.
type MessageQuery struct {
    /*
  From   base.Address
  To     base.Address
  Method ActorMethodId
  Params ActorMethodParams

  ValueMin    TokenAmount
  ValueMax    TokenAmount
  GasPriceMin TokenAmount
  GasPriceMax TokenAmount
  GasLimitMin TokenAmount
  GasLimitMax TokenAmount
  */
}

Clients that use a message pool include:

  • storage market provider and client nodes - for transmission of deals on chain
  • storage miner nodes - for transmission of PoSts, sector commitments, deals, and other operations tracked on chain
  • verifier nodes - for transmission of potential faults on chain
  • relayer nodes - for forwarding and discarding messages appropriately.

The message pool subsystem is made of two components:

  • The Message Syncer – which receives and propagates messages.
  • Message Storage – which caches messages according to a given policy.

TODOs:

  • discuss how messages are meant to propagate slowly/async
  • explain algorithms for choosing profitable txns

Message Syncer

TODO:

  • explain how the message syncer works
  • include the message syncer code
Message Propagation

Messages are propagated over the libp2p pubsub channel /fil/messages. On this channel, every serialised SignedMessage is announced.

Upon receiving the message, its validity must be checked: the signature must be valid, and the account in question must have enough funds to cover the actions specified. If the message is not valid it should be dropped and must not be forwarded.
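
A sketch of this validity check on receipt (helper names are illustrative; the balance check runs against the node’s current view of state):

func ValidateIncoming(st StateTree, smsg SignedMessage) bool {
	// the signature must verify against the sender's key
	if !VerifySignature(smsg.Message.From, smsg.Message, smsg.Signature) {
		return false // drop; MUST NOT be forwarded
	}
	// the sender must cover the transferred value plus worst-case gas
	required := smsg.Message.Value + smsg.Message.GasPrice*smsg.Message.GasLimit
	return BalanceOf(st, smsg.Message.From) >= required
}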

TODO: discuss checking signatures and account balances; there are some tricky bits that need consideration. Does the fund check cause improper dropping? E.g. I have a message sending funds, then use the newly constructed account to send funds; as long as the previous message wasn’t executed, the second will be considered “invalid” … though it won’t be at the time of execution.

Message Storage

TODO:

  • give sample algorithm for miner message selection in block production (to avoid dups)
  • give sample algorithm for message storage caching/purging policies.

ChainSync - synchronizing the Blockchain

What is blockchain synchronization?

Blockchain synchronization (“sync”) is a key part of a blockchain system. It handles retrieval and propagation of blocks and transactions (messages), and is thus in charge of distributed state replication. This process is security critical – problems here can be catastrophic to the operation of a blockchain.

What is ChainSync?

ChainSync is the protocol Filecoin uses to synchronize its blockchain. It is specific to Filecoin’s choices in state representation and consensus rules, but is general enough that it can serve other blockchains. ChainSync is a group of smaller protocols, which handle different parts of the sync process.

Terms and Concepts

  • LastCheckpoint: the last hard social-consensus oriented checkpoint that ChainSync is aware of. This consensus checkpoint defines the minimum finality, and a minimum of history to build on. ChainSync takes LastCheckpoint on faith, and builds on it, never switching away from its history.
  • TargetHeads: a list of BlockCIDs that represent blocks at the fringe of block production. These are the newest and best blocks ChainSync knows about. They are “target” heads because ChainSync will try to sync to them. This list is sorted by “likelihood of being the best chain” (eg for now, simply ChainWeight)
  • BestTargetHead: the single best chain head BlockCID to try to sync to. This is the first element of TargetHeads

ChainSync Summary

At a high level, ChainSync does the following:

  • Part 1: Verify internal state (INIT state below)
    • SHOULD verify data structures and validate local chain
    • Resource expensive verification MAY be skipped at nodes’ own risk
  • Part 2: Bootstrap to the network (BOOTSTRAP)
    • Step 1. Bootstrap to the network, and acquire a “secure enough” set of peers (more details below)
    • Step 2. Bootstrap to the BlockPubsub channels
    • Step 3. Listen and serve on Graphsync
  • Part 3: Synchronize trusted checkpoint state (SYNC_CHECKPOINT)
    • Step 1. Start with a TrustedCheckpoint (defaults to GenesisCheckpoint).
    • Step 2. Get the block it points to, and that block’s parents
    • Step 3. Graphsync the StateTree
  • Part 4: Catch up to the chain (CHAIN_CATCHUP)
    • Step 1. Maintain a set of TargetHeads (BlockCIDs), and select the BestTargetHead from it
    • Step 2. Synchronize to the latest heads observed, validating blocks towards them (requesting intermediate points)
    • Step 3. As validation progresses, TargetHeads and BestTargetHead will likely change, as new blocks at the production fringe will arrive, and some target heads or paths to them may fail to validate.
    • Step 4. Finish when node has “caught up” with BestTargetHead (retrieved all the state, linked to local chain, validated all the blocks, etc).
  • Part 5: Stay in sync, and participate in block propagation (CHAIN_FOLLOW)
    • Step 1. If security conditions change, go back to Part 4 (CHAIN_CATCHUP)
    • Step 2. Receive, validate, and propagate received Blocks
    • Step 3. Now with greater certainty of having the best chain, finalize Tipsets, and advance chain state.

libp2p Network Protocols

As a networking-heavy protocol, ChainSync makes heavy use of libp2p. In particular, we use two sets of protocols:

  • libp2p.PubSub a family of publish/subscribe protocols to propagate recent Blocks. The concrete protocol choice impacts ChainSync’s effectiveness, efficiency, and security dramatically. For Filecoin v1.0 we will use libp2p.Gossipsub, a recent libp2p protocol that combines features and learnings from many excellent PubSub systems. In the future, Filecoin may use other PubSub protocols. Important Note: it is entirely possible for Filecoin Nodes to run multiple versions simultaneously. That said, this specification requires that filecoin nodes MUST connect and participate in the main channel, using libp2p.Gossipsub.
  • libp2p.PeerDiscovery a family of discovery protocols, to learn about peers in the network. This is especially important for security because network “Bootstrap” is a difficult problem in peer-to-peer networks. The set of peers we initially connect to may completely dominate our awareness of other peers, and therefore all state. We use a union of PeerDiscovery protocols as each by itself is not secure or appropriate for users’ threat models. The union of these provides a pragmatic and effective solution.

More concretely, we use these protocols:

  • libp2p.PeerDiscovery
    • (required) libp2p.BootstrapList a protocol that uses a persistent and user-configurable list of semi-trusted bootstrap peers. The default list includes a set of peers semi-trusted by the Filecoin Community.
    • (required) libp2p.Gossipsub a pub/sub protocol that – as a side-effect – disseminates peer information
    • (optional/TODO) libp2p.PersistentPeerstore a connectivity component that keeps persistent information about peers observed in the network throughout the lifetime of the node. This is useful because it lets us resume and continually improve Bootstrap security.
    • (optional/TODO) libp2p.DNSDiscovery to learn about peers via DNS lookups to semi-trusted peer aggregators
    • (optional/TODO) libp2p.HTTPDiscovery to learn about peers via HTTP lookups to semi-trusted peer aggregators
    • (optional) libp2p.KademliaDHT a dht protocol that enables random queries across the entire network
  • libp2p.PubSub
    • (required) libp2p.Gossipsub the concrete libp2p.PubSub protocol ChainSync uses.

Subcomponents

Aside from libp2p, ChainSync uses or relies on the following components:

  • Libraries:
    • ipld data structures, selectors, and protocols
    • ipld.Store local persistent storage for chain datastructures
    • ipld.Selector a way to express requests for chain data structures
    • ipfs.GraphSync a general-purpose ipld datastructure syncing protocol
  • Data Structures:
    • Data structures in the chain package: Block, Tipset, Chain, Checkpoint ...
    • chainsync.BlockCache a temporary cache of blocks, to constrain resource expended
    • chainsync.AncestryGraph a datastructure to efficiently link Blocks, Tipsets, and PartialChains
    • chainsync.ValidationGraph a datastructure for efficient and secure validation of Blocks and Tipsets
Graphsync in ChainSync

ChainSync is written in terms of Graphsync. ChainSync adds blockchain and filecoin-specific synchronization functionality that is critical for Filecoin security.

Rate Limiting Graphsync responses (SHOULD)

When running Graphsync, Filecoin nodes must respond to graphsync queries. Filecoin requires nodes to provide critical data structures to others, otherwise the network will not function. During ChainSync, it is in operators’ interests to provide data structures critical to validating, following, and participating in the blockchain they are on. However, this has limitations, and some level of rate limiting is critical for maintaining security in the presence of attackers who might issue large Graphsync requests to cause DOS.

We recommend the following:

  • Set and enforce batch size rate limits. Force selectors to be shaped like: LimitedBlockIpldSelector(blockCID, BatchSize) for a single constant BatchSize = 1000. Nodes may push for this equilibrium by only providing BatchSize objects in responses, even for pulls much larger than BatchSize. This forces subsequent pulls to be run, re-rooted appropriately, and hints at other parties that they should be requesting with that BatchSize.
  • Force all Graphsync queries for blocks to be aligned along cacheable bounderies. In conjunction with a BatchSize, implementations should aim to cache the results of Graphsync queries, so that they may propagate them to others very efficiently. Aligning on certain boundaries (eg specific ChainEpoch limits) increases the likelihood many parties in the network will request the same batches of content. Another good cacheable boundary is the entire contents of a Block (BlockHeader, Messages, Signatures, etc).
  • Maintain per-peer rate-limits. Use bandwidth usage to decide whether to respond and how much on a per-peer basis. Libp2p already tracks bandwidth usage in each connection. This information can be used to impose rate limits in Graphsync and other Filecoin protocols.
  • Detect and react to DOS: restrict operation. The safest implementations will likely detect and react to DOS attacks. Reactions could include:
    • Smaller Graphsync.BatchSize limits
    • Fewer connections to other peers
    • Rate limit total Graphsync bandwidth
    • Assign Graphsync bandwidth based on a peer priority queue
    • Disconnect from and do not accept connections from unknown peers
    • Introspect Graphsync requests and filter/deny/rate limit suspicious ones
Previous BlockSync protocol

Prior versions of this spec recommended a BlockSync protocol. This protocol definition is available here. Filecoin nodes are libp2p nodes, and therefore may run a variety of other protocols, including this BlockSync protocol. As with anything else in Filecoin, nodes MAY opt to use additional protocols to achieve the results. That said, Nodes MUST implement the version of ChainSync as described in this spec in order to be considered implementations of Filecoin. Test suites will assume this protocol.

ChainSync State Machine

ChainSync uses the following conceptual state machine. Since this is a conceptual state machine, implementations MAY deviate from implementing precisely these states, or dividing them strictly. Implementations MAY blur the lines between the states. If so, implementations MUST ensure security of the altered protocol.

State Machine:

ChainSync State Machine (open in new tab)
ChainSync FSM: INIT
  • beginning state. no network connections, not synchronizing.
  • local state is loaded: internal data structures (eg chain, cache) are loaded
  • LastTrustedCheckpoint is set to the latest network-wide accepted TrustedCheckpoint
  • FinalityTipset is set to finality achieved in a prior protocol run.
    • Default: If no later FinalityTipset has been achieved, set FinalityTipset to LastTrustedCheckpoint
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is whatever was loaded from prior executions (worst case is LastTrustedCheckpoint)
  • security conditions to transition out:
    • local state and data structures SHOULD be verified to be correct
    • this means validating any parts of the chain or StateTree the node has, from LastTrustedCheckpoint on.
    • LastTrustedCheckpoint is well-known across the Filecoin Network to be a true TrustedCheckpoint
    • this SHOULD NOT be verified in software, it SHOULD be verified by operators
    • Note: we ALWAYS have at least one TrustedCheckpoint, the GenesisCheckpoint.
  • transitions out:
    • once done verifying things: move to BOOTSTRAP
ChainSync FSM: BOOTSTRAP
  • network.Bootstrap(): establish connections to peers until we satisfy security requirement
    • for better security, use many different libp2p.PeerDiscovery protocols
  • BlockPubsub.Bootstrap(): establish connections to BlockPubsub peers
    • The subscription is for both peer discovery and to start selecting best heads. Listening on pubsub from the start keeps the node informed about potential head changes.
  • Graphsync.Serve(): set up a Graphsync service, that responds to others’ queries
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is whatever was loaded from prior executions (worst case is LastTrustedCheckpoint).
  • security conditions to transition out:
    • Network connectivity MUST have reached the security level acceptable for ChainSync
    • BlockPubsub connectivity MUST have reached the security level acceptable for ChainSync
    • “on time” blocks MUST be arriving through BlockPubsub
  • transitions out:
    • once bootstrap is deemed secure enough:
    • if node does not have the Blocks or StateTree corresponding to LastTrustedCheckpoint: move to SYNC_CHECKPOINT
    • otherwise: move to CHAIN_CATCHUP
ChainSync FSM: SYNC_CHECKPOINT
  • While in this state:
    • ChainSync is well-bootstrapped, but does not yet have the Blocks or StateTree for LastTrustedCheckpoint
    • ChainSync issues Graphsync requests to its peers randomly for the Blocks and StateTree for LastTrustedCheckpoint:
    • ChainSync’s counterparts in other peers MUST provide the state tree.
    • It is only semi-rational to do so, so ChainSync may have to try many peers.
    • Some of these requests MAY fail.
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has.
    • No new blocks are reported to consumers.
    • The chain state provided is the available Blocks and StateTree for LastTrustedCheckpoint.
  • Important Notes:
    • ChainSync needs to fetch several blocks: the Block pointed at by LastTrustedCheckpoint, and its direct Block.Parents.
    • Nodes only need hashing to validate these Blocks and StateTrees – no block validation or state machine computation is needed.
    • The initial value of LastTrustedCheckpoint is GenesisCheckpoint, but it MAY be a value later in Chain history.
    • LastTrustedCheckpoint enables efficient syncing by making the implicit economic consensus of chain history explicit.
    • By allowing fetching of the StateTree of LastTrustedCheckpoint via Graphsync, ChainSync can yield much more efficient syncing than comparable blockchain synchronization protocols, as syncing and validation can start there.
    • Nodes DO NOT need to validate the chain from GenesisCheckpoint. LastTrustedCheckpoint MAY be a value later in Chain history.
    • Nodes DO NOT need to but MAY sync earlier StateTrees than LastTrustedCheckpoint as well.
  • Pseudocode 1: a basic version of SYNC_CHECKPOINT:

    func (c *ChainSync) SyncCheckpoint() {
        for !c.HasCompleteStateTreeFor(c.LastTrustedCheckpoint) {
            selector := ipldselector.SelectAll(c.LastTrustedCheckpoint)
            c.Graphsync.Pull(c.Peers, selector, c.IpldStore)
            // Pull SHOULD NOT pull what c.IpldStore already has (check first)
            // Pull SHOULD pull from different peers simultaneously
            // Pull SHOULD be efficient (try different parts of the tree from many peers)
            // Graphsync implementations may not offer these features. These features
            // can be implemented on top of a graphsync that only pulls from a single
            // peer and does not check local store first.
        }
        c.ChainCatchup() // on to CHAIN_CATCHUP
    }
  • security conditions to transition out:

    • StateTree for LastTrustedCheckpoint MUST be stored locally and verified (hashing is enough)
  • transitions out:

    • once node receives and verifies complete StateTree for LastTrustedCheckpoint: move to CHAIN_CATCHUP
ChainSync FSM: CHAIN_CATCHUP
  • While in this state:
    • ChainSync is well-bootstrapped, and has an initial trusted StateTree to start from.
    • ChainSync is receiving latest Blocks from BlockPubsub
    • ChainSync starts fetching and validating blocks (see Block Fetching and Validation above).
    • ChainSync has unvalidated blocks between ChainSync.FinalityTipset and ChainSync.TargetHeads
  • Chain State and Finality:
    • In this state, the chain MUST NOT advance beyond whatever the node already has:
    • FinalityTipset does not change.
    • No new blocks are reported to consumers/users of ChainSync yet.
    • The chain state provided is the available Blocks and StateTree for all available epochs, especially the FinalityTipset.
    • finality must not move forward here because there are serious attack vectors where a node can be forced to end up on the wrong fork if finality advances before validation is complete up to the block production fringe.
    • Validation must advance, all the way to the block production fringe:
    • Validate the whole chain, from FinalityTipset to BestTargetHead
    • The node can reach BestTargetHead only to find out it was invalid, then has to update BestTargetHead with next best one, and sync to it (without having advanced FinalityTipset yet, as otherwise we may end up on the wrong fork)
  • security conditions to transition out:
    • Gaps between ChainSync.FinalityTipset ... ChainSync.BestTargetHead have been closed:
    • All Blocks and their content MUST be fetched, stored, linked, and validated locally. This includes BlockHeaders, Messages, etc.
    • Bad heads have been expunged from ChainSync.TargetHeads. Bad heads include heads that initially seemed good but turned out invalid, or heads that ChainSync has failed to connect (ie. cannot fetch ancestors connecting back to ChainSync.FinalityTipset within a reasonable amount of time).
    • All blocks between ChainSync.FinalityTipset ... ChainSync.TargetHeads have been validated. This means all blocks before the best heads.
    • Not under a temporary network partition
  • transitions out:
    • once gaps between ChainSync.FinalityTipset ... ChainSync.TargetHeads are closed: move to CHAIN_FOLLOW
    • (Perhaps moving to CHAIN_FOLLOW when 1-2 blocks back in validation may be OK.
    • We don’t know we have the right head until we validate it, so if other heads of similar height are right/better, we won’t know until then.)
ChainSync FSM: CHAIN_FOLLOW
  • While in this state:
    • ChainSync is well-bootstrapped, and has an initial trusted StateTree to start from.
    • ChainSync fetches and validates blocks (see Block Fetching and Validation).
    • ChainSync is receiving and validating latest Blocks from BlockPubsub
    • ChainSync DOES NOT have unvalidated blocks between ChainSync.FinalityTipset and ChainSync.TargetHeads
    • ChainSync MUST drop back to another state if security conditions change.
    • Keep a set of gap measures:
    • BlockGap is the number of remaining blocks to validate between the Validated blocks and BestTargetHead.
      • (ie how many blocks do we need to validate to have validated BestTargetHead. does not include null blocks)
    • EpochGap is the number of epochs between the latest validated block, and BestTargetHead (includes null blocks).
    • MaxBlockGap = 2, which means how many blocks may ChainSync fall behind on before switching back to CHAIN_CATCHUP (does not include null blocks)
    • MaxEpochGap = 10, which means how many epochs may ChainSync fall behind on before switching back to CHAIN_CATCHUP (includes null blocks)
  • Chain State and Finality:
    • In this state, the chain MUST advance as all the blocks up to BestTargetHead are validated.
    • New blocks are finalized as they cross the finality threshold (ValidG.Heads[0].ChainEpoch - FinalityLookback)
    • New finalized blocks are reported to consumers.
    • The chain state provided includes the Blocks and StateTree for the Finality epoch, as well as candidate Blocks and StateTrees for unfinalized epochs.
  • security conditions to transition out:
    • Temporary network partitions (see Detecting Network Partitions).
    • Encounter gaps of >MaxBlockGap or >MaxEpochGap between Validated set and a new ChainSync.BestTargetHead
  • transitions out:
    • if a temporary network partition is detected: move to CHAIN_CATCHUP
    • if BlockGap > MaxBlockGap: move to CHAIN_CATCHUP
    • if EpochGap > MaxEpochGap: move to CHAIN_CATCHUP
    • if node is shut down: move to INIT

Block Fetching, Validation, and Propagation

Notes on changing TargetHeads while syncing
  • TargetHeads is changing, as ChainSync must be aware of the best heads at any time. reorgs happen, and our first set of peers could’ve been bad, we keep discovering others.
    • Hello protocol is good, but it’s polling. Unless the node is constantly polling, it won’t see all the heads.
    • BlockPubsub gives us the realtime view into what’s actually going on.
    • weight can also be close between 2+ possible chains (long-forked), and ChainSync must select the right one (which, we may not be able to distinguish until validating all the way)
  • fetching + validation are strictly faster per round on average than blocks produced/block time (if they’re not, the node will always fall behind), so we definitely catch up eventually (and even quickly). The last couple of rounds can be close (“almost got it, almost got it, there”).
General notes on fetching Blocks
  • ChainSync selects and maintains a set of the most likely heads to be correct from among those received via BlockPubsub. As more blocks are received, the set of TargetHeads is reevaluated.
  • ChainSync fetches Blocks, Messages, and StateTree through the Graphsync protocol.
  • ChainSync maintains sets of Blocks/Tipsets in Graphs (see ChainSync.id)
  • ChainSync gathers a list of TargetHeads from BlockPubsub, sorted by likelihood of being the best chain (see below).
  • ChainSync makes requests for chains of BlockHeaders to close gaps between TargetHeads
  • ChainSync forms partial unvalidated chains of BlockHeaders, from those received via BlockPubsub, and those requested via Graphsync.
  • ChainSync attempts to form fully connected chains of BlockHeaders, starting from the StateTree and extending toward observed Heads
  • ChainSync minimizes resource expenditures to fetch and validate blocks, to protect against DOS attack vectors. ChainSync employs Progressive Block Validation, validating different facets at different stages of syncing.
  • ChainSync delays syncing Messages until they are needed. Much of the structure of the partial chains can be checked and used to make syncing decisions without fetching the Messages.
Progressive Block Validation
  • Blocks can be validated in progressive stages, in order to minimize resource expenditure.
  • Validation computation is considerable, and a serious DOS attack vector.
  • Secure implementations must carefully schedule validation and minimize the work done by pruning blocks without validating them fully.
  • ChainSync SHOULD keep a cache of unvalidated blocks (ideally sorted by likelihood of belonging to the chain), and delete unvalidated blocks when they are passed by FinalityTipset, or when ChainSync is under significant resource load.
  • It is key to note that any block received after the ROUND_CUTOFF time must be automatically discarded by the miner until the start of the next epoch.

  • Progressive Stages of Block Validation

    • (TODO: move this to blockchain/Block section)
    • BV0 - Syntactic Validation: Validate data structure packing and ensure correct typing.
    • BV1 - Light Consensus State Checks: Validate b.ChainWeight, b.ChainEpoch, b.MinerAddress, b.Timestamp, are plausible (some ranges of bad values can be detected easily, especially if we have the state of the chain at b.ChainEpoch - consensus.LookbackParameter. Eg Weight and Epoch have well defined valid ranges, and b.MinerAddress must exist in the lookback state). This requires some chain state, enough to establish plausibility levels of each of these values. A node should be able to estimate valid ranges for b.ChainEpoch based on the LastTrustedCheckpoint. b.ChainWeight is easy if some of the relatively recent chain is available, otherwise hard.
    • BV2 - Signature Validation: Verify b.BlockSig is correct.
    • BV3 - Verify ElectionProof: Verify b.ElectionProof is correct. This requires having state for relevant lookback parameters.
    • BV4 - Verify Ancestry links to chain: Verify ancestry links back to trusted blocks. If the ancestry forks off before finality, or does not connect at all, it is a bad block.
    • BV5 - Verify MessageSigs: Verify the signatures on messages
    • BV6 - Verify StateTree: Verify the application of b.Parents.Messages() correctly produces b.StateTree and b.MessageReceipts
  • These stages can be used partially across many blocks in a candidate chain, in order to prune out clearly bad blocks long before actually doing the expensive validation work.
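
A sketch of this staged pipeline (the stage functions correspond to BV0–BV6 above; their names here are illustrative):

func ValidateProgressively(blk Block) bool {
	stages := []func(Block) bool{
		BV0Syntax, BV1LightConsensus, BV2BlockSig,
		BV3ElectionProof, BV4Ancestry, BV5MessageSigs, BV6StateTree,
	}
	for _, stage := range stages {
		if !stage(blk) {
			return false // prune cheaply; the expensive later stages never run
		}
	}
	return true
}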

Notes:

  • In CHAIN_CATCHUP, if a node is receiving/fetching hundreds or thousands of BlockHeaders, validating signatures can be very expensive, so it can be deferred in favor of other validation. (Ie with lots of BlockHeaders coming in through the network pipe, we don’t want to be bound on signature verification; the other checks (BV0, BV1) can help dump bad blocks on the floor faster.)
  • In CHAIN_FOLLOW, we’re not receiving thousands; we’re receiving maybe a dozen or two dozen packets in a few seconds. We receive the cid with Sig and addr first (ideally fitting in one packet), and can afford to (a) check if we already have the cid (if so, done, cheap), or (b) if not, check if the sig is correct before fetching the header (an expensive computation, but checking one sig is much faster than checking a ton). In practice, which one to do likely depends on miner tradeoffs. We’ll recommend something, but let miners decide, because one strategy or the other may be much more effective depending on their hardware, their bandwidth limitations, or their propensity to getting DOSed.

Progressive Block Propagation (or BlockSend)
  • In order to make Block propagation more efficient, we trade off network round trips for bandwidth usage.
  • Motivating observations:
    • Block propagation is one of the most security critical points of the whole protocol.
    • Bandwidth usage during Block propagation is the biggest rate limiter for network scalability.
    • The time it takes for a Block to propagate to the whole network is a critical factor in determining a secure BlockTime
    • Blocks propagating through the network should take as few sequential roundtrips as possible, as these roundtrips impose serious block time delays. However, interleaved roundtrips may be fine. Meaning that block.CIDs may be propagated on their own, without the header, then the header without the messages, then the messages.
    • Blocks will propagate over a libp2p.PubSub. libp2p.PubSub.Messages will most likely arrive multiple times at a node. Therefore, using only the block.CID here could make this very cheap in bandwidth (more expensive in round trips)
    • Blocks in a single epoch may include the same Messages, and duplicate transfers can be avoided
    • Messages propagate through their own MessagePubsub, and nodes have a significant probability of already having a large fraction of the messages in a block. Since messages are the bulk of the size of a Block, this can present great bandwidth savings.
  • Progressive Steps of Block Propagation
    • IMPORTANT NOTES:
      • these can be effectively pipelined. The receiver is in control of what to pull, and when. It is up to them to decide when to trade off RTTs for bandwidth.
      • If the sender is propagating the block at all to receiver, it is in their interest to provide the full content to receiver when asked. Otherwise the block may not get included at all.
      • Lots of security assumptions here – this needs to be hyper verified, in both spec and code.
      • sender is a filecoin node running ChainSync, propagating a block via Gossipsub (as the originator, as another peer in the network, or just a Gossipsub router).
      • receiver is the local filecoin node running ChainSync, trying to get the blocks.
      • for receiver to Pull things from sender, receiver must connect to sender. Usually sender is sending to receiver because of the Gossipsub propagation rules. receiver could choose to Pull from any other node they are connected to, but it is most likely that sender will have the needed information, and sender may be better-connected in the network.
    • Step 1. (sender) Push BlockHeader:
      • sender sends block.BlockHeader to receiver via Gossipsub:
        • bh := Gossipsub.Send(h block.BlockHeader)
        • This is a light-ish object (<4KB).
      • receiver receives bh.
        • This has many fields that can be validated before pulling the messages. (See Progressive Block Validation).
        • BV0, BV1, and BV2 validation takes place before propagating bh to other nodes.
        • receiver MAY receive many advertisements for each winning block in an epoch in quick succession. this is because (a) many want propagation as fast as possible, (b) many want to make those network advertisements as light as reasonable, (c) we want to enable receiver to choose who to ask it from (usually the first party to advertise it, and that’s what the spec will recommend), and (d) we want to be able to fall back to asking others if that fails (fail = don’t get it in ~1s or so)
    • Step 2. (receiver) Pull MessageCids:
      • upon receiving bh, receiver checks whether it already has the full block for bh.BlockCID. if not:
        • receiver requests bh.MessageCids from sender:
          • bm := Graphsync.Pull(sender, SelectAMTCIDs(bh.Messages))
    • Step 3. (receiver) Pull Messages:
      • if receiver DOES NOT already have all the messages for bh.BlockCID, then:
        • if receiver has some of the messages:
          • receiver requests missing Messages from sender:
            • Graphsync.Pull(sender, SelectAll(bm[3], bm[10], bm[50], ...)) or
            • for m in bm {
              Graphsync.Pull(sender, SelectAll(m))
              }
              
        • if receiver does not have any of the messages (default safe but expensive thing to do):
          • receiver requests all Messages from sender:
            • Graphsync.Pull(sender, SelectAll(bh.Messages))
        • (This is the largest amount of stuff)
    • Step 4. (receiver) Validate Block:
      • the only remaining thing to do is to complete Block Validation.

Calculations

Security Parameters
  • Peers >= 32 – direct connections
    • ideally Peers >= {64, 128}
Pubsub Bandwidth

These bandwidth calculations are used to motivate choices in ChainSync.

If you imagine that you will receive the header once per gossipsub peer (or if lucky, half of them), and that there is EC.E_LEADERS=10 blocks per round, then we’re talking the difference between:

16 peers, 1 pkt  -- 1 * 16 * 10 = 160 dup pkts (256KB) in <5s
16 peers, 4 pkts -- 4 * 16 * 10 = 640 dup pkts (1MB)   in <5s

32 peers, 1 pkt  -- 1 * 32 * 10 =   320 dup pkts (512KB) in <5s
32 peers, 4 pkts -- 4 * 32 * 10 = 1,280 dup pkts (2MB)   in <5s

64 peers, 1 pkt  -- 1 * 64 * 10 =   640 dup pkts (1MB)   in <5s
64 peers, 4 pkts -- 4 * 64 * 10 = 2,560 dup pkts (4MB)   in <5s

2MB in <5s may not be worth saving, and maybe gossipsub can be much better about suppressing dups.

Notes (TODO: move elsewhere)

Checkpoints
  • A checkpoint is the CID of a block (not a tipset list of CIDs, or StateTree)
  • The reason a block is OK is that it uniquely identifies a tipset.
  • using tipsets directly would make Checkpoints harder to communicate. We want to make checkpoints a single hash, as short as we can have it. They will be shared in tweets, URLs, emails, printed into newspapers, etc. Compactness, ease of copy-paste, etc matters.
  • we’ll make human readable lists of checkpoints, and making “lists of lists” is more annoying.
  • When we have EC.E_PARENTS > 5 or = 10, tipsets will get annoyingly large.
  • the big quirk/weirdness with blocks is that a checkpoint must also be in the chain. (If you relaxed that constraint, you could end up in a weird case where a checkpoint isn’t in the chain, which is weird and violates assumptions.)

Bootstrap chain stub
  • the mainnet filecoin chain will need to start with a small chain stub of blocks.
  • we must include some data in different blocks.
  • we do need a genesis block – we derive randomness from the ticket there. Rather than special casing, it is easier/less complex to ensure a well-formed chain always, including at the beginning
  • A lot of code expects lookbacks, especially actor code. Rather than introducing a bunch of special case logic for what happens ostensibly once in network history (special case logic which adds complexity and likelihood of problems), it is easiest to assume the chain is always at least X blocks long, and the system lookback parameters are all fine and don’t need to be scaled in the beginning of the network’s history.
PartialGraph

The PartialGraph of blocks.

Is a graph necessarily connected, or is this just a bag of blocks, with each disconnected subgraph being reported in heads/tails?

The latter. The partial graph is a DAG fragment, including disconnected components. Here’s a visual example: 4 example PartialGraphs, with Heads and Tails. (Note they aren’t tipsets.)

Storage Power Consensus

The Storage Power Consensus subsystem is the main interface which enables Filecoin nodes to agree on the state of the system. SPC accounts for individual storage miners’ effective power over consensus in given chains in its Power Table. It also runs Expected Consensus (the underlying consensus algorithm in use by Filecoin), enabling storage miners to run leader election and generate new blocks updating the state of the Filecoin system.

Succinctly, the SPC subsystem offers the following services:

  • Access to the Power Table for every subchain, accounting for individual storage miner power and total power on-chain.
  • Access to Expected Consensus for individual storage miners, enabling:
    • Access to verifiable randomness Tickets as needed in the rest of the protocol.
    • Running Secret Leader Election to produce new blocks.
    • Running Chain Selection across subchains using EC’s weighting function.
  • Identification of the most recently finalized tipset, for use by all protocol participants.

Much of the Storage Power Consensus subsystem’s functionality is detailed in the code below, but we touch upon some of its behaviors in more detail.

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import st "github.com/filecoin-project/specs/systems/filecoin_vm/state_tree"
import base_mining "github.com/filecoin-project/specs/systems/filecoin_mining"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"

type StoragePowerConsensusSubsystem struct {//(@mutable)
    // actor                StoragePowerActor
    associatedStateTree &st.StateTree  // TODO: remove this. should not store this here.

    GenerateElectionProof(tipset block.Tipset) block.ElectionProof
    ChooseTipsetToMine(tipsets [block.Tipset]) [block.Tipset]

    ec ExpectedConsensus

    // called by BlockchainSubsystem during block reception
    ValidateBlock(block block.Block) error

    IsWinningElectionProof(
        electionProof  block.ElectionProof
        workerAddr     addr.Address
    ) bool

    validateElectionProof(
        height         block.ChainEpoch
        electionProof  block.ElectionProof
        workerAddr     addr.Address
    ) bool

    validateTicket(tix block.Ticket, pk filcrypto.VRFPublicKey) bool

    computeChainWeight(tipset block.Tipset) block.ChainWeight

    StoragePowerConsensusError() StoragePowerConsensusError

    // Randomness methods

    // called by StorageMiningSubsystem during block production
    GetTicketProductionSeed(chain block.Chain, epoch block.ChainEpoch) block.Ticket

    // called by StorageMiningSubsystem during block production
    GetElectionProofSeed(chain block.Chain, epoch block.ChainEpoch) block.Ticket

    // called by StorageMiningSubsystem when sealing a sector
    GetSealSeed(chain block.Chain, epoch block.ChainEpoch) base_mining.SealSeed

    // called by StorageMiningSubsystem after sealing
    GetPoStChallenge(chain block.Chain, epoch block.ChainEpoch) base_mining.PoStChallenge

    GetFinality()     block.ChainEpoch
    FinalizedEpoch()  block.ChainEpoch
}

type StoragePowerConsensusError struct {}

Distinguishing between storage miners and block miners

There are two ways to earn Filecoin tokens in the Filecoin network:

  • By participating in the Storage Market as a storage provider and being paid by clients for file storage deals.
  • By mining new blocks on the network, helping modify system state and secure the Filecoin consensus mechanism.

We must distinguish between these two types of “miners” (storage miners and block miners). Secret Leader Election in Filecoin is predicated on a miner’s storage power. Thus, while all block miners will be storage miners, the reverse is not necessarily true.

However, given Filecoin’s “useful Proof-of-Work” is achieved through file storage (PoRep and PoSt), there is little overhead cost for storage miners to participate in leader election. Such a Storage Miner Actor need only register with the Storage Power Actor in order to participate in Expected Consensus and mine blocks.

Repeated leader election attempts

In the case that no miner is eligible to produce a block in a given round of EC, the block producer calls the storage power consensus subsystem to attempt another leader election: it increments the nonce appended to the ticket drawn from the past, crafts a new candidate ElectionProof, and tries again (see the sketch below).
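A minimal sketch of this retry loop, in Go (not spec code): sha256 stands in for the VRF that derives ElectionProofs from the worker key, and the winning check is a toy threshold rather than the real power-weighted target.

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// computeElectionProof derives a candidate proof from the lookback ticket and
// a nonce; sha256 is an illustrative stand-in for the VRF evaluation.
func computeElectionProof(lookbackTicket []byte, nonce uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, nonce)
	h := sha256.Sum256(append(append([]byte{}, lookbackTicket...), buf...))
	return h[:]
}

// isWinning is a toy stand-in for the power-weighted eligibility check.
func isWinning(proof []byte) bool {
	return proof[0] < 8 // wins on roughly 3% of draws in this sketch
}

func main() {
	ticket := []byte("lookback-ticket")
	for nonce := uint64(0); nonce < 1000; nonce++ {
		if proof := computeElectionProof(ticket, nonce); isWinning(proof) {
			fmt.Printf("valid ElectionProof at nonce %d\n", nonce)
			return
		}
	}
	fmt.Println("no valid proof; try again next round")
}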

The Ticket chain and randomness on-chain

While each Filecoin block header contains a ticket field (see Tickets), it is useful to provide nodes with a ticket chain abstraction.

Namely, tickets are used throughout the Filecoin system as sources of on-chain randomness. For instance:

  • The Sector Sealer uses tickets as SealSeeds to bind sector commitments to a given subchain.
  • The Storage Miner likewise uses tickets as PoStChallenges to prove sectors remain committed as of a given block.
  • They are drawn by the Storage Power subsystem as randomness in Secret Leader Election to determine a miner’s eligibility to mine a block.
  • They are drawn by the Storage Power subsystem in order to generate new tickets for future use.

Each of these ticket uses may require drawing tickets at different chain heights, according to the security requirements of the particular protocol making use of tickets. Due to the nature of Filecoin’s Tipsets and the possibility of using losing tickets (that did not yield leaders in leader election) for randomness at a given height, tracking the canonical ticket of a subchain at a given height can be arduous to reason about in terms of blocks. To that end, it is helpful to create a ticket chain abstraction made up of only those tickets to be used for randomness at a given height.

This ticket chain will track one-to-one with a block at each height in a given subchain, but omits certain details including other blocks mined at that height.

It is composed inductively as follows. For a given chain:

  • At height 0, take the genesis block, return its ticket
  • At height n+1, take the heaviest tipset in our chain at height n.
    • select the block in that tipset with the smallest final ticket, return its ticket

Because a Tipset can contain multiple blocks, the smallest ticket in the Tipset must be drawn; otherwise the block will be invalid.

   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
   β”‚                      β”‚
   β”‚                      β”‚
   β”‚β”Œβ”€β”€β”€β”€β”                β”‚
   β”‚β”‚ TA β”‚              A β”‚
   β””β”΄β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

   β”Œ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
                          β”‚
   β”‚
    β”Œβ”€β”€β”€β”€β”                β”‚       TA < TB < TC
   β”‚β”‚ TB β”‚              B
    β”΄β”€β”€β”€β”€β”˜β”€ ─ ─ ─ ─ ─ ─ ─ β”˜

   β”Œ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
                          β”‚
   β”‚
    β”Œβ”€β”€β”€β”€β”                β”‚
   β”‚β”‚ TC β”‚              C
    β”΄β”€β”€β”€β”€β”˜β”€ ─ ─ ─ ─ ─ ─ ─ β”˜

In the above diagram, a miner will use block A’s Ticket to generate a new ticket (or an election proof farther in the future), since TA is the smallest ticket in the Tipset. A sketch of this derivation follows.
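A minimal sketch of the inductive rule above, in Go (not spec code): tickets are plain byte strings and “smallest” is bytewise comparison, an assumption standing in for the spec’s ticket ordering.

package main

import (
	"bytes"
	"fmt"
)

type Block struct{ Ticket []byte }

// Tipset is the set of blocks mined at one height.
type Tipset []Block

// minTicket returns the smallest ticket among the tipset's blocks.
func minTicket(ts Tipset) []byte {
	min := ts[0].Ticket
	for _, b := range ts[1:] {
		if bytes.Compare(b.Ticket, min) < 0 {
			min = b.Ticket
		}
	}
	return min
}

// ticketChain maps a chain (the heaviest tipset at each height, genesis
// first) to its ticket chain: exactly one ticket per height.
func ticketChain(chain []Tipset) [][]byte {
	out := make([][]byte, 0, len(chain))
	for _, ts := range chain {
		out = append(out, minTicket(ts))
	}
	return out
}

func main() {
	chain := []Tipset{
		{{Ticket: []byte("genesis")}},
		{{Ticket: []byte("TB")}, {Ticket: []byte("TA")}, {Ticket: []byte("TC")}},
	}
	for h, t := range ticketChain(chain) {
		fmt.Printf("height %d: %s\n", h, t)
	}
	// height 1 yields TA, matching the diagram: TA < TB < TC.
}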

Storage Power Actor

StoragePowerActor interface
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import util "github.com/filecoin-project/specs/util"

type PowerTableEntry struct {
    ActivePower             block.StoragePower
    InactivePower           block.StoragePower
    AvailableBalance        actor.TokenAmount
    LockedPledgeCollateral  actor.TokenAmount
}

type PowerReport struct {
    ActivePower    block.StoragePower  // set value
    InactivePower  block.StoragePower  // set value
}

type FaultReport struct {
    NewDeclaredFaults          util.UVarint  // diff value
    NewDetectedFaults          util.UVarint  // diff value
    NewTerminatedFaults        util.UVarint  // diff value

    GetDeclaredFaultSlash()    actor.TokenAmount
    GetDetectedFaultSlash()    actor.TokenAmount
    GetTerminatedFaultSlash()  actor.TokenAmount
}

// type PowerTableHAMT {actor.ActorID: PowerTableEntry}
type PowerTableHAMT {addr.Address: PowerTableEntry}  // TODO: convert address to ActorID

// TODO: What does graceful removal look like?

type StoragePowerActorState struct {
    // PowerTable is a mapping from MinerActorID to PowerTableEntry
    PowerTable  PowerTableHAMT
    EC          ExpectedConsensus

    _slashPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount)
    _lockPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount)
    _unlockPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount)
    _getPledgeCollateralReq(rt Runtime, newPower block.StoragePower) actor.TokenAmount
    _sampleMinersToSurprise(rt Runtime, challengeCount int) [addr.Address]
}

type StoragePowerActorCode struct {
    AddBalance(rt Runtime)
    WithdrawBalance(rt Runtime, amount actor.TokenAmount)

    // called by StorageMiningSubsystem on miner creation
    CreateStorageMiner(
        // TODO: document differences in Addr, Key and ID across spec
        rt          Runtime
        ownerAddr   addr.Address
        workerAddr  addr.Address
        peerId      libp2p.PeerID  // TODO: will be removed likely (see: https://github.com/filecoin-project/specs/pull/555#pullrequestreview-300991681)
    ) addr.Address

    RemoveStorageMiner(rt Runtime, addr addr.Address)

    // PowerTable Operations
    GetTotalPower(rt Runtime) block.StoragePower

    EnsurePledgeCollateralSatisfied(rt Runtime) bool

    ProcessPowerReport(rt Runtime, report PowerReport)
    ProcessFaultReport(rt Runtime, report FaultReport)
    ReportConsensusFault(
        // slasherAddr  addr.Address TODO: fromActor
        rt         Runtime
        faultType  ConsensusFaultType
        proof      [block.Block]
    )

    Surprise(rt Runtime, ticket block.Ticket) [addr.Address]
    // this should be part of ReportConsensusFault, numSectors should be all sectors
    // ReportUncommittedPowerFault(cheaterAddr addr.Address, numSectors UVarint)
}
StoragePowerActor implementation
package storage_power_consensus

import (
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	libp2p "github.com/filecoin-project/specs/libraries/libp2p"
	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
	vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
	util "github.com/filecoin-project/specs/util"
)

const (
	StoragePowerActor_ProcessPowerReport actor.MethodNum = 1
	StoragePowerActor_ProcessFaultReport actor.MethodNum = 2
)

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////
type InvocOutput = msg.InvocOutput
type Runtime = vmr.Runtime
type Bytes = util.Bytes
type State = StoragePowerActorState

func (a *StoragePowerActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, State) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.Abort("IPLD lookup error")
	}
	state := DeserializeState(stateBytes.As_Bytes())
	return h, state
}
func Release(rt Runtime, h vmr.ActorStateHandle, st State) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st State) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *StoragePowerActorState_I) CID() ipld.CID {
	panic("TODO")
}
func DeserializeState(x Bytes) State {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

func (r *FaultReport_I) GetDeclaredFaultSlash() actor.TokenAmount {
	return actor.TokenAmount(0)
}

func (r *FaultReport_I) GetDetectedFaultSlash() actor.TokenAmount {
	return actor.TokenAmount(0)
}

func (r *FaultReport_I) GetTerminatedFaultSlash() actor.TokenAmount {
	return actor.TokenAmount(0)
}

////////////////////////////////////////////////////////////////////////////////

func (st *StoragePowerActorState_I) _slashPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount) {
	if amount < 0 {
		rt.Abort("negative amount.")
	}

	// TODO: convert address to MinerActorID
	var minerID addr.Address

	currEntry, found := st.PowerTable()[minerID]
	if !found {
		rt.Abort("minerID not found.")
	}

	amountToSlash := amount

	if currEntry.Impl().LockedPledgeCollateral() < amount {
		amountToSlash = currEntry.Impl().LockedPledgeCollateral_
		currEntry.Impl().LockedPledgeCollateral_ = 0
		// TODO: extra handling of not having enough pledgecollateral to be slashed
	} else {
		currEntry.Impl().LockedPledgeCollateral_ = currEntry.LockedPledgeCollateral() - amount
	}

	// TODO: send amountToSlash to TreasuryActor
	panic(amountToSlash)
	st.Impl().PowerTable_[minerID] = currEntry

	// TODO: commit state change
}

// TODO: batch process this if possible
func (st *StoragePowerActorState_I) _lockPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount) {
	// AvailableBalance -> LockedPledgeCollateral
	// TODO: potentially unnecessary check
	if amount < 0 {
		rt.Abort("negative amount.")
	}

	// TODO: convert address to MinerActorID
	var minerID addr.Address

	currEntry, found := st.PowerTable()[minerID]
	if !found {
		rt.Abort("minerID not found.")
	}

	if currEntry.Impl().AvailableBalance() < amount {
		rt.Abort("insufficient available balance.")
	}

	currEntry.Impl().AvailableBalance_ = currEntry.AvailableBalance() - amount
	currEntry.Impl().LockedPledgeCollateral_ = currEntry.LockedPledgeCollateral() + amount
	st.Impl().PowerTable_[minerID] = currEntry
}

func (st *StoragePowerActorState_I) _unlockPledgeCollateral(rt Runtime, address addr.Address, amount actor.TokenAmount) {
	// lockedPledgeCollateral -> AvailableBalance
	if amount < 0 {
		rt.Abort("negative amount.")
	}

	// TODO: convert address to MinerActorID
	var minerID addr.Address

	currEntry, found := st.PowerTable()[minerID]
	if !found {
		rt.Abort("minerID not found.")
	}

	if currEntry.Impl().LockedPledgeCollateral() < amount {
		rt.Abort("insufficient locked balance.")
	}

	currEntry.Impl().LockedPledgeCollateral_ = currEntry.LockedPledgeCollateral() - amount
	currEntry.Impl().AvailableBalance_ = currEntry.AvailableBalance() + amount
	st.Impl().PowerTable_[minerID] = currEntry

}

func (st *StoragePowerActorState_I) _getPledgeCollateralReq(rt Runtime, power block.StoragePower) actor.TokenAmount {

	// TODO: Implement
	pcRequired := actor.TokenAmount(0)

	return pcRequired
}

func (st *StoragePowerActorState_I) _sampleMinersToSurprise(rt Runtime, challengeCount int) []addr.Address {
	// this won't quite work -- a.PowerTable() is a HAMT keyed by actor address, and
	// doesn't support enumerating by int index. Maybe we need that as an interface too,
	// or something similar to an iterator (or an iterator over the keys),
	// or even a seeded random call directly in the HAMT: myhamt.GetRandomElement(seed []byte, idx int)

	allMiners := make([]addr.Address, len(st.PowerTable()))
	index := 0

	for address := range st.PowerTable() {
		allMiners[index] = address
		index++
	}

	return postSurpriseSample(rt, allMiners, challengeCount)
}

// postSurpriseSample implements the PoSt-Surprise sampling algorithm
func postSurpriseSample(rt Runtime, allMiners []addr.Address, challengeCount int) []addr.Address {

	sm := make([]addr.Address, 0, challengeCount) // zero length, capacity challengeCount: append fills exactly challengeCount entries
	for i := 0; i < challengeCount; i++ {
		// rInt := rt.NextRandomInt() // we need something like this in the runtime
		rInt := 4 // xkcd prng for now.
		miner := allMiners[rInt%len(allMiners)]
		sm = append(sm, miner)
	}

	return sm
}

////////////////////////////////////////////////////////////////////////////////

func (a *StoragePowerActorCode_I) AddBalance(rt Runtime) {

	var msgValue actor.TokenAmount

	// TODO: this should be enforced somewhere else
	if msgValue < 0 {
		rt.Abort("negative message value.")
	}

	// TODO: convert msgSender to MinerActorID
	var minerID addr.Address

	h, st := a.State(rt)

	currEntry, found := st.PowerTable()[minerID]

	if !found {
		// AddBalance will just fail if miner is not created beforehand
		rt.Abort("minerID not found.")
	}
	currEntry.Impl().AvailableBalance_ = currEntry.AvailableBalance() + msgValue
	st.Impl().PowerTable_[minerID] = currEntry

	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActorCode_I) WithdrawBalance(rt Runtime, amount actor.TokenAmount) {

	if amount < 0 {
		rt.Abort("negative amount.")
	}

	// TODO: convert msgSender to MinerActorID
	var minerID addr.Address

	h, st := a.State(rt)

	currEntry, found := st.PowerTable()[minerID]
	if !found {
		rt.Abort("minerID not found.")
	}

	if currEntry.AvailableBalance() < amount {
		rt.Abort("insufficient balance.")
	}

	currEntry.Impl().AvailableBalance_ = currEntry.AvailableBalance() - amount
	st.Impl().PowerTable_[minerID] = currEntry

	UpdateRelease(rt, h, st)

	// TODO: send funds to msgSender
}

func (a *StoragePowerActorCode_I) CreateStorageMiner(
	rt Runtime,
	ownerAddr addr.Address,
	workerAddr addr.Address,
	peerId libp2p.PeerID,
) addr.Address {

	// TODO: anything to check here?
	newMiner := &PowerTableEntry_I{
		ActivePower_:            block.StoragePower(0),
		InactivePower_:          block.StoragePower(0),
		AvailableBalance_:       actor.TokenAmount(0),
		LockedPledgeCollateral_: actor.TokenAmount(0),
	}

	// TODO: call constructor of StorageMinerActor
	// store ownerAddr and workerAddr there
	// and return StorageMinerActor address

	// TODO: minerID should be a MinerActorID
	// which is smaller than MinerAddress
	var minerID addr.Address

	h, st := a.State(rt)

	st.PowerTable()[minerID] = newMiner

	UpdateRelease(rt, h, st)

	return minerID

}

func (a *StoragePowerActorCode_I) RemoveStorageMiner(rt Runtime, address addr.Address) {

	// TODO: make explicit address type
	var minerID addr.Address

	h, st := a.State(rt)

	if (st.PowerTable()[minerID].ActivePower() + st.PowerTable()[minerID].InactivePower()) > 0 {
		rt.Abort("power still remains.")
	}

	delete(st.PowerTable(), minerID)

	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActorCode_I) GetTotalPower(rt Runtime) block.StoragePower {

	totalPower := block.StoragePower(0)

	h, st := a.State(rt)

	for _, miner := range st.PowerTable() {
		totalPower = totalPower + miner.ActivePower() + miner.InactivePower()
	}

	Release(rt, h, st)

	return totalPower
}

func (a *StoragePowerActorCode_I) EnsurePledgeCollateralSatisfied(rt Runtime) bool {

	ret := false

	// TODO: convert msgSender to MinerActorID
	var minerID addr.Address

	h, st := a.State(rt)

	powerEntry, found := st.PowerTable()[minerID]

	if !found {
		rt.Abort("miner not found.")
	}

	pledgeCollateralRequired := st._getPledgeCollateralReq(rt, powerEntry.ActivePower()+powerEntry.InactivePower())

	if pledgeCollateralRequired < powerEntry.LockedPledgeCollateral() {
		ret = true
	} else if pledgeCollateralRequired < (powerEntry.LockedPledgeCollateral() + powerEntry.AvailableBalance()) {
		st._lockPledgeCollateral(rt, minerID, (pledgeCollateralRequired - powerEntry.LockedPledgeCollateral()))
		ret = true
	}

	UpdateRelease(rt, h, st)

	return ret
}

func (a *StoragePowerActorCode_I) ProcessFaultReport(rt Runtime, report FaultReport) {

	var msgSender addr.Address // TODO replace this

	h, st := a.State(rt)

	declaredFaultSlash := report.GetDeclaredFaultSlash()
	detectedFaultSlash := report.GetDetectedFaultSlash()
	terminatedFaultSlash := report.GetTerminatedFaultSlash()

	st._slashPledgeCollateral(rt, msgSender, (declaredFaultSlash + detectedFaultSlash + terminatedFaultSlash))

	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActorCode_I) ProcessPowerReport(rt Runtime, report PowerReport) {

	// TODO: convert msgSender to MinerActorID
	var minerID addr.Address

	h, st := a.State(rt)

	powerEntry, found := st.PowerTable()[minerID]

	if !found {
		rt.Abort("miner not found.")
	}
	powerEntry.Impl().ActivePower_ = report.ActivePower()
	powerEntry.Impl().InactivePower_ = report.InactivePower()
	st.Impl().PowerTable_[minerID] = powerEntry

	UpdateRelease(rt, h, st)
}

func (a *StoragePowerActorCode_I) ReportConsensusFault(rt Runtime, slasherAddr addr.Address, faultType ConsensusFaultType, proof []block.Block) {
	panic("TODO")

	// Use EC's IsValidConsensusFault method to validate the proof
	// slash block miner's pledge collateral
	// reward slasher

	// include ReportUncommittedPowerFault(cheaterAddr addr.Address, numSectors util.UVarint) as case
	// Quite a bit more straightforward since only called by the cron actor (ie publicly verified)
	// slash cheater pledge collateral accordingly based on num sectors faulted

}

// TODO: add Surprise to the cron actor
func (a *StoragePowerActorCode_I) Surprise(rt Runtime, ticket block.Ticket) {

	// The number of blocks that a challenged miner has to respond
	// TODO: this should be set in.. spa?
	// var postChallengeTime util.UInt

	var provingPeriod uint // TODO

	// sample the actor addresses
	h, st := a.State(rt)

	challengeCount := len(st.PowerTable()) / int(provingPeriod)
	surprisedMiners := st._sampleMinersToSurprise(rt, challengeCount)

	UpdateRelease(rt, h, st)

	// now send the messages
	for _, minerAddr := range surprisedMiners { // don't shadow the addr package
		// TODO: rt.SendMessage(minerAddr, ...)
		panic(minerAddr)
	}
}

func (a *StoragePowerActorCode_I) InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	panic("TODO")
}
The Power Table

The portion of blocks a given miner generates through leader election in EC (and so the block rewards they earn) is proportional to their Power Fraction over time. That is, a miner whose storage represents 1% of total storage on the network should mine 1% of blocks on expectation.

SPC provides a power table abstraction which tracks miner power (i.e. miner storage in relation to network storage) over time. The power table is updated for new sector commitments (incrementing miner power), when PoSts fail to be put on-chain (decrementing miner power) or for other storage and consensus faults.

An invariant of the storage power consensus subsystem is that all storage in the power table must be verified. That is, miners can only derive power from storage they have already proven to the network.

In order to achieve this, Filecoin delays updating power for new sector commitments until the first valid PoSt in the next proving period corresponding to that sector. (TODO: potentially delay this further in order to ensure that any power cut goes undetected at most as long as the shortest power delay on new sector commitments.)

For instance, say a miner X does the following:

  • In epoch 100: commits 10 TB
  • In epoch 110: publishes a PoSt for their storage
  • In epoch 120: commits another 10 TB
  • In epoch 135: publishes a new PoSt for their storage

Querying the power table for this miner at different rounds should then yield (using the following shorthand as an illustration only):

  • Power(X, 90) == 0
  • Power(X, 100) == 0
  • Power(X, 110) == 0
  • Power(X, 111) == 10
  • Power(X, 120) == 10
  • Power(X, 135) == 10
  • Power(X, 136) == 20
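A minimal sketch reproducing this delayed accounting, in Go (not spec code; the event replay is an illustrative simplification of on-chain state): committed power is held pending and only counts strictly after the first PoSt that follows the commitment.

package main

import "fmt"

type powerEvent struct {
	epoch     int
	committed int  // TB committed at this epoch (held pending until a PoSt)
	post      bool // a valid PoSt lands at this epoch
}

// powerAt replays events (sorted by epoch) and returns the power counted for
// the miner at the queried epoch. Pending power activates strictly after the
// epoch of the PoSt that proves it.
func powerAt(events []powerEvent, epoch int) int {
	counted, pending := 0, 0
	for _, e := range events {
		if e.epoch > epoch {
			break
		}
		pending += e.committed
		if e.post && e.epoch < epoch {
			counted += pending
			pending = 0
		}
	}
	return counted
}

func main() {
	events := []powerEvent{
		{epoch: 100, committed: 10},
		{epoch: 110, post: true},
		{epoch: 120, committed: 10},
		{epoch: 135, post: true},
	}
	for _, e := range []int{90, 100, 110, 111, 120, 135, 136} {
		fmt.Printf("Power(X, %d) == %d\n", e, powerAt(events, e))
	}
	// Prints 0, 0, 0, 10, 10, 10, 20 -- matching the list above.
}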

Conversely, storage faults only lead to power loss once they are detected (up to one proving period after the fault), so miners will mine with no more power than they have used to store data over time.

Put another way, power accounting in the SPC is delayed between storage being proven or faulted, and power being updated in the power table (and so for leader election). This ensures fairness over time.

The Miner lifecycle in the power table should be roughly as follows:

  • MinerRegistration: A new miner with an associated worker public key and address is registered on the power table by the storage mining subsystem, along with their associated sector size (there is only one per worker).
  • UpdatePower: These power increments and decrements (in multiples of the associated sector size) are called by various storage actors (and must thus be verified by every full node on the network). Specifically:
    • Power is incremented to account for a new SectorCommitment at the first PoSt past the first ProvingPeriod.
    • All power is decremented immediately after a missed PoSt.
    • Power is decremented immediately after faults are declared, proportional to the faulty sector size.
    • Power is incremented after a PoSt recovering from a fault.
    • Power is definitively removed from the Power Table past the sector failure timeout (see Faults).

To summarize, only sectors in the Active state command power. A Sector becomes Active after its first PoSt from the Committed or Recovering state. Power is immediately decremented when an Active Sector enters the Failing state (through DeclareFaults or Cron) and when an Active Sector expires.

Pledge Collateral

Consensus in Filecoin is secured in part by economic incentives enforced by Pledge Collateral.

Pledge collateral amount is committed based on power pledged to the system (i.e. proportional to the number of sectors committed and the sector size for a miner). It is a system-wide parameter and is committed to the StoragePowerActor. TODO: define parameter value. Pledge Collateral submission methods take on storage deals to determine the appropriate amount of collateral to be pledged. Pledge collateral can be posted by a miner via the StorageMinerActor at any time up to its sector commitments. A sector commitment without the requisite posted pledge collateral will be deemed invalid. A sketch of the proportional requirement follows.
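A minimal sketch of a proportional requirement, in Go (not spec code): the per-byte rate below is a made-up placeholder, since the spec leaves the parameter value as a TODO.

package main

import "fmt"

// collateralPerByte is a placeholder rate, NOT a spec value: the system-wide
// parameter is explicitly left as TODO in the text above.
const collateralPerByte = 0.000001 // FIL per byte

// pledgeCollateralReq is proportional to pledged power: number of committed
// sectors times the miner's sector size.
func pledgeCollateralReq(numSectors, sectorSize int) float64 {
	return collateralPerByte * float64(numSectors*sectorSize)
}

func main() {
	// e.g. 100 sectors of 32 GiB each
	fmt.Printf("required pledge: %.2f FIL (placeholder rate)\n",
		pledgeCollateralReq(100, 32<<30))
}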

Pledge Collateral will be slashed when Consensus Faults are reported to the StoragePowerActor’s ReportConsensusFault method or when the CronActor calls the StoragePowerActor’s ReportUncommittedPowerFault method.

Pledge Collateral is slashed for any fault affecting storage-power consensus. These include:

  • faults to expected consensus in particular (see Consensus Faults), which will be reported by a slasher to the StoragePowerActor in exchange for a reward.
  • faults affecting consensus power more generally, specifically uncommitted power faults (i.e. Faults), which will be reported by the CronActor automatically.

Token

FIL Wallet

Payments

Payment Channels

Payment Channel Actor

(You can see the old Payment Channel Actor here.)

type Voucher struct {}
type VouchersApprovalResponse struct {}
type PieceInclusionProof struct {}

type PaymentChannelActor struct {
    RedeemVoucherWithApproval(voucher Voucher)
    RedeemVoucherWithPIP(voucher Voucher, pip PieceInclusionProof)
}

Multisig - Wallet requiring multiple signatures

Multisig Actor

(You can see the old Multisig Actor here.)

import address "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"

type TxSeqNo UVarint
type NumRequired UVarint
type EpochDuration UVarint
type Epoch UVarint

type MultisigActor struct {
    signers         [address.Address]
    required        NumRequired
    nextTxId        TxSeqNo
    initialBalance  actor.TokenAmount
    startingBlock   Epoch
    unlockDuration  EpochDuration
    // transactions    {TxSeqNo: Transaction} // TODO Transaction type does not exist

    Construct(
        signers         [address.Address]
        required        NumRequired
        unlockDuration  EpochDuration
    )
    Propose(
        to      address.Address
        value   actor.TokenAmount
        method  string
        params  Bytes
    ) TxSeqNo
    Approve(txid TxSeqNo)
    Cancel(txid TxSeqNo)
    ClearCompleted()
    AddSigner(signer address.Address, increaseReq bool)
    RemoveSigner(signer address.Address, decreaseReq bool)
    SwapSigner(old address.Address, new address.Address)
    ChangeRequirement(req NumRequired)
}
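A minimal sketch of the Propose/Approve flow the interface above implies, in Go (not spec code): a transaction executes once `required` distinct signers have approved it. Signature checking, parameter encoding, and actual value transfer are elided.

package main

import "fmt"

type Tx struct {
	To        string
	Value     int
	Approvals map[string]bool
}

type Multisig struct {
	Signers  map[string]bool
	Required int
	NextTxID int
	Pending  map[int]*Tx
}

// Propose registers a pending transaction; proposing counts as the first
// approval, mirroring typical multisig semantics (an assumption here).
func (m *Multisig) Propose(proposer, to string, value int) int {
	id := m.NextTxID
	m.NextTxID++
	m.Pending[id] = &Tx{To: to, Value: value, Approvals: map[string]bool{}}
	m.Approve(proposer, id)
	return id
}

// Approve records a signer's approval and executes the transaction once the
// approval threshold is reached.
func (m *Multisig) Approve(signer string, id int) {
	tx, ok := m.Pending[id]
	if !ok || !m.Signers[signer] {
		return // unknown tx or unauthorized signer
	}
	tx.Approvals[signer] = true
	if len(tx.Approvals) >= m.Required {
		fmt.Printf("executing tx %d: send %d to %s\n", id, tx.Value, tx.To)
		delete(m.Pending, id)
	}
}

func main() {
	m := &Multisig{
		Signers:  map[string]bool{"alice": true, "bob": true, "carol": true},
		Required: 2,
		Pending:  map[int]*Tx{},
	}
	id := m.Propose("alice", "f1dest", 10)
	m.Approve("bob", id) // second approval reaches the threshold; executes
}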

Storage Mining System - proving storage for producing blocks

The Storage Mining System is the part of the Filecoin Protocol that deals with storing clients’ data, producing proof artifacts that demonstrate correct storage behavior, and managing the work involved.

Storing data and producing proofs is a complex, highly optimizable process, with lots of tunable choices. Miners should explore the design space to arrive at something that (a) satisfies protocol and network-wide constraints, (b) satisfies clients’ requests and expectations (as expressed in Deals), and (c) gives them the most cost-effective operation. This part of the Filecoin Spec primarily describes in detail what MUST and SHOULD happen here, and leaves ample room for various optimizations for implementers, miners, and users to make. In some parts, we describe algorithms that could be replaced by other, more optimized versions, but in those cases it is important that the protocol constraints are satisfied. The protocol constraints are spelled out in clear detail (an unclear, unmentioned constraint is a “spec error”). It is up to implementers who deviate from the algorithms presented here to ensure their modifications satisfy those constraints, especially those relating to protocol security.

Storage Miner

TODO:

  • rename “Storage Mining Worker” ?

Filecoin Storage Mining Subsystem

import sectoridx "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index"
import spc "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"
import filcrypto "github.com/filecoin-project/specs/algorithms/crypto"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import address "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import blockchain "github.com/filecoin-project/specs/systems/filecoin_blockchain"
import blockproducer "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block_producer"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import storage_proving "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving"
import key_store "github.com/filecoin-project/specs/systems/filecoin_nodes/key_store"

type StorageMiningSubsystem struct {
    // TODO: constructor
    // InitStorageMiningSubsystem() struct{}

    // Component subsystems
    // StorageProvider    storage_provider.StorageProvider
    StoragePowerActor  spc.StoragePowerActorCode
    MinerActor         StorageMinerActorCode
    SectorIndex        sectoridx.SectorIndexerSubsystem
    StorageProving     storage_proving.StorageProvingSubsystem

    // Need access to block producer in order to publish blocks
    blockProducer      blockproducer.BlockProducer

    // Need access to SPC in order to mine a block
    consensus          spc.StoragePowerConsensusSubsystem

    // Need access to the blockchain system in order to query for things in the chain
    blockchain         blockchain.BlockchainSubsystem

    // Need access to the key store in order to generate tickets and election proofs
    keyStore           key_store.KeyStore

    // TODO: why are these here? remove?
    StartMining()
    StopMining()

    // call by StorageMiningSubsystem itself to create miner
    createMiner(
        ownerPubKey   filcrypto.PublicKey
        workerPubKey  filcrypto.PublicKey
        pledgeAmt     actor.TokenAmount
    )

    // get miner key by address
    GetMinerKeyByAddress(addr address.Address) filcrypto.PublicKey

    // call by StorageMarket.StorageProvider at the start of a deal.
    // Triggers AddNewDeal on SectorIndexer
    // StorageDeal contains DealCID
    HandleStorageDeal(deal deal.StorageDeal)

    // call by StorageMinerActor when error in sealing
    CommitSectorError()

    // call by StorageMiningSubsystem itself in BlockProduction
    DrawElectionProof(
        lookbackTicket  block.Ticket
        height          block.ChainEpoch
        vrfKP           filcrypto.VRFKeyPair
    ) block.ElectionProof

    // call by StorageMiningSubsystem itself in BlockProduction
    PrepareNewTicket(priorTicket block.Ticket, workerKeyPair filcrypto.VRFKeyPair) block.Ticket

    // call by BlockChain when a new block is produced
    OnNewBestChain()

    // call by clock during BlockProduction
    // TODO: define clock better
    OnNewRound()

    tryLeaderElection()
}

Sector in StorageMiner State Machine (new one)

Sector State (new one) (open in new tab)
Sector State Legend (new one) (open in new tab)

Sector in StorageMiner State Machine (both)

Sector State Machine (both) (open in new tab)

Storage Mining Cycle

Block miners should constantly be performing Proofs of SpaceTime, and also checking whether they have a winning ticket to propose a block at each height (i.e. in each round). Rounds are currently set to take around 30 seconds, in order to account for network propagation around the world. The details of both processes are defined here; a sketch of the round loop follows.
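A minimal sketch of the per-round loop, in Go (not spec code): checkElection is an illustrative stand-in for the StorageMiningSubsystem's tryLeaderElection, and continuous PoSt work is elided.

package main

import (
	"fmt"
	"time"
)

// roundTime is the current round length per the text above.
const roundTime = 30 * time.Second

// checkElection is a toy stand-in for the leader-election win check.
func checkElection(round int) bool { return round%7 == 0 }

func main() {
	// One iteration per round: a real node would tick every roundTime;
	// the demo divides it down so the program terminates quickly.
	ticker := time.NewTicker(roundTime / 10000)
	defer ticker.Stop()
	for round := 0; round < 15; round++ {
		<-ticker.C // a new round begins (cf. OnNewRound)
		// Keep proving storage continuously (PoSt) -- elided here.
		if checkElection(round) {
			fmt.Println("won election; producing block for round", round)
		}
	}
}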

The Miner Actor

After successfully calling CreateStorageMiner, a miner actor will be created on-chain, and registered in the storage market. This miner, like all other Filecoin State Machine actors, has a fixed set of methods that can be used to interact with or control it.

import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import sealing "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import address "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import poster "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/poster"

type Seed struct {}
type SectorCommitment struct {}

type SectorExpirationQueueItem struct {
    SectorNumber  sector.SectorNumber
    Expiration    block.ChainEpoch
}

type SectorExpirationQueue struct {
    Add(i SectorExpirationQueueItem)
    Pop() SectorExpirationQueueItem
    Peek() SectorExpirationQueueItem
    Remove(n sector.SectorNumber)
}

type SectorTable struct {
    SectorSize             sector.SectorSize
    ActiveSectors          util.UVarint
    CommittedSectors       util.UVarint
    RecoveringSectors      util.UVarint
    FailingSectors         util.UVarint

    ActivePower()          block.StoragePower
    InactivePower()        block.StoragePower

    TerminationFaultCount  util.UVarint  // transient state that gets reset on every constructPowerReport
}

type SectorOnChainInfo struct {
    SealCommitment  sector.SealCommitment
    State           SectorState
}

type ChallengeStatus struct {
    LastChallengeEpoch     block.ChainEpoch  // get updated by NotifyOfPoStChallenge
    LastChallengeEndEpoch  block.ChainEpoch  // get updated upon successful submitPoSt

    IsChallenged()         bool
}

type PreCommittedSector struct {
    Info           sealing.SectorPreCommitInfo
    ReceivedEpoch  block.ChainEpoch
}

type PreCommittedSectorsAMT {sector.SectorNumber: PreCommittedSector}
type SectorsAMT {sector.SectorNumber: SectorOnChainInfo}
type StagedCommittedSectorAMT {sector.SectorNumber: SectorOnChainInfo}

type StorageMinerActorState struct {
    // CollateralVault CollateralVault

    PreCommittedSectors        PreCommittedSectorsAMT
    Sectors                    SectorsAMT
    StagedCommittedSectors     StagedCommittedSectorAMT

    // ProvingSet gets copied over to NextProvingSet on PoSt challenge and CheckPoStSubmissionHappened
    // a successful SubmitPoSt will perform changes to NextProvingSet
    // and update ProvingSet with NextProvingSet at the end
    // Neither DeclareFaults nor CommitSector can happen while the SM is in the isChallenged state
    ProvingSet                 sector.CompactSectorSet

    SectorTable
    SectorExpirationQueue
    ChallengeStatus

    // contains mostly static info about this miner
    Info                       &MinerInfo

    // TODO ProvingPeriodEnd   Epoch

    _isChallenged(rt Runtime)  bool
    _isSealVerificationCorrect(rt Runtime, onChainInfo sector.OnChainSealVerifyInfo) bool
    _sectorExists(sectorNo sector.SectorNumber) bool

    _updateFailSector(
        rt                   Runtime
        sectorNo             sector.SectorNumber
        incrementFaultCount  bool
    )
    _updateExpireSectors(rt Runtime)
    _updateCommittedSectors(rt Runtime)
    _updateClearSector(rt Runtime, sectorNo sector.SectorNumber)
    _updateActivateSector(rt Runtime, sectorNo sector.SectorNumber)
}

type StorageMinerActorCode struct {
    NotifyOfPoStChallenge(rt Runtime)

    PreCommitSector(rt Runtime, info sector.SectorPreCommitInfo)  // TODO: check with Magik on sizes
    ProveCommitSector(rt Runtime, info sector.SectorProveCommitInfo)

    SubmitPoSt(rt Runtime, postSubmission poster.PoStSubmission)

    CheckPoStSubmissionHappened(rt Runtime)

    // TODO: should depledge be in here or in storage market actor?

    DeclareFaults(rt Runtime, failingSet sector.CompactSectorSet)

    RecoverFaults(rt Runtime, recoveringSet sector.CompactSectorSet)

    _onMissedPoSt(rt Runtime)

    _submitFaultReport(
        rt          Runtime
        declared    util.UVarint
        detected    util.UVarint
        terminated  util.UVarint
    )
    _submitPowerReport(rt Runtime)

    _verifyPoStSubmission(rt Runtime, postSubmission poster.PoStSubmission) bool

    // _computeProvingPeriodEndSectorState(rt Runtime)  // TODO

    _expirePreCommittedSectors(rt Runtime)
}

type MinerInfo struct {
    // Account that owns this miner.
    // - Income and returned collateral are paid to this address.
    // - This address is also allowed to change the worker address for the miner.
    Owner           address.Address

    // Worker account for this miner.
    // This will be the key that is used to sign blocks created by this miner, and
    // sign messages sent on behalf of this miner to commit sectors, submit PoSts, and
    // other day to day miner activities.
    Worker          address.Address

    // Libp2p identity that should be used when connecting to this miner.
    PeerId          libp2p.PeerID

    // Amount of space in each sector committed to the network by this miner.
    SectorSize      util.BytesAmount
    SubsectorCount  util.UVarint
    Partitions      util.UVarint
}
package storage_mining

import (
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

	ipld "github.com/filecoin-project/specs/libraries/ipld"

	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

	poster "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/poster"

	power "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"

	proving "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving"

	sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"

	util "github.com/filecoin-project/specs/util"

	vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

	exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
)

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////
type State = StorageMinerActorState
type Any = util.Any
type Bool = util.Bool
type Bytes = util.Bytes
type InvocOutput = msg.InvocOutput
type Runtime = vmr.Runtime

var TODO = util.TODO

func (a *StorageMinerActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, State) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.Abort("IPLD lookup error")
	}
	state := DeserializeState(stateBytes.As_Bytes())
	return h, state
}
func Release(rt Runtime, h vmr.ActorStateHandle, st State) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st State) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *StorageMinerActorState_I) CID() ipld.CID {
	panic("TODO")
}
func DeserializeState(x Bytes) State {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

// TODO: placeholder epoch value -- this will be set later
const MAX_PROVE_COMMIT_SECTOR_EPOCH = block.ChainEpoch(3)

func (st *SectorTable_I) ActivePower() block.StoragePower {
	return block.StoragePower(st.ActiveSectors_ * util.UVarint(st.SectorSize_))
}

func (st *SectorTable_I) InactivePower() block.StoragePower {
	return block.StoragePower((st.CommittedSectors_ + st.RecoveringSectors_ + st.FailingSectors_) * util.UVarint(st.SectorSize_))
}

func (cs *ChallengeStatus_I) OnNewChallenge(currEpoch block.ChainEpoch) ChallengeStatus {
	cs.LastChallengeEpoch_ = currEpoch
	return cs
}

// Call by either SubmitPoSt or OnMissedPoSt
// TODO: verify this is correct and if we need to distinguish SubmitPoSt vs OnMissedPoSt
func (cs *ChallengeStatus_I) OnChallengeResponse(currEpoch block.ChainEpoch) ChallengeStatus {
	cs.LastChallengeEndEpoch_ = currEpoch
	return cs
}

func (cs *ChallengeStatus_I) IsChallenged() bool {
	// true (isChallenged) when LastChallengeEpoch is later than LastChallengeEndEpoch
	return cs.LastChallengeEpoch() > cs.LastChallengeEndEpoch()
}

func (st *StorageMinerActorState_I) _isChallenged(rt Runtime) bool {
	return st.ChallengeStatus().IsChallenged()
}

func (a *StorageMinerActorCode_I) _isChallenged(rt Runtime) bool {
	h, st := a.State(rt)
	ret := st._isChallenged(rt)
	Release(rt, h, st)
	return ret
}

// called by CronActor to notify StorageMiner of PoSt Challenge
func (a *StorageMinerActorCode_I) NotifyOfPoStChallenge(rt Runtime) InvocOutput {
	rt.ValidateCallerIs(addr.CronActorAddr)

	if a._isChallenged(rt) {
		return rt.SuccessReturn() // silent return, dont re-challenge
	}

	a._expirePreCommittedSectors(rt)

	h, st := a.State(rt)
	st.ChallengeStatus().Impl().LastChallengeEpoch_ = rt.CurrEpoch()
	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

func (st *StorageMinerActorState_I) _updateCommittedSectors(rt Runtime) {
	for sectorNo, sealOnChainInfo := range st.StagedCommittedSectors() {
		st.Sectors()[sectorNo] = sealOnChainInfo
		st.Impl().ProvingSet_.Add(sectorNo)
		st.SectorTable().Impl().CommittedSectors_ += 1
	}

	// empty StagedCommittedSectors
	st.Impl().StagedCommittedSectors_ = make(map[sector.SectorNumber]SectorOnChainInfo)
}

// construct FaultReport
// reset NewTerminatedFaults
func (a *StorageMinerActorCode_I) _submitFaultReport(
	rt Runtime,
	newDeclaredFaults util.UVarint,
	newDetectedFaults util.UVarint,
	newTerminatedFaults util.UVarint,
) {
	faultReport := &power.FaultReport_I{
		NewDeclaredFaults_:   newDeclaredFaults,
		NewDetectedFaults_:   newDetectedFaults,
		NewTerminatedFaults_: newTerminatedFaults,
	}

	rt.Abort("TODO") // TODO: Send(SPA, ProcessFaultReport(faultReport))
	panic(faultReport)

	h, st := a.State(rt)
	st.SectorTable().Impl().TerminationFaultCount_ = util.UVarint(0)
	UpdateRelease(rt, h, st)
}

// construct PowerReport from SectorTable
func (a *StorageMinerActorCode_I) _submitPowerReport(rt Runtime) {
	h, st := a.State(rt)
	powerReport := &power.PowerReport_I{
		ActivePower_:   st.SectorTable().ActivePower(),
		InactivePower_: st.SectorTable().InactivePower(),
	}
	Release(rt, h, st)

	rt.Abort("TODO") // TODO: Send(SPA, ProcessPowerReport(powerReport))
	panic(powerReport)
}

func (a *StorageMinerActorCode_I) _onMissedPoSt(rt Runtime) {
	h, st := a.State(rt)

	failingSectorNumbers := getSectorNums(st.Sectors())
	for _, sectorNo := range failingSectorNumbers {
		st._updateFailSector(rt, sectorNo, true)
	}
	st._updateExpireSectors(rt)
	UpdateRelease(rt, h, st)

	h, st = a.State(rt)
	newDetectedFaults := st.SectorTable().FailingSectors()
	newTerminatedFaults := st.SectorTable().TerminationFaultCount()
	Release(rt, h, st)

	// Note: NewDetectedFaults is now the sum of all
	// previously active, committed, and recovering sectors minus expired ones
	// and any previously Failing sectors that did not exceed MaxFaultCount
	// Note: previously declared faults is now treated as part of detected faults
	a._submitFaultReport(
		rt,
		util.UVarint(0), // NewDeclaredFaults
		newDetectedFaults,
		newTerminatedFaults,
	)

	a._submitPowerReport(rt)

	// end of challenge
	h, st = a.State(rt)
	st.ChallengeStatus().Impl().OnChallengeResponse(rt.CurrEpoch())
	st._updateCommittedSectors(rt)
	UpdateRelease(rt, h, st)
}

// If a PoSt is missed (either due to faults not being declared on time or
// because the miner ran out of time), every sector is reported as failing
// for the current proving period.
func (a *StorageMinerActorCode_I) CheckPoStSubmissionHappened(rt Runtime) InvocOutput {
	TODO() // TODO: validate caller

	if !a._isChallenged(rt) {
		// Miner gets out of a challenge when it submits a successful PoSt
		// or when detected by CronActor. Hence, not being challenged means that we are good here
		return rt.SuccessReturn()
	}

	a._expirePreCommittedSectors(rt)

	// oh no -- we missed it. rekt
	a._onMissedPoSt(rt)

	return rt.SuccessReturn()
}

func (a *StorageMinerActorCode_I) _verifyPoStSubmission(rt Runtime, postSubmission poster.PoStSubmission) bool {
	// 1. A proof must be submitted after the postRandomness for this proving
	// period is on chain
	// if rt.ChainEpoch < sm.ProvingPeriodEnd - challengeTime {
	//   rt.Abort("too early")
	// }

	// 2. A proof must be a valid snark proof with the correct public inputs
	// 2.1 Get randomness from the chain at the right epoch
	// postRandomness := rt.Randomness(postSubmission.Epoch, 0)
	// 2.2 Generate the set of challenges
	// challenges := GenerateChallengesForPoSt(r, keys(sm.Sectors))
	// 2.3 Verify the PoSt Proof
	// verifyPoSt(challenges, TODO)

	rt.Abort("TODO") // TODO: finish
	return false
}

func (a *StorageMinerActorCode_I) _expirePreCommittedSectors(rt Runtime) {

	h, st := a.State(rt)
	for _, preCommitSector := range st.PreCommittedSectors() {

		elapsedEpoch := rt.CurrEpoch() - preCommitSector.ReceivedEpoch()
		if elapsedEpoch > MAX_PROVE_COMMIT_SECTOR_EPOCH {
			delete(st.PreCommittedSectors(), preCommitSector.Info().SectorNumber())
			// TODO: potentially some slashing if ProveCommitSector comes late
		}
	}
	UpdateRelease(rt, h, st)

}

// move Sector from Active/Failing
// into Cleared State which means deleting the Sector from state
// remove SectorNumber from all states on chain
// update SectorTable
func (st *StorageMinerActorState_I) _updateClearSector(rt Runtime, sectorNo sector.SectorNumber) {
	sectorState := st.Sectors()[sectorNo].State()
	switch sectorState.StateNumber {
	case SectorActiveSN:
		// expiration case
		st.SectorTable().Impl().ActiveSectors_ -= 1
	case SectorFailingSN:
		// expiration and termination cases
		st.SectorTable().Impl().FailingSectors_ -= 1
	default:
		// Committed and Recovering should not go to Cleared directly
		rt.Abort("invalid state in clearSector")
		// TODO: determine proper error here and error-handling machinery
	}

	delete(st.Sectors(), sectorNo)
	st.ProvingSet_.Remove(sectorNo)
	st.SectorExpirationQueue().Remove(sectorNo)
}

// move Sector from Committed/Recovering into Active State
// reset FaultCount to zero
// update SectorTable
func (st *StorageMinerActorState_I) _updateActivateSector(rt Runtime, sectorNo sector.SectorNumber) {
	sectorState := st.Sectors()[sectorNo].State()
	switch sectorState.StateNumber {
	case SectorCommittedSN:
		st.SectorTable().Impl().CommittedSectors_ -= 1
	case SectorRecoveringSN:
		st.SectorTable().Impl().RecoveringSectors_ -= 1
	default:
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("invalid state in activateSector")
	}

	st.Sectors()[sectorNo].Impl().State_ = SectorActive()
	st.SectorTable().Impl().ActiveSectors_ += 1
}

// failSector moves Sector from Active/Committed/Recovering into Failing State
// and increments FaultCount if asked to do so (DeclareFaults does not increment faultCount)
// move Sector from Failing to Cleared State if increment results in faultCount exceeds MaxFaultCount
// update SectorTable
// remove from ProvingSet
func (st *StorageMinerActorState_I) _updateFailSector(rt Runtime, sectorNo sector.SectorNumber, increment bool) {
	newFaultCount := st.Sectors()[sectorNo].State().FaultCount

	if increment {
		newFaultCount += 1
	}

	state := st.Sectors()[sectorNo].State()
	switch state.StateNumber {
	case SectorActiveSN:
		// won't be terminated from Active
		st.SectorTable().Impl().ActiveSectors_ -= 1
		st.SectorTable().Impl().FailingSectors_ += 1
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorCommittedSN:
		st.SectorTable().Impl().CommittedSectors_ -= 1
		st.SectorTable().Impl().FailingSectors_ += 1
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorRecoveringSN:
		st.SectorTable().Impl().RecoveringSectors_ -= 1
		st.SectorTable().Impl().FailingSectors_ += 1
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorFailingSN:
		// no change to SectorTable but increase in FaultCount
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	default:
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("Invalid sector state in CronAction")
	}

	if newFaultCount > MAX_CONSECUTIVE_FAULTS {
		// TODO: heavy penalization: slash pledge collateral and delete sector
		// TODO: SendMessage(SPA.SlashPledgeCollateral)

		st._updateClearSector(rt, sectorNo)
		st.SectorTable().Impl().TerminationFaultCount_ += 1
	}
}

// The current decision is to account for power based on sectors
// with at least one active deal; deals cannot be updated.
// An alternative proposal is to account for power based on active deals.
// An improvement proposal is to allow storage deal updates within a sector.

// TODO: decide whether declared faults sectors should be
// penalized in the same way as undeclared sectors and how

// SubmitPoSt Workflow:
// - Verify PoSt Submission
// - Process ProvingSet.SectorsOn()
//   - State Transitions
//     - Committed -> Active and credit power
//     - Recovering -> Active and credit power
//   - Process Active Sectors (pay miners)
// - Process ProvingSet.SectorsOff()
//     - increment FaultCount
//     - clear Sector and slash pledge collateral if count > MAX_CONSECUTIVE_FAULTS
// - Process Expired Sectors (settle deals and return storage collateral to miners)
//     - State Transition
//       - Failing / Recovering / Active / Committed -> Cleared
//     - Remove SectorNumber from Sectors, ProvingSet
func (a *StorageMinerActorCode_I) SubmitPoSt(rt Runtime, postSubmission poster.PoStSubmission) InvocOutput {
	TODO() // TODO: validate caller

	if !a._isChallenged(rt) {
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("cannot SubmitPoSt when not challenged")
	}

	// Verify correct PoSt Submission
	isPoStVerified := a._verifyPoStSubmission(rt, postSubmission)
	if !isPoStVerified {
		// no state transition, just error out and miner should submitPoSt again
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("TODO")
	}

	h, st := a.State(rt)

	// The proof is verified, process ProvingSet.SectorsOn():
	// ProvingSet.SectorsOn() contains SectorCommitted, SectorActive, SectorRecovering
	// ProvingSet itself does not store states, states are all stored in Sectors.State
	for _, sectorNo := range st.Impl().ProvingSet_.SectorsOn() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Sector state not found in map")
		}
		switch sectorState.State().StateNumber {
		case SectorCommittedSN, SectorRecoveringSN:
			st._updateActivateSector(rt, sectorNo)
		case SectorActiveSN:
			// Process payment in all active deals
			// Note: this must happen before marking sectors as expired.
			// TODO: Pay miner in a single batch message
			// SendMessage(sma.ProcessStorageDealsPayment(sm.Sectors()[sectorNumber].DealIDs()))
		default:
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Invalid sector state in ProvingSet.SectorsOn()")
		}
	}

	// commit state change so that committed and recovering are now active

	// Process ProvingSet.SectorsOff()
	// ProvingSet.SectorsOff() contains SectorFailing
	// SectorRecovering is Proving and hence will not be in GetZeros()
	// heavy penalty if Failing for more than or equal to MAX_CONSECUTIVE_FAULTS
	// otherwise increment FaultCount in Sectors().State
	for _, sectorNo := range st.Impl().ProvingSet_.SectorsOff() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			continue
		}
		switch sectorState.State().StateNumber {
		case SectorFailingSN:
			st._updateFailSector(rt, sectorNo, true)
		default:
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Invalid sector state in ProvingSet.SectorsOff")
		}
	}

	// Process Expiration.
	st._updateExpireSectors(rt)

	UpdateRelease(rt, h, st)

	h, st = a.State(rt)
	terminationFaultCount := st.SectorTable().Impl().TerminationFaultCount_
	Release(rt, h, st)

	a._submitFaultReport(
		rt,
		util.UVarint(0), // NewDeclaredFaults
		util.UVarint(0), // NewDetectedFaults
		util.UVarint(terminationFaultCount),
	)

	a._submitPowerReport(rt)

	// TODO: check EnsurePledgeCollateralSatisfied
	// pledgeCollateralSatisfied

	// Reset Proving Period and report power updates
	// sm.ProvingPeriodEnd_ = PROVING_PERIOD_TIME

	h, st = a.State(rt)
	st.ChallengeStatus().Impl().OnChallengeResponse(rt.CurrEpoch())
	st._updateCommittedSectors(rt)
	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

func (st *StorageMinerActorState_I) _updateExpireSectors(rt Runtime) {
	currEpoch := rt.CurrEpoch()

	queue := st.SectorExpirationQueue()
	for queue.Peek().Expiration() <= currEpoch {
		expiredSectorNo := queue.Pop().SectorNumber()

		state := st.Sectors()[expiredSectorNo].State()
		// sc := sm.Sectors()[expiredSectorNo]
		switch state.StateNumber {
		case SectorActiveSN:
			// Note: in order to verify if something was stored in the past, one must
			// scan the chain. SectorNumber can be re-used.

			// Settle deals
			// SendMessage(sma.SettleExpiredDeals(sc.DealIDs()))
			st._updateClearSector(rt, expiredSectorNo)
		case SectorFailingSN:
			// TODO: check if there is any fault that we should handle here
			// If a SectorFailing Expires, return remaining StorageDealCollateral and remove sector
			// SendMessage(sma.SettleExpiredDeals(sc.DealIDs()))

			// a failing sector expires, no change to FaultCount
			st._updateClearSector(rt, expiredSectorNo)
		default:
			// Note: SectorCommittedSN, SectorRecoveringSN transition first to SectorFailingSN, then expire
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Invalid sector state in SectorExpirationQueue")
		}
	}

	// Return PledgeCollateral for active expirations
	// SendMessage(spa.Depledge) // TODO
	rt.Abort("TODO: refactor use of this method in order for caller to send this message")
}

// RecoverFaults checks if miners have sufficient collateral
// and adds SectorFailing into SectorRecovering
// - State Transition
//   - Failing -> Recovering with the same FaultCount
// - Add SectorNumber to ProvingSet
// Note that power is not updated until it is active
func (a *StorageMinerActorCode_I) RecoverFaults(rt Runtime, recoveringSet sector.CompactSectorSet) InvocOutput {
	TODO() // TODO: validate caller

	if a._isChallenged(rt) {
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("cannot RecoverFaults when sm isChallenged")
	}

	h, st := a.State(rt)

	// for all SectorNumber marked as recovering by recoveringSet
	for _, sectorNo := range recoveringSet.SectorsOn() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Sector state not found in map")
		}
		switch sectorState.State().StateNumber {
		case SectorFailingSN:
			// Check if miners have sufficient balances in sma

			// SendMessage(sma.PublishStorageDeals) or sma.ResumeStorageDeals?
			// throw if miner cannot cover StorageDealCollateral

			// Check if miners have sufficient pledgeCollateral

			// copy over the same FaultCount
			st.Sectors()[sectorNo].Impl().State_ = SectorRecovering(sectorState.State().FaultCount)
			st.Impl().ProvingSet_.Add(sectorNo)

			st.SectorTable().Impl().FailingSectors_ -= 1
			st.SectorTable().Impl().RecoveringSectors_ += 1

		default:
			// TODO: determine proper error here and error-handling machinery
			// TODO: consider this a no-op (as opposed to a failure), because this is a user
			// call that may be delayed by the chain beyond some other state transition.
			rt.Abort("Invalid sector state in RecoverFaults")
		}
	}

	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

// DeclareFaults penalizes miners (slashStorageDealCollateral and remove power)
// TODO: decide how much storage collateral to slash
// - State Transition
//   - Active / Committed / Recovering -> Failing
// - Update State in Sectors()
// - Remove Active / Committed / Recovering from ProvingSet
func (a *StorageMinerActorCode_I) DeclareFaults(rt Runtime, faultSet sector.CompactSectorSet) InvocOutput {
	TODO() // TODO: validate caller

	if a._isChallenged(rt) {
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("cannot DeclareFaults when challenged")
	}

	h, st := a.State(rt)

	// fail all SectorNumber marked as Failing by faultSet
	for _, sectorNo := range faultSet.SectorsOn() {
		st._updateFailSector(rt, sectorNo, false)
	}
	declaredFaults := len(faultSet.SectorsOn())

	UpdateRelease(rt, h, st)

	a._submitFaultReport(
		rt,
		util.UVarint(declaredFaults), // DeclaredFaults
		util.UVarint(0),              // DetectedFaults
		util.UVarint(0),              // TerminatedFault
	)

	a._submitPowerReport(rt)

	return rt.SuccessReturn()
}

func (st *StorageMinerActorState_I) _isSealVerificationCorrect(rt Runtime, onChainInfo sector.OnChainSealVerifyInfo) bool {
	// TODO: verify seal @nicola
	// TODO: st.verifySeal(sectorID SectorID, comm sector.OnChainSealVerifyInfo, proof SealProof)

	// verifySeal will also generate CommD on the fly from CommP and PieceSize

	var pieceInfos []sector.PieceInfo // = make([]sector.PieceInfo, 0)

	for dealId := range onChainInfo.DealIDs() {
		// FIXME: Actually get the deal info from the storage market actor and use it to create a sector.PieceInfo.
		_ = dealId

		pieceInfos = append(pieceInfos, nil)
	}

	new(proving.StorageProvingSubsystem_I).VerifySeal(&sector.SealVerifyInfo_I{
		SectorID_: &sector.SectorID_I{
			MinerID_: st.Info().Worker(), // TODO: This is actually miner address. MinerID needs to be derived.
			Number_:  onChainInfo.SectorNumber(),
		},
		OnChain_: onChainInfo,

		// TODO: Make SealCfg sector.SealCfg from miner configuration (where is that?)
		SealCfg_: &sector.SealCfg_I{
			SectorSize_:     st.Info().SectorSize(),
			SubsectorCount_: st.Info().SubsectorCount(),
			Partitions_:     st.Info().Partitions(),
		},

		// TODO: get Randomness sector.SealRandomness using onChainInfo.Epoch
		//Randomness_:
		// TODO: get InteractiveRandomness sector.SealRandomness using onChainInfo.InteractiveEpoch
		//InteractiveRandomness_:
		PieceInfos_: pieceInfos,
	})
	return false // TODO: finish
}

func (st *StorageMinerActorState_I) _sectorExists(sectorNo sector.SectorNumber) bool {
	_, found := st.Sectors()[sectorNo]
	return found
}

// Deals must be posted on chain via sma.PublishStorageDeals before PreCommitSector
// TODO(optimization): PreCommitSector could contain a list of deals that are not published yet.
func (a *StorageMinerActorCode_I) PreCommitSector(rt Runtime, info sector.SectorPreCommitInfo) InvocOutput {
	TODO() // TODO: validate caller

	// no checks needed
	// can be called regardless of Challenged status

	// TODO: might record CurrEpoch for PreCommitSector expiration
	// in other words, a ProveCommitSector must be on chain X Epoch after a PreCommitSector goes on chain
	// TODO: might take collateral in case no ProveCommit follows within sometime
	// TODO: collateral also penalizes repeated precommit to get randomness that one likes
	// TODO: might be a good place for Treasury

	h, st := a.State(rt)

	_, found := st.PreCommittedSectors()[info.SectorNumber()]

	if found {
		// TODO: burn some funds?
		rt.Abort("Sector already pre committed.")
	}

	sectorExists := st._sectorExists(info.SectorNumber())
	if sectorExists {
		rt.Abort("Sector already exists.")
	}

	// TODO: verify every DealID has been published and not yet expired

	precommittedSector := &PreCommittedSector_I{
		Info_:          info,
		ReceivedEpoch_: rt.CurrEpoch(),
	}
	st.PreCommittedSectors()[info.SectorNumber()] = precommittedSector

	UpdateRelease(rt, h, st)
	return rt.SuccessReturn()
}

func (a *StorageMinerActorCode_I) ProveCommitSector(rt Runtime, info sector.SectorProveCommitInfo) InvocOutput {
	TODO() // TODO: validate caller

	h, st := a.State(rt)

	preCommitSector, found := st.PreCommittedSectors()[info.SectorNumber()]

	if !found {
		rt.Abort("Sector not pre committed.")
	}

	sectorExists := st._sectorExists(info.SectorNumber())

	if sectorExists {
		rt.Abort("Sector already exists.")
	}

	// check if ProveCommitSector comes too late after PreCommitSector
	elapsedEpoch := rt.CurrEpoch() - preCommitSector.ReceivedEpoch()

	// if more than MAX_PROVE_COMMIT_SECTOR_EPOCH has elapsed
	if elapsedEpoch > MAX_PROVE_COMMIT_SECTOR_EPOCH {
		// TODO: potentially some slashing if ProveCommitSector comes late

		// expired
		delete(st.PreCommittedSectors(), preCommitSector.Info().SectorNumber())
		UpdateRelease(rt, h, st)
		return rt.ErrorReturn(exitcode.UserDefinedError(0)) // TODO: user dfined error code?
	}

	onChainInfo := &sector.OnChainSealVerifyInfo_I{
		SealedCID_:        preCommitSector.Info().SealedCID(),
		SealEpoch_:        preCommitSector.Info().SealEpoch(),
		InteractiveEpoch_: info.InteractiveEpoch(),
		Proof_:            info.Proof(),
		DealIDs_:          preCommitSector.Info().DealIDs(),
		SectorNumber_:     preCommitSector.Info().SectorNumber(),
	}

	isSealVerificationCorrect := st._isSealVerificationCorrect(rt, onChainInfo)
	if !isSealVerificationCorrect {
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("Seal verification failed")
	}

	// TODO: check EnsurePledgeCollateralSatisfied
	// pledgeCollateralSatisfied

	// determine lastDealExpiration from sma
	// TODO: proper onchain transaction
	// lastDealExpiration := SendMessage(sma, GetLastDealExpirationFromDealIDs(onChainInfo.DealIDs()))
	var lastDealExpiration block.ChainEpoch

	// Note: in the current iteration, a Sector expires only when all storage deals in it have expired.
	// This is likely to change but it aims to meet user requirement that users can enter into deals of any size.
	// add sector expiration to SectorExpirationQueue
	st.SectorExpirationQueue().Add(&SectorExpirationQueueItem_I{
		SectorNumber_: onChainInfo.SectorNumber(),
		Expiration_:   lastDealExpiration,
	})

	// no need to store the proof and randomseed in the state tree
	// verify and drop, only SealCommitment{CommR, DealIDs} on chain
	sealCommitment := &sector.SealCommitment_I{
		SealedCID_:  onChainInfo.SealedCID(),
		DealIDs_:    onChainInfo.DealIDs(),
		Expiration_: lastDealExpiration, // TODO decide if we need this too
	}

	// add SectorNumber and SealCommitment to Sectors
	// set Sectors.State to SectorCommitted
	// Note that SectorNumber will only become Active at the next successful PoSt
	sealOnChainInfo := &SectorOnChainInfo_I{
		SealCommitment_: sealCommitment,
		State_:          SectorCommitted(),
	}

	if st._isChallenged(rt) {
		// move PreCommittedSector to StagedCommittedSectors if in Challenged status
		st.StagedCommittedSectors()[onChainInfo.SectorNumber()] = sealOnChainInfo
	} else {
		// move PreCommittedSector to CommittedSectors if not in Challenged status
		st.Sectors()[onChainInfo.SectorNumber()] = sealOnChainInfo
		st.Impl().ProvingSet_.Add(onChainInfo.SectorNumber())
		st.SectorTable().Impl().CommittedSectors_ += 1
	}

	// now remove SectorNumber from PreCommittedSectors (processed)
	delete(st.PreCommittedSectors(), preCommitSector.Info().SectorNumber())
	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

func getSectorNums(m map[sector.SectorNumber]SectorOnChainInfo) []sector.SectorNumber {
	var l []sector.SectorNumber
	for i, _ := range m {
		l = append(l, i)
	}
	return l
}

Owner Worker distinction

The miner actor has two distinct ‘controller’ addresses. One is the worker, which is the address which will be responsible for doing all of the work, submitting proofs, committing new sectors, and all other day to day activities. The owner address is the address that created the miner, paid the collateral, and has block rewards paid out to it. The reason for the distinction is to allow different parties to fulfil the different roles. One example would be for the owner to be a multisig wallet, or a cold storage key, and the worker key to be a ‘hot wallet’ key.

Storage Mining Cycle

Storage miners must continually produce Proofs of SpaceTime over their storage to convince the network that they are actually storing the sectors that they have committed to. Each PoSt covers a miner’s entire storage.

Step 0: Registration

To initially become a miner, a miner first register a new miner actor on-chain. This is done through the storage power actor’s CreateStorageMiner method. The call will then create a new miner actor instance and return its address.

The next step is to place one or more storage market asks on the market. This is done off-chain as part of storagee market functions. A miner may create a single ask for their entire storage, or partition their storage up in some way with multiple asks (at potentially different prices).

After that, they need to make deals with clients and begin filling up sectors with data. For more information on making deals, see the Storage Market.

When they have a full sector, they should seal it. This is done by invoking the Sector Sealer.

Changing Worker Addresses

Note that any change to worker keys after registration (TODO: spec how this works) must be appropriately delayed in relation to randomness lookback for SEALing data (see this issue).

Step 1: Commit

When the miner has completed their first seal, they should post it on-chain using the Storage Miner Actor’s ProveCommitSector function. If the miner had zero committed sectors prior to this call, this begins their proving period.

The proving period is a fixed amount of time in which the miner must submit a Proof of Space Time to the network.

During this period, the miner may also commit to new sectors, but they will not be included in proofs of space time until the next proving period starts. For example, if a miner currently PoSts for 10 sectors, and commits to 20 more sectors. The next PoSt they submit (i.e. the one they’re currently proving) will be for 10 sectors again, the subsequent one will be for 30.

TODO: sectors need to be globally unique. This can be done either by having the seal proof prove the sector is unique to this miner in some way, or by having a giant global map on-chain is checked against on each submission. As the system moves towards sector aggregation, the latter option will become unworkable, so more thought needs to go into how that proof statement could work.

Step 2: Proving Storage (PoSt creation)
func ProveStorage(sectorSize BytesAmount, sectors []commR) PoStProof {
    challengeBlockHeight := miner.ProvingPeriodEnd - POST_CHALLENGE_TIME

    // Faults to be used are the currentFaultSet for the miner.
    faults := miner.currentFaultSet
    seed := GetRandFromBlock(challengeBlockHeight)
    return GeneratePoSt(sectorSize, sectors, seed, faults)
}

Note: See ‘Proof of Space Time’ for more details.

The proving set remains consistent during the proving period. Any sectors added in the meantime will be included in the next proving set, at the beginning of the next proving period.

Step 3: PoSt Submission

When the miner has completed their PoSt, they must submit it to the network by calling SubmitPoSt. There are two different times that this could be done.

  1. Standard Submission: A standard submission is one that makes it on-chain before the end of the proving period. The length of time it takes to compute the PoSts is set such that there is a grace period between then and the actual end of the proving period, so that the effects of network congestion on typical miner actions is minimized.
  2. Penalized Submission: A penalized submission is one that makes it on-chain after the end of the proving period, but before the generation attack threshold. These submissions count as valid PoSt submissions, but the miner must pay a penalty for their late submission. (See ‘Faults’ for more information)
    • Note: In this case, the next PoSt should still be started at the beginning of the proving period, even if the current one is not yet complete. Miners must submit one PoSt per proving period.

Along with the PoSt submission, miners may also submit a set of sectors that they wish to remove from their proving set. This is done by selecting the sectors in the ‘done’ bitfield passed to SubmitPoSt.

Stop Mining

In order to stop mining, a miner must complete all of its storage contracts, and remove them from their proving set during a PoSt submission. A miner may then call DePledge() to retrieve their collateral. DePledge must be called twice, once to start the cooldown, and once again after the cooldown to reclaim the funds. The cooldown period is to allow clients whose files have been dropped by a miner to slash them before they get their money back and get away with it.

Faults

Faults are described in the faults document.

On Being Slashed (WIP, needs discussion)

If a miner is slashed for failing to submit their PoSt on time, they currently lose all their pledge collateral. They do not necessarily lose their storage collateral. Storage collateral is lost when a miner’s clients slash them for no longer having the data. Missing a PoSt does not necessarily imply that a miner no longer has the data. There should be an additional timeout here where the miner can submit a PoSt, along with ‘refilling’ their pledge collateral. If a miner does this, they can continue mining, their mining power will be reinstated, and clients can be assured that their data is still there.

TODO: disambiguate the two collaterals across the entire spec

Review Discussion Note: Taking all of a miners collateral for going over the deadline for PoSt submission is really really painful, and is likely to dissuade people from even mining filecoin in the first place (If my internet going out could cause me to lose a very large amount of money, that leads to some pretty hard decisions around profitability). One potential strategy could be to only penalize miners for the amount of sectors they could have generated in that timeframe.

Future Work

There are many ideas for improving upon the storage miner, here are ideas that may be potentially implemented in the future.

  • Sector Resealing: Miners should be able to ’re-seal’ sectors, to allow them to take a set of sectors with mostly expired pieces, and combine the not-yet-expired pieces into a single (or multiple) sectors.
  • Sector Transfer: Miners should be able to re-delegate the responsibility of storing data to another miner. This is tricky for many reasons, and will not be implemented in the initial release of Filecoin, but could provide interesting capabilities down the road.

Storage Miner Actor

(You can see the old Storage Miner Actor here )

StorageMinerActor interface
import libp2p "github.com/filecoin-project/specs/libraries/libp2p"
import sealing "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import address "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import poster "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/poster"

type Seed struct {}
type SectorCommitment struct {}

type SectorExpirationQueueItem struct {
    SectorNumber  sector.SectorNumber
    Expiration    block.ChainEpoch
}

type SectorExpirationQueue struct {
    Add(i SectorExpirationQueueItem)
    Pop() SectorExpirationQueueItem
    Peek() SectorExpirationQueueItem
    Remove(n sector.SectorNumber)
}

type SectorTable struct {
    SectorSize             sector.SectorSize
    ActiveSectors          util.UVarint
    CommittedSectors       util.UVarint
    RecoveringSectors      util.UVarint
    FailingSectors         util.UVarint

    ActivePower()          block.StoragePower
    InactivePower()        block.StoragePower

    TerminationFaultCount  util.UVarint  // transient State that get reset on every constructPowerReport
}

type SectorOnChainInfo struct {
    SealCommitment  sector.SealCommitment
    State           SectorState
}

type ChallengeStatus struct {
    LastChallengeEpoch     block.ChainEpoch  // get updated by NotifyOfPoStChallenge
    LastChallengeEndEpoch  block.ChainEpoch  // get updated upon successful submitPoSt

    IsChallenged()         bool
}

type PreCommittedSector struct {
    Info           sealing.SectorPreCommitInfo
    ReceivedEpoch  block.ChainEpoch
}

type PreCommittedSectorsAMT {sector.SectorNumber: PreCommittedSector}
type SectorsAMT {sector.SectorNumber: SectorOnChainInfo}
type StagedCommittedSectorAMT {sector.SectorNumber: SectorOnChainInfo}

type StorageMinerActorState struct {
    // CollateralVault CollateralVault

    PreCommittedSectors        PreCommittedSectorsAMT
    Sectors                    SectorsAMT
    StagedCommittedSectors     StagedCommittedSectorAMT

    // ProvingSet get copied over to NextProvingSet on PoSt challenge and CheckPoStSubmissionHappened
    // successful SubmitPoSt will perform changes to NextProvingSet
    // and update ProvingSet with NextProvingSet at the end
    // No DeclareFaults and CommitSector can happen when SM is in the isChallenged state
    ProvingSet                 sector.CompactSectorSet

    SectorTable
    SectorExpirationQueue
    ChallengeStatus

    // contains mostly static info about this miner
    Info                       &MinerInfo

    // TODO ProvingPeriodEnd   Epoch

    _isChallenged(rt Runtime)  bool
    _isSealVerificationCorrect(rt Runtime, onChainInfo sector.OnChainSealVerifyInfo) bool
    _sectorExists(sectorNo sector.SectorNumber) bool

    _updateFailSector(
        rt                   Runtime
        sectorNo             sector.SectorNumber
        incrementFaultCount  bool
    )
    _updateExpireSectors(rt Runtime)
    _updateCommittedSectors(rt Runtime)
    _updateClearSector(rt Runtime, sectorNo sector.SectorNumber)
    _updateActivateSector(rt Runtime, sectorNo sector.SectorNumber)
}

type StorageMinerActorCode struct {
    NotifyOfPoStChallenge(rt Runtime)

    PreCommitSector(rt Runtime, info sector.SectorPreCommitInfo)  // TODO: check with Magik on sizes
    ProveCommitSector(rt Runtime, info sector.SectorProveCommitInfo)

    SubmitPoSt(rt Runtime, postSubmission poster.PoStSubmission)

    CheckPoStSubmissionHappened(rt Runtime)

    // TODO: should depledge be in here or in storage market actor?

    DeclareFaults(rt Runtime, failingSet sector.CompactSectorSet)

    RecoverFaults(rt Runtime, recoveringSet sector.CompactSectorSet)

    _onMissedPoSt(rt Runtime)

    _submitFaultReport(
        rt          Runtime
        declared    util.UVarint
        detected    util.UVarint
        terminated  util.UVarint
    )
    _submitPowerReport(rt Runtime)

    _verifyPoStSubmission(rt Runtime, postSubmission poster.PoStSubmission) bool

    // _computeProvingPeriodEndSectorState(rt Runtime)  // TODO

    _expirePreCommittedSectors(rt Runtime)
}

type MinerInfo struct {
    // Account that owns this miner.
    // - Income and returned collateral are paid to this address.
    // - This address is also allowed to change the worker address for the miner.
    Owner           address.Address

    // Worker account for this miner.
    // This will be the key that is used to sign blocks created by this miner, and
    // sign messages sent on behalf of this miner to commit sectors, submit PoSts, and
    // other day to day miner activities.
    Worker          address.Address

    // Libp2p identity that should be used when connecting to this miner.
    PeerId          libp2p.PeerID

    // Amount of space in each sector committed to the network by this miner.
    SectorSize      util.BytesAmount
    SubsectorCount  UVarint
    Partitions      UVarint
}
StorageMinerActor implementation
package storage_mining

import (
	actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
	addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

	block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

	ipld "github.com/filecoin-project/specs/libraries/ipld"

	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

	poster "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/poster"

	power "github.com/filecoin-project/specs/systems/filecoin_blockchain/storage_power_consensus"

	proving "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving"

	sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"

	util "github.com/filecoin-project/specs/util"

	vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"

	exitcode "github.com/filecoin-project/specs/systems/filecoin_vm/runtime/exitcode"
)

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////
type State = StorageMinerActorState
type Any = util.Any
type Bool = util.Bool
type Bytes = util.Bytes
type InvocOutput = msg.InvocOutput
type Runtime = vmr.Runtime

var TODO = util.TODO

func (a *StorageMinerActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, State) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.Abort("IPLD lookup error")
	}
	state := DeserializeState(stateBytes.As_Bytes())
	return h, state
}
func Release(rt Runtime, h vmr.ActorStateHandle, st State) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st State) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *StorageMinerActorState_I) CID() ipld.CID {
	panic("TODO")
}
func DeserializeState(x Bytes) State {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

// TODO: placeholder epoch value -- this will be set later
const MAX_PROVE_COMMIT_SECTOR_EPOCH = block.ChainEpoch(3)

func (st *SectorTable_I) ActivePower() block.StoragePower {
	return block.StoragePower(st.ActiveSectors_ * util.UVarint(st.SectorSize_))
}

func (st *SectorTable_I) InactivePower() block.StoragePower {
	return block.StoragePower((st.CommittedSectors_ + st.RecoveringSectors_ + st.FailingSectors_) * util.UVarint(st.SectorSize_))
}

func (cs *ChallengeStatus_I) OnNewChallenge(currEpoch block.ChainEpoch) ChallengeStatus {
	cs.LastChallengeEpoch_ = currEpoch
	return cs
}

// Call by either SubmitPoSt or OnMissedPoSt
// TODO: verify this is correct and if we need to distinguish SubmitPoSt vs OnMissedPoSt
func (cs *ChallengeStatus_I) OnChallengeResponse(currEpoch block.ChainEpoch) ChallengeStatus {
	cs.LastChallengeEndEpoch_ = currEpoch
	return cs
}

func (cs *ChallengeStatus_I) IsChallenged() bool {
	// true (isChallenged) when LastChallengeEpoch is later than LastChallengeEndEpoch
	return cs.LastChallengeEpoch() > cs.LastChallengeEndEpoch()
}

func (st *StorageMinerActorState_I) _isChallenged(rt Runtime) bool {
	return st.ChallengeStatus().IsChallenged()
}

func (a *StorageMinerActorCode_I) _isChallenged(rt Runtime) bool {
	h, st := a.State(rt)
	ret := st._isChallenged(rt)
	Release(rt, h, st)
	return ret
}

// called by CronActor to notify StorageMiner of PoSt Challenge
func (a *StorageMinerActorCode_I) NotifyOfPoStChallenge(rt Runtime) InvocOutput {
	rt.ValidateCallerIs(addr.CronActorAddr)

	if a._isChallenged(rt) {
		return rt.SuccessReturn() // silent return, dont re-challenge
	}

	a._expirePreCommittedSectors(rt)

	h, st := a.State(rt)
	st.ChallengeStatus().Impl().LastChallengeEpoch_ = rt.CurrEpoch()
	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

func (st *StorageMinerActorState_I) _updateCommittedSectors(rt Runtime) {
	for sectorNo, sealOnChainInfo := range st.StagedCommittedSectors() {
		st.Sectors()[sectorNo] = sealOnChainInfo
		st.Impl().ProvingSet_.Add(sectorNo)
		st.SectorTable().Impl().CommittedSectors_ += 1
	}

	// empty StagedCommittedSectors
	st.Impl().StagedCommittedSectors_ = make(map[sector.SectorNumber]SectorOnChainInfo)
}

// construct FaultReport
// reset NewTerminatedFaults
func (a *StorageMinerActorCode_I) _submitFaultReport(
	rt Runtime,
	newDeclaredFaults util.UVarint,
	newDetectedFaults util.UVarint,
	newTerminatedFaults util.UVarint,
) {
	faultReport := &power.FaultReport_I{
		NewDeclaredFaults_:   newDeclaredFaults,
		NewDetectedFaults_:   newDetectedFaults,
		NewTerminatedFaults_: newTerminatedFaults,
	}

	rt.Abort("TODO") // TODO: Send(SPA, ProcessFaultReport(faultReport))
	panic(faultReport)

	h, st := a.State(rt)
	st.SectorTable().Impl().TerminationFaultCount_ = util.UVarint(0)
	UpdateRelease(rt, h, st)
}

// construct PowerReport from SectorTable
func (a *StorageMinerActorCode_I) _submitPowerReport(rt Runtime) {
	h, st := a.State(rt)
	powerReport := &power.PowerReport_I{
		ActivePower_:   st.SectorTable().ActivePower(),
		InactivePower_: st.SectorTable().InactivePower(),
	}
	Release(rt, h, st)

	rt.Abort("TODO") // TODO: Send(SPA, ProcessPowerReport(powerReport))
	panic(powerReport)
}

func (a *StorageMinerActorCode_I) _onMissedPoSt(rt Runtime) {
	h, st := a.State(rt)

	failingSectorNumbers := getSectorNums(st.Sectors())
	for _, sectorNo := range failingSectorNumbers {
		st._updateFailSector(rt, sectorNo, true)
	}
	st._updateExpireSectors(rt)
	UpdateRelease(rt, h, st)

	h, st = a.State(rt)
	newDetectedFaults := st.SectorTable().FailingSectors()
	newTerminatedFaults := st.SectorTable().TerminationFaultCount()
	Release(rt, h, st)

	// Note: NewDetectedFaults is now the sum of all
	// previously active, committed, and recovering sectors minus expired ones
	// and any previously Failing sectors that did not exceed MaxFaultCount
	// Note: previously declared faults is now treated as part of detected faults
	a._submitFaultReport(
		rt,
		util.UVarint(0), // NewDeclaredFaults
		newDetectedFaults,
		newTerminatedFaults,
	)

	a._submitPowerReport(rt)

	// end of challenge
	h, st = a.State(rt)
	st.ChallengeStatus().Impl().OnChallengeResponse(rt.CurrEpoch())
	st._updateCommittedSectors(rt)
	UpdateRelease(rt, h, st)
}

// If a Post is missed (either due to faults being not declared on time or
// because the miner run out of time, every sector is reported as failing
// for the current proving period.
func (a *StorageMinerActorCode_I) CheckPoStSubmissionHappened(rt Runtime) InvocOutput {
	TODO() // TODO: validate caller

	if !a._isChallenged(rt) {
		// Miner gets out of a challenge when submit a successful PoSt
		// or when detected by CronActor. Hence, not being in isChallenged means that we are good here
		return rt.SuccessReturn()
	}

	a._expirePreCommittedSectors(rt)

	// oh no -- we missed it. rekt
	a._onMissedPoSt(rt)

	return rt.SuccessReturn()
}

func (a *StorageMinerActorCode_I) _verifyPoStSubmission(rt Runtime, postSubmission poster.PoStSubmission) bool {
	// 1. A proof must be submitted after the postRandomness for this proving
	// period is on chain
	// if rt.ChainEpoch < sm.ProvingPeriodEnd - challengeTime {
	//   rt.Abort("too early")
	// }

	// 2. A proof must be a valid snark proof with the correct public inputs
	// 2.1 Get randomness from the chain at the right epoch
	// postRandomness := rt.Randomness(postSubmission.Epoch, 0)
	// 2.2 Generate the set of challenges
	// challenges := GenerateChallengesForPoSt(r, keys(sm.Sectors))
	// 2.3 Verify the PoSt Proof
	// verifyPoSt(challenges, TODO)

	rt.Abort("TODO") // TODO: finish
	return false
}

func (a *StorageMinerActorCode_I) _expirePreCommittedSectors(rt Runtime) {

	h, st := a.State(rt)
	for _, preCommitSector := range st.PreCommittedSectors() {

		elapsedEpoch := rt.CurrEpoch() - preCommitSector.ReceivedEpoch()
		if elapsedEpoch > MAX_PROVE_COMMIT_SECTOR_EPOCH {
			delete(st.PreCommittedSectors(), preCommitSector.Info().SectorNumber())
			// TODO: potentially some slashing if ProveCommitSector comes late
		}
	}
	UpdateRelease(rt, h, st)

}

// move Sector from Active/Failing
// into Cleared State which means deleting the Sector from state
// remove SectorNumber from all states on chain
// update SectorTable
func (st *StorageMinerActorState_I) _updateClearSector(rt Runtime, sectorNo sector.SectorNumber) {
	sectorState := st.Sectors()[sectorNo].State()
	switch sectorState.StateNumber {
	case SectorActiveSN:
		// expiration case
		st.SectorTable().Impl().ActiveSectors_ -= 1
	case SectorFailingSN:
		// expiration and termination cases
		st.SectorTable().Impl().FailingSectors_ -= 1
	default:
		// Committed and Recovering should not go to Cleared directly
		rt.Abort("invalid state in clearSector")
		// TODO: determine proper error here and error-handling machinery
	}

	delete(st.Sectors(), sectorNo)
	st.ProvingSet_.Remove(sectorNo)
	st.SectorExpirationQueue().Remove(sectorNo)
}

// move Sector from Committed/Recovering into Active State
// reset FaultCount to zero
// update SectorTable
func (st *StorageMinerActorState_I) _updateActivateSector(rt Runtime, sectorNo sector.SectorNumber) {
	sectorState := st.Sectors()[sectorNo].State()
	switch sectorState.StateNumber {
	case SectorCommittedSN:
		st.SectorTable().Impl().CommittedSectors_ -= 1
	case SectorRecoveringSN:
		st.SectorTable().Impl().RecoveringSectors_ -= 1
	default:
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("invalid state in activateSector")
	}

	st.Sectors()[sectorNo].Impl().State_ = SectorActive()
	st.SectorTable().Impl().ActiveSectors_ += 1
}

// failSector moves Sector from Active/Committed/Recovering into Failing State
// and increments FaultCount if asked to do so (DeclareFaults does not increment faultCount)
// move Sector from Failing to Cleared State if increment results in faultCount exceeds MaxFaultCount
// update SectorTable
// remove from ProvingSet
func (st *StorageMinerActorState_I) _updateFailSector(rt Runtime, sectorNo sector.SectorNumber, increment bool) {
	newFaultCount := st.Sectors()[sectorNo].State().FaultCount

	if increment {
		newFaultCount += 1
	}

	state := st.Sectors()[sectorNo].State()
	switch state.StateNumber {
	case SectorActiveSN:
		// wont be terminated from Active
		st.SectorTable().Impl().ActiveSectors_ -= 1
		st.SectorTable().Impl().FailingSectors_ += 1
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorCommittedSN:
		st.SectorTable().Impl().CommittedSectors_ -= 1
		st.SectorTable().Impl().FailingSectors_ += 1
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorRecoveringSN:
		st.SectorTable().Impl().RecoveringSectors_ -= 1
		st.SectorTable().Impl().FailingSectors_ += 1
		st.ProvingSet_.Remove(sectorNo)
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	case SectorFailingSN:
		// no change to SectorTable but increase in FaultCount
		st.Sectors()[sectorNo].Impl().State_ = SectorFailing(newFaultCount)
	default:
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("Invalid sector state in CronAction")
	}

	if newFaultCount > MAX_CONSECUTIVE_FAULTS {
		// TODO: heavy penalization: slash pledge collateral and delete sector
		// TODO: SendMessage(SPA.SlashPledgeCollateral)

		st._updateClearSector(rt, sectorNo)
		st.SectorTable().Impl().TerminationFaultCount_ += 1
	}
}

// Decision is to currently account for power based on sector
// with at least one active deals and deals cannot be updated
// an alternative proposal is to account for power based on active deals
// an improvement proposal is to allow storage deal update in a sector

// TODO: decide whether declared faults sectors should be
// penalized in the same way as undeclared sectors and how

// SubmitPoSt Workflow:
// - Verify PoSt Submission
// - Process ProvingSet.SectorsOn()
//   - State Transitions
//     - Committed -> Active and credit power
//     - Recovering -> Active and credit power
//   - Process Active Sectors (pay miners)
// - Process ProvingSet.SectorsOff()
//     - increment FaultCount
//     - clear Sector and slash pledge collateral if count > MAX_CONSECUTIVE_FAULTS
// - Process Expired Sectors (settle deals and return storage collateral to miners)
//     - State Transition
//       - Failing / Recovering / Active / Committed -> Cleared
//     - Remove SectorNumber from Sectors, ProvingSet
func (a *StorageMinerActorCode_I) SubmitPoSt(rt Runtime, postSubmission poster.PoStSubmission) InvocOutput {
	TODO() // TODO: validate caller

	if !a._isChallenged(rt) {
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("cannot SubmitPoSt when not challenged")
	}

	// Verify correct PoSt Submission
	isPoStVerified := a._verifyPoStSubmission(rt, postSubmission)
	if !isPoStVerified {
		// no state transition, just error out and miner should submitPoSt again
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("TODO")
	}

	h, st := a.State(rt)

	// The proof is verified, process ProvingSet.SectorsOn():
	// ProvingSet.SectorsOn() contains SectorCommitted, SectorActive, SectorRecovering
	// ProvingSet itself does not store states, states are all stored in Sectors.State
	for _, sectorNo := range st.Impl().ProvingSet_.SectorsOn() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Sector state not found in map")
		}
		switch sectorState.State().StateNumber {
		case SectorCommittedSN, SectorRecoveringSN:
			st._updateActivateSector(rt, sectorNo)
		case SectorActiveSN:
			// Process payment in all active deals
			// Note: this must happen before marking sectors as expired.
			// TODO: Pay miner in a single batch message
			// SendMessage(sma.ProcessStorageDealsPayment(sm.Sectors()[sectorNumber].DealIDs()))
		default:
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Invalid sector state in ProvingSet.SectorsOn()")
		}
	}

	// commit state change so that committed and recovering are now active

	// Process ProvingSet.SectorsOff()
	// ProvingSet.SectorsOff() contains SectorFailing
	// SectorRecovering is Proving and hence will not be in GetZeros()
	// heavy penalty if Failing for more than or equal to MAX_CONSECUTIVE_FAULTS
	// otherwise increment FaultCount in Sectors().State
	for _, sectorNo := range st.Impl().ProvingSet_.SectorsOff() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			continue
		}
		switch sectorState.State().StateNumber {
		case SectorFailingSN:
			st._updateFailSector(rt, sectorNo, true)
		default:
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Invalid sector state in ProvingSet.SectorsOff")
		}
	}

	// Process Expiration.
	st._updateExpireSectors(rt)

	UpdateRelease(rt, h, st)

	h, st = a.State(rt)
	terminationFaultCount := st.SectorTable().Impl().TerminationFaultCount_
	Release(rt, h, st)

	a._submitFaultReport(
		rt,
		util.UVarint(0), // NewDeclaredFaults
		util.UVarint(0), // NewDetectedFaults
		util.UVarint(terminationFaultCount),
	)

	a._submitPowerReport(rt)

	// TODO: check EnsurePledgeCollateralSatisfied
	// pledgeCollateralSatisfied

	// Reset Proving Period and report power updates
	// sm.ProvingPeriodEnd_ = PROVING_PERIOD_TIME

	h, st = a.State(rt)
	st.ChallengeStatus().Impl().OnChallengeResponse(rt.CurrEpoch())
	st._updateCommittedSectors(rt)
	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

func (st *StorageMinerActorState_I) _updateExpireSectors(rt Runtime) {
	currEpoch := rt.CurrEpoch()

	queue := st.SectorExpirationQueue()
	for queue.Peek().Expiration() <= currEpoch {
		expiredSectorNo := queue.Pop().SectorNumber()

		state := st.Sectors()[expiredSectorNo].State()
		// sc := sm.Sectors()[expiredSectorNo]
		switch state.StateNumber {
		case SectorActiveSN:
			// Note: in order to verify if something was stored in the past, one must
			// scan the chain. SectorNumber can be re-used.

			// Settle deals
			// SendMessage(sma.SettleExpiredDeals(sc.DealIDs()))
			st._updateClearSector(rt, expiredSectorNo)
		case SectorFailingSN:
			// TODO: check if there is any fault that we should handle here
			// If a SectorFailing Expires, return remaining StorageDealCollateral and remove sector
			// SendMessage(sma.SettleExpiredDeals(sc.DealIDs()))

			// a failing sector expires, no change to FaultCount
			st._updateClearSector(rt, expiredSectorNo)
		default:
			// Note: SectorCommittedSN, SectorRecoveringSN transition first to SectorFailingSN, then expire
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Invalid sector state in SectorExpirationQueue")
		}
	}

	// Return PledgeCollateral for active expirations
	// SendMessage(spa.Depledge) // TODO
	rt.Abort("TODO: refactor use of this method in order for caller to send this message")
}

// RecoverFaults checks if miners have sufficent collateral
// and adds SectorFailing into SectorRecovering
// - State Transition
//   - Failing -> Recovering with the same FaultCount
// - Add SectorNumber to ProvingSet
// Note that power is not updated until it is active
func (a *StorageMinerActorCode_I) RecoverFaults(rt Runtime, recoveringSet sector.CompactSectorSet) InvocOutput {
	TODO() // TODO: validate caller

	if a._isChallenged(rt) {
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("cannot RecoverFaults when sm isChallenged")
	}

	h, st := a.State(rt)

	// for all SectorNumber marked as recovering by recoveringSet
	for _, sectorNo := range recoveringSet.SectorsOn() {
		sectorState, found := st.Sectors()[sectorNo]
		if !found {
			// TODO: determine proper error here and error-handling machinery
			rt.Abort("Sector state not found in map")
		}
		switch sectorState.State().StateNumber {
		case SectorFailingSN:
			// Check if miners have sufficient balances in sma

			// SendMessage(sma.PublishStorageDeals) or sma.ResumeStorageDeals?
			// throw if miner cannot cover StorageDealCollateral

			// Check if miners have sufficient pledgeCollateral

			// copy over the same FaultCount
			st.Sectors()[sectorNo].Impl().State_ = SectorRecovering(sectorState.State().FaultCount)
			st.Impl().ProvingSet_.Add(sectorNo)

			st.SectorTable().Impl().FailingSectors_ -= 1
			st.SectorTable().Impl().RecoveringSectors_ += 1

		default:
			// TODO: determine proper error here and error-handling machinery
			// TODO: consider this a no-op (as opposed to a failure), because this is a user
			// call that may be delayed by the chain beyond some other state transition.
			rt.Abort("Invalid sector state in RecoverFaults")
		}
	}

	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

// DeclareFaults penalizes miners (slashStorageDealCollateral and remove power)
// TODO: decide how much storage collateral to slash
// - State Transition
//   - Active / Commited / Recovering -> Failing
// - Update State in Sectors()
// - Remove Active / Commited / Recovering from ProvingSet
func (a *StorageMinerActorCode_I) DeclareFaults(rt Runtime, faultSet sector.CompactSectorSet) InvocOutput {
	TODO() // TODO: validate caller

	if a._isChallenged(rt) {
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("cannot DeclareFaults when challenged")
	}

	h, st := a.State(rt)

	// fail all SectorNumber marked as Failing by faultSet
	for _, sectorNo := range faultSet.SectorsOn() {
		st._updateFailSector(rt, sectorNo, false)
	}
	declaredFaults := len(faultSet.SectorsOn())

	UpdateRelease(rt, h, st)

	a._submitFaultReport(
		rt,
		util.UVarint(declaredFaults), // DeclaredFaults
		util.UVarint(0),              // DetectedFaults
		util.UVarint(0),              // TerminatedFault
	)

	a._submitPowerReport(rt)

	return rt.SuccessReturn()
}

func (st *StorageMinerActorState_I) _isSealVerificationCorrect(rt Runtime, onChainInfo sector.OnChainSealVerifyInfo) bool {
	// TODO: verify seal @nicola
	// TODO: st.verifySeal(sectorID SectorID, comm sector.OnChainSealVerifyInfo, proof SealProof)

	// verifySeal will also generate CommD on the fly from CommP and PieceSize

	var pieceInfos []sector.PieceInfo // = make([]sector.PieceInfo, 0)

	for dealId := range onChainInfo.DealIDs() {
		// FIXME: Actually get the deal info from the storage market actor and use it to create a sector.PieceInfo.
		_ = dealId

		pieceInfos = append(pieceInfos, nil)
	}

	new(proving.StorageProvingSubsystem_I).VerifySeal(&sector.SealVerifyInfo_I{
		SectorID_: &sector.SectorID_I{
			MinerID_: st.Info().Worker(), // TODO: This is actually miner address. MinerID needs to be derived.
			Number_:  onChainInfo.SectorNumber(),
		},
		OnChain_: onChainInfo,

		// TODO: Make SealCfg sector.SealCfg from miner configuration (where is that?)
		SealCfg_: &sector.SealCfg_I{
			SectorSize_:     st.Info().SectorSize(),
			SubsectorCount_: st.Info().SubsectorCount(),
			Partitions_:     st.Info().Partitions(),
		},

		// TODO: get Randomness sector.SealRandomness using onChainInfo.Epoch
		//Randomness_:
		// TODO: get InteractiveRandomness sector.SealRandomness using onChainInfo.InteractiveEpoch
		//InteractiveRandomness_:
		PieceInfos_: pieceInfos,
	})
	return false // TODO: finish
}

func (st *StorageMinerActorState_I) _sectorExists(sectorNo sector.SectorNumber) bool {
	_, found := st.Sectors()[sectorNo]
	return found
}

// Deals must be posted on chain via sma.PublishStorageDeals before PreCommitSector
// TODO(optimization): PreCommitSector could contain a list of deals that are not published yet.
func (a *StorageMinerActorCode_I) PreCommitSector(rt Runtime, info sector.SectorPreCommitInfo) InvocOutput {
	TODO() // TODO: validate caller

	// no checks needed
	// can be called regardless of Challenged status

	// TODO: might record CurrEpoch for PreCommitSector expiration
	// in other words, a ProveCommitSector must be on chain X Epoch after a PreCommitSector goes on chain
	// TODO: might take collateral in case no ProveCommit follows within sometime
	// TODO: collateral also penalizes repeated precommit to get randomness that one likes
	// TODO: might be a good place for Treasury

	h, st := a.State(rt)

	_, found := st.PreCommittedSectors()[info.SectorNumber()]

	if found {
		// TODO: burn some funds?
		rt.Abort("Sector already pre committed.")
	}

	sectorExists := st._sectorExists(info.SectorNumber())
	if sectorExists {
		rt.Abort("Sector already exists.")
	}

	// TODO: verify every DealID has been published and not yet expired

	precommittedSector := &PreCommittedSector_I{
		Info_:          info,
		ReceivedEpoch_: rt.CurrEpoch(),
	}
	st.PreCommittedSectors()[info.SectorNumber()] = precommittedSector

	UpdateRelease(rt, h, st)
	return rt.SuccessReturn()
}

func (a *StorageMinerActorCode_I) ProveCommitSector(rt Runtime, info sector.SectorProveCommitInfo) InvocOutput {
	TODO() // TODO: validate caller

	h, st := a.State(rt)

	preCommitSector, found := st.PreCommittedSectors()[info.SectorNumber()]

	if !found {
		rt.Abort("Sector not pre committed.")
	}

	sectorExists := st._sectorExists(info.SectorNumber())

	if sectorExists {
		rt.Abort("Sector already exists.")
	}

	// check if ProveCommitSector comes too late after PreCommitSector
	elapsedEpoch := rt.CurrEpoch() - preCommitSector.ReceivedEpoch()

	// if more than MAX_PROVE_COMMIT_SECTOR_EPOCH has elapsed
	if elapsedEpoch > MAX_PROVE_COMMIT_SECTOR_EPOCH {
		// TODO: potentially some slashing if ProveCommitSector comes late

		// expired
		delete(st.PreCommittedSectors(), preCommitSector.Info().SectorNumber())
		UpdateRelease(rt, h, st)
		return rt.ErrorReturn(exitcode.UserDefinedError(0)) // TODO: user dfined error code?
	}

	onChainInfo := &sector.OnChainSealVerifyInfo_I{
		SealedCID_:        preCommitSector.Info().SealedCID(),
		SealEpoch_:        preCommitSector.Info().SealEpoch(),
		InteractiveEpoch_: info.InteractiveEpoch(),
		Proof_:            info.Proof(),
		DealIDs_:          preCommitSector.Info().DealIDs(),
		SectorNumber_:     preCommitSector.Info().SectorNumber(),
	}

	isSealVerificationCorrect := st._isSealVerificationCorrect(rt, onChainInfo)
	if !isSealVerificationCorrect {
		// TODO: determine proper error here and error-handling machinery
		rt.Abort("Seal verification failed")
	}

	// TODO: check EnsurePledgeCollateralSatisfied
	// pledgeCollateralSatisfied

	// determine lastDealExpiration from sma
	// TODO: proper onchain transaction
	// lastDealExpiration := SendMessage(sma, GetLastDealExpirationFromDealIDs(onChainInfo.DealIDs()))
	var lastDealExpiration block.ChainEpoch

	// Note: in the current iteration, a Sector expires only when all storage deals in it have expired.
	// This is likely to change but it aims to meet user requirement that users can enter into deals of any size.
	// add sector expiration to SectorExpirationQueue
	st.SectorExpirationQueue().Add(&SectorExpirationQueueItem_I{
		SectorNumber_: onChainInfo.SectorNumber(),
		Expiration_:   lastDealExpiration,
	})

	// no need to store the proof and randomseed in the state tree
	// verify and drop, only SealCommitment{CommR, DealIDs} on chain
	sealCommitment := &sector.SealCommitment_I{
		SealedCID_:  onChainInfo.SealedCID(),
		DealIDs_:    onChainInfo.DealIDs(),
		Expiration_: lastDealExpiration, // TODO decide if we need this too
	}

	// add SectorNumber and SealCommitment to Sectors
	// set Sectors.State to SectorCommitted
	// Note that SectorNumber will only become Active at the next successful PoSt
	sealOnChainInfo := &SectorOnChainInfo_I{
		SealCommitment_: sealCommitment,
		State_:          SectorCommitted(),
	}

	if st._isChallenged(rt) {
		// move PreCommittedSector to StagedCommittedSectors if in Challenged status
		st.StagedCommittedSectors()[onChainInfo.SectorNumber()] = sealOnChainInfo
	} else {
		// move PreCommittedSector to CommittedSectors if not in Challenged status
		st.Sectors()[onChainInfo.SectorNumber()] = sealOnChainInfo
		st.Impl().ProvingSet_.Add(onChainInfo.SectorNumber())
		st.SectorTable().Impl().CommittedSectors_ += 1
	}

	// now remove SectorNumber from PreCommittedSectors (processed)
	delete(st.PreCommittedSectors(), preCommitSector.Info().SectorNumber())
	UpdateRelease(rt, h, st)

	return rt.SuccessReturn()
}

func getSectorNums(m map[sector.SectorNumber]SectorOnChainInfo) []sector.SectorNumber {
	var l []sector.SectorNumber
	for i, _ := range m {
		l = append(l, i)
	}
	return l
}

Mining Scheduler

import poster "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/poster"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import mining "github.com/filecoin-project/specs/systems/filecoin_mining"
// import storage_indexer "github.com/filecoin-project/specs/systems/filecoin_mining/storage_indexer"

type MiningScheduler struct {
    getStagedSectors()    sector.SectorSet
    getSealedSectors()    sector.SealedSectorSet
    getFaultySectors()    sector.SectorSet
    getRepairedSectors()  sector.SectorSet
    // same as completedSectors/doneSectors
    getExpiredSectors()   sector.SectorSet

    ProducePost(sectors sector.SectorSet) poster.PoStSubmission
    VerifyPost(sectors sector.SectorSet) poster.PoStSubmission

    ReportFaults(
        actor &StorageMinerActorCode
    ) bool

    RemoveSectors(remove sector.SectorSet) bool

    DePledge(
        amount actor.TokenAmount
    ) bool

    // receives from sector storage subsystem
    SealedSector(
        sealedSector mining.SealedSector
    ) bool

    AddSector(
        pledge    actor.TokenAmount
        sectorID  &sector.SectorID
        comm      &sector.OnChainSealVerifyInfo
    ) sector.SectorID
}

Sector

The Sector is a fundamental “storage container” abstraction used in Filecoin Storage Mining. It is the basic unit of storage, and serves to make storage conform to a set of expectations.

import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type Bytes32 Bytes
type MinerID addr.Address
type Commitment Bytes32  // TODO
type UnsealedSectorCID ipld.CID
type SealedSectorCID ipld.CID

// SectorNumber is a numeric identifier for a sector. It is usually
// relative to a Miner.
type SectorNumber UInt

type FaultSet CompactSectorSet

// SectorSize indicates one of a set of possible sizes in the network.
type SectorSize UInt

// Ideally, SectorSize would be an enum
// type SectorSize enum {
//   1KiB = UInt 1024
//   1MiB = Uint 1048576
//   1GiB = Uint 1073741824
//   1TiB = Uint 1099511627776
//   1PiB = Uint 1125899906842624
// }

// TODO make sure this is globally unique
type SectorID struct {
    MinerID
    Number SectorNumber
}

// SectorInDetail describes all the bits of information associated
// with each sector.
// - ID   - a unique identifier assigned once the Sector is registered on chain
// - Size - the size of the sector. there are a set of allowable sizes
//
// NOTE: do not use this struct. It is for illustrative purposes only.
type SectorInDetail struct {
    ID    SectorID
    Size  SectorSize

    Unsealed struct {
        CID     UnsealedSectorCID
        Deals   [deal.StorageDeal]
        Pieces  [piece.Piece]
        // Pieces Tree<Piece> // some tree for proofs
        Bytes
    }

    Sealed struct {
        CID SealedSectorCID
        Bytes
        SealCfg
    }
}

// SectorInfo is an object that gathers all the information miners know about their
// sectors. This is meant to be used for a local index.
type SectorInfo struct {
    ID              SectorID
    UnsealedInfo    UnsealedSectorInfo
    SealedInfo      SealedSectorInfo
    SealVerifyInfo
    ProofAux
}

// UnsealedSectorInfo is an object that tracks the relevant data to keep in a sector
type UnsealedSectorInfo struct {
    UnsealedCID  UnsealedSectorCID  // CommD
    Size         SectorSize
    PieceCount   UVarint  // number of pieces in this sector (can get it from len(Pieces) too)
    Pieces       [piece.PieceInfo]  // wont get externalized easy, -- it's big
    SealCfg  // this will be here as well. it's determined.
    // Deals       [deal.StorageDeal]
}

// SealedSectorInfo keeps around information about a sector that has been sealed.
type SealedSectorInfo struct {
    SealedCID  SealedSectorCID
    Size       SectorSize
    SealCfg
    SealArgs   SealArguments
}

TODO:

  • sector illustration
  • describe how Sectors are used in practice
  • describe sizing ranges of sectors
  • describe “storage/shipping container” analogy

Sector Set

// sector sets
type SectorSet [SectorID]
type UnsealedSectorSet SectorSet
type SealedSectorSet SectorSet

// compact sector sets
type Bitfield Bytes  // TODO: move to the right place -- a lib?
type RLEpBitfield Bitfield  // TODO: move to the right place -- a lib?
type CompactSectorSet RLEpBitfield

Sector Sealing

import file "github.com/filecoin-project/specs/systems/filecoin_files/file"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type Path struct {}  // TODO

type SealRandomness Bytes
type InteractiveSealRandomness Bytes

// SealSeed is unique to each Sector
// SealSeed is:
//    SealSeedHash(MinerID, SectorNumber, SealRandomness, UnsealedSectorCID)
type SealSeed Bytes

type SealCfg struct {
    SectorSize      UInt
    SubsectorCount  UInt
    Partitions      UInt
}

// SealVerifyInfo is the structure of all thte information a verifier
// needs to verify a Seal.
type SealVerifyInfo struct {
    SectorID
    OnChain                OnChainSealVerifyInfo
    SealCfg
    Randomness             SealRandomness
    InteractiveRandomness  InteractiveSealRandomness
    PieceInfos             [PieceInfo]
}

// OnChainSealVerifyInfo is the structure of information that must be sent with
// a message to commit a sector. Most of this information is not needed in the
// state tree but will be verified in sm.CommitSector. See SealCommitment for
// data stored on the state tree for each sector.
type OnChainSealVerifyInfo struct {
    SealedCID         SealedSectorCID  // CommR
    SealEpoch         block.ChainEpoch
    InteractiveEpoch  block.ChainEpoch
    Proof             SealProof
    DealIDs           [deal.DealID]
    SectorNumber
}

// SealCommitment is the information kept in the state tree about a sector.
// SealCommitment is a subset of OnChainSealVerifyInfo.
type SealCommitment struct {
    SealedCID   SealedSectorCID  // CommR
    DealIDs     [deal.DealID]
    Expiration  block.ChainEpoch
}

type SectorPreCommitInfo struct {
    SectorNumber
    SealedCID     SealedSectorCID  // CommR
    SealEpoch     block.ChainEpoch
    DealIDs       [deal.DealID]
}

type SectorProveCommitInfo struct {
    SectorNumber
    Proof             SealProof
    InteractiveEpoch  block.ChainEpoch
}

// ProofAux is meta data required to generate certain proofs
// for a sector, for example PoSt.
// These should be stored and indexed somewhere by CommR.
type ProofAux struct {
    CommRLast          Commitment
    CommC              Commitment

    // TODO: This may be a partially-cached tree.
    // this may be empty
    CommRLastTreePath  file.Path
}

type ProofAuxTmp struct {
    PersistentAux   ProofAux

    SectorID
    CommD           Commitment
    CommR           SealedSectorCID
    CommDTreePaths  [file.Path]
    CommCTreePath   file.Path

    Seeds           [SealSeed]
    SubsectorData   [Bytes]
    Replicas        [Bytes]
    KeyLayers       [Bytes]
}

type SealArguments struct {
    Algorithm        SealAlgorithm
    OutputArtifacts  SealOutputArtifacts
}

type SealProof struct {//<curve, system> {
    Config      SealProofConfig
    ProofBytes  Bytes
}

type SealProofConfig struct {// TODO
}

// TODO: move into proofs lib
type FilecoinSNARKProof struct {}  //<bls12-381, Groth16>

type SealAlgorithm enum {
    // ZigZagPoRep
    StackedDRG
}

// TODO
type SealOutputArtifacts struct {}

type PieceInfo struct {
    Size   UInt  // Size in nodes. For BLS12-381 (capacity 254 bits), must be >= 16. (16 * 8 = 128)
    CommP  UnsealedSectorCID
}
Drawing randomness for sector commitments

Tickets are used as input to the SEAL above in order to tie Proofs-of-Replication to a given chain, thereby preventing long-range attacks (from another miner in the future trying to reuse SEALs).

The ticket has to be drawn from a finalized block in order to prevent the miner from potential losing storage (in case of a chain reorg) even though their storage is intact.

Verification should ensure that the ticket was drawn no farther back than necessary by the miner. We note that tickets can uniquely be associated to a given round in the protocol (lest a hash collision be found), but that the round number is explicited by the miner in commitSector.

We present precisely how ticket selection and verification should work. In the below, we use the following notation:

  • F– Finality (number of rounds)
  • X– round in which SEALing starts
  • Z– round in which the SEAL appears (in a block)
  • Y– round announced in the SEAL commitSector (should be X, but a miner could use any Y <= X), denoted by the ticket selection
    • T– estimated time for SEAL, dependent on sector size
    • G = T + variance– necessary flexibility to account for network delay and SEAL-time variance.

We expect Filecoin will be able to produce estimates for sector commitment time based on sector sizes, e.g.: (estimate, variance) <--- SEALTime(sectors) G and T will be selected using these.

Picking a Ticket to Seal

When starting to prepare a SEAL in round X, the miner should draw a ticket from X-F with which to compute the SEAL.

Verifying a Seal’s ticket

When verifying a SEAL in round Z, a verifier should ensure that the ticket used to generate the SEAL is found in the range of rounds [Z-T-F-G, Z-T-F+G].

In Detail
                               Prover
           ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
          β”‚

          β–Ό
         X-F ◀───────F────────▢ X ◀──────────T─────────▢ Z
     -G   .  +G                 .                        .
  ───(β”Œβ”€β”€β”€β”€β”€β”€β”€β”)───────────────( )──────────────────────( )────────▢
      β””β”€β”€β”€β”€β”€β”€β”€β”˜                 '                        '        time
 [Z-T-F-G, Z-T-F+G]
          β–²

          β”” ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
                              Verifier

Note that the prover here is submitting a message on chain (i.e. the SEAL). Using an older ticket than necessary to generate the SEAL is something the miner may do to gain more confidence about finality (since we are in a probabilistically final system). However it has a cost in terms of securing the chain in the face of long-range attacks (specifically, by mixing in chain randomness here, we ensure that an attacker going back a month in time to try and create their own chain would have to completely regenerate any and all sectors drawing randomness since to use for their fork’s power).

We break this down as follows:

  • The miner should draw from X-F.
  • The verifier wants to find what X-F should have been (to ensure the miner is not drawing from farther back) even though Y (i.e. the round of the ticket actually used) is an unverifiable value.
  • Thus, the verifier will need to make an inference about what X-F is likely to have been based on:
    • (known) round in which the message is received (Z)
    • (known) finality value (F)
    • (approximate) SEAL time (T)
  • Because T is an approximate value, and to account for network delay and variance in SEAL time across miners, the verifier allows for G offset from the assumed value of X-F: Z-T-F, hence verifying that the ticket is drawn from the range [Z-T-F-G, Z-T-F+G].

Sector Index

import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"

// TODO import this from StorageMarket
type SectorIndex struct {
    BySectorID     {sector.SectorID: sector.SectorInfo}
    ByUnsealedCID  {sector.UnsealedSectorCID: sector.SectorInfo}
    BySealedCID    {sector.SealedSectorCID: sector.SectorInfo}
    ByPieceID      {piece.PieceID: sector.SectorInfo}
    ByDealID       {deal.DealID: sector.SectorInfo}
}

type SectorIndexerSubsystem struct {
    Index    SectorIndex
    Store    SectorStore
    Builder  SectorBuilder

    // AddNewDeal is called by StorageMiningSubsystem after the StorageMarket
    // has made a deal. AddNewDeal returns an error when:
    // - there is no capacity to store more deals and their pieces
    AddNewDeal(deal deal.StorageDeal) StageDealResponse

    // bring back if needed.
    // OnNewTipset(chain Chain, epoch blockchain.Epoch) struct {}

    // SectorsExpiredAtEpoch returns the set of sectors that expire
    // at a particular epoch.
    SectorsExpiredAtEpoch(epoch block.ChainEpoch) [sector.SectorID]

    // removeSectors removes the given sectorIDs from storage.
    removeSectors(sectorIDs [sector.SectorID])
}
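
One plausible wiring of AddNewDeal, shown only as a sketch of assumed control flow (using the generated _I/accessor style that implementation files in this spec use), is to delegate directly to the builder:

func (s *SectorIndexerSubsystem_I) AddNewDeal(d deal.StorageDeal) StageDealResponse {
	// Hand the deal to the SectorBuilder, which accumulates deals and
	// tracks sector configuration requirements and piece sizes until a
	// sector is ready to seal.
	return s.Builder().StageDeal(d)
}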

Sector Builder

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
// import smkt "github.com/filecoin-project/specs/systems/filecoin_markets/storage_market"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"

// SectorBuilder accumulates deals, keeping track of their
// sector configuration requirements and the piece sizes.
// Once there is a sector ready to be sealed, NextSector
// will return a sector.

type StageDealResponse struct {
    SectorID sector.SectorID
}

type SectorBuilder struct {
    // DealsToSeal keeps a set of StorageDeal objects.
    // These include the info for the relevant pieces.
    // This builder just accumulates deals, keeping track of their
    // sector configuration requirements, and the piece sizes.
    DealsToSeal [deal.StorageDeal]

    // StageDeal adds a deal to be packed into a sector.
    StageDeal(d deal.StorageDeal) StageDealResponse

    // NextSector returns an UnsealedSectorInfo, which includes the (ordered) set of
    // pieces, and the SealCfg. An error may be returned if SectorBuilder is not
    // ready to produce a Sector.
    //
    // TODO: use go channels? or notifications?
    NextSector() union {i sector.UnsealedSectorInfo, err error}
}

SectorStore

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import file "github.com/filecoin-project/specs/systems/filecoin_files/file"

type SectorStore struct {
    // FileStore stores all the unsealed and sealed sectors.
    FileStore   file.FileStore

    // PieceStore is shared with DataTransfer, and is a way to store or read
    // pieces temporarily. This may or may not be backed by the FileStore above.
    PieceStore  piece.PieceStore

    // GetSectorFiles returns the files for a given sector id.
    // If the SectorID does not have any sector files associated yet, GetSectorFiles
    // returns an error.
    GetSectorFiles(id sector.SectorID) union {f SectorFiles, err error}

    // Get information, including a merkle tree file/path, needed to generate PoSt for a sector.
    GetSectorProofAux(id sector.SectorID) sector.ProofAux

    StoreSectorProofAux(id sector.SectorID, proofAux sector.ProofAux) union {proofAux sector.ProofAux, err error}

    // CreateSectorFiles allocates two sector files, one for unsealed and one for
    // sealed sector.
    CreateSectorFiles(id sector.SectorID) union {f SectorFiles, err error}
}

// SectorFiles is a datastructure that groups two file objects and a sectorID.
// These files are where unsealed and sealed sectors should go.
type SectorFiles struct {
    SectorID  sector.SectorID
    Unsealed  file.File
    Sealed    file.File
}

TODO

  • talk about how sectors are stored

Storage Proving

Filecoin Proving Subsystem

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import poster "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/poster"
import sealer "github.com/filecoin-project/specs/systems/filecoin_mining/storage_proving/sealer"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type StorageProvingSubsystem struct {
    SectorSealer   sealer.SectorSealer
    PostGenerator  poster.PostGenerator

    VerifySeal(sv sector.SealVerifyInfo, pieceInfos [sector.PieceInfo]) union {ok bool, err error}

    ValidateBlock(block block.Block)

    // TODO: remove this?
    // GetPieceInclusionProof(pieceRef CID) union { PieceInclusionProofs, error }
}

Sector Sealer

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import file "github.com/filecoin-project/specs/systems/filecoin_files/file"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"

type SealInputs struct {
    SectorID       sector.SectorID
    SealCfg        sector.SealCfg
    MinerID        addr.Address
    RandomSeed     sector.SealRandomness
    UnsealedPaths  [file.Path]
    SealedPaths    [file.Path]
    DealIDs        [deal.DealID]
}

type CreateSealProofInputs struct {
    SectorID               sector.SectorID
    SealCfg                sector.SealCfg
    InteractiveRandomSeed  sector.InteractiveSealRandomness
    SealedPaths            [file.Path]
    SealOutputs
}

type SealOutputs struct {
    ProofAuxTmp sector.ProofAuxTmp
}

type CreateSealProofOutputs struct {
    SealInfo  sector.SealVerifyInfo
    ProofAux  sector.ProofAux
}

type SectorSealer struct {
    SealSector() union {so SealOutputs, err error}
    CreateSealProof(si CreateSealProofInputs) union {so CreateSealProofOutputs, err error}

    MaxUnsealedBytesPerSector(SectorSize UInt) UInt
}
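
As a usage illustration, a hypothetical driver for the seal-then-prove flow implied by these types, assuming the DSL's union results surface in Go as (value, error) pairs:

func sealAndProve(sealer SectorSealer, inputs CreateSealProofInputs) (CreateSealProofOutputs, error) {
	// Seal first; the outputs carry the ProofAuxTmp needed for proving.
	sealOut, err := sealer.SealSector()
	if err != nil {
		return CreateSealProofOutputs{}, err
	}
	inputs.SealOutputs = sealOut
	return sealer.CreateSealProof(inputs)
}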

Sector Poster

import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import sectorIndex "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type UInt64 UInt

// TODO: move this to somewhere the blockchain can import
// candidates:
// - filproofs - may have to learn about Sectors (and if we move Seal stuff, Deals)
// - "blockchain/builtins" or something like that - a component in the blockchain that handles storage verification
type PoStSubmission struct {
    PostProof   sector.PoStProof
    ChainEpoch  block.ChainEpoch
}

type PostGenerator struct {
    GeneratePoSt(
        postCfg        sector.PoStCfg
        challengeSeed  sector.PoStRandomness
        faults         sector.FaultSet
        sectors        [sector.SectorID]
        sectorStore    sectorIndex.SectorStore
    ) sector.PoStProof
}
package poster

import filproofs "github.com/filecoin-project/specs/libraries/filcrypto/filproofs"
import sector "github.com/filecoin-project/specs/systems/filecoin_mining/sector"
import sectorIndex "github.com/filecoin-project/specs/systems/filecoin_mining/sector_index"

// See "Proof-of-Spacetime Parameters" Section
// TODO: Unify with orient model.
const POST_CHALLENGE_DEADLINE = uint(480)

func GeneratePoStWitness(postCfg sector.PoStCfg, challengeSeed sector.PoStRandomness, faults sector.FaultSet, sectors []sector.SectorID, sectorStore sectorIndex.SectorStore) sector.PoStWitness {
	// Question: Should we pass metadata into FilProofs so it can interact with SectorStore directly?
	// Like this:
	// PoStResponse := SectorStorageSubsystem.GeneratePoSt(sectorSize, challenge, faults, sectorsMetadata);

	// Question: Or should we resolve + manifest trees here and pass them in?
	// Like this:
	// trees := sectorsMetadata.map(func(md) { SectorStorage.GetMerkleTree(md.MerkleTreePath) });
	// Done this way, we redundantly pass the tree paths in the metadata. At first thought, the other way
	// seems cleaner.
	// PoStResponse := SectorStorageSubsystem.GeneratePoSt(sectorSize, challenge, faults, sectorsMetadata, trees);

	// Proposed answer: An alternative, which avoids the downsides of both of the above, by adding a new filproofs API call:

	sdr := filproofs.SDRParams(nil, postCfg)

	return sdr.GeneratePoStWitness(challengeSeed, faults, sectorStore)
}

func GeneratePoStProof(postCfg sector.PoStCfg, witness sector.PoStWitness) sector.PoStProof {
	sdr := filproofs.SDRParams(nil, postCfg)
	return sdr.GeneratePoStProof(witness)
}
PoSt Generator object

Markets in Filecoin

The Filecoin project is a protocol, a platform, and a marketplace. There are two major components to Filecoin markets: the storage market and the retrieval market. While both markets are expected to operate primarily off the blockchain, storage deals made in the storage market will be published on chain and enforced by the protocol. Storage deal negotiation and order matching are expected to happen off chain in the first version of Filecoin. Retrieval deals are also negotiated off chain and executed with micropayments between transacting parties in payment channels.

Even though most of the market actions happen off the blockchain, there are on-chain invariants and structures that create the economic framework for network success and allow for positive emergent behavior. Storage Mining in Filecoin can be compared to maintaining a storage cargo container (reference to Sector) with storage deals sitting in it. There must be at least one active deal in a Sector for the Sector to count towards a miner’s power. The Sector is only considered Active after its first PoSt has been submitted successfully (reference to Sector FSM). A Sector no longer counts for power when it leaves the Active state, either through fault or expiration. A Sector expires when all the deals in the Sector have expired, and only then is StorageDealCollateral returned. In the first version of Filecoin, deals are immutable once added to the chain.

Market Orders - Asks

Asks contain the terms on which a miner is willing to provide its services. They are propagated via gossipsub.

A StorageAsk contains basic storage deal terms of price, collateral, and minimum piece size (the size of the smallest piece it is willing to store under these terms). It also contains a Timestamp for its creation in ChainEpoch, a MaxDuration for the maximum duration in ChainEpoch that a miner is willing to store under these terms, and a MinDuration. If a miner wishes to override an ask, it can issue a new ask with a higher sequence number (SeqNo), as sketched after the StorageAsk definition below. Clients look at all the StorageAsks in a gossip network and decide which miner to contact to enter into a deal. The deal negotiation process happens off chain, and the client submits a StorageDealProposal to the miner, as detailed in Storage Deals, after an agreement is reached.

TODO:

  • Retrieval asks
import util "github.com/filecoin-project/specs/util"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"

type StorageAsk struct {
    Price         util.BigInt  // attoFIL per GiB per epoch
    Collateral    util.BigInt  // attoFIL per GiB per epoch

    MinPieceSize  uint64
    Miner         addr.Address
    Timestamp     block.ChainEpoch
    MaxDuration   block.ChainEpoch
    MinDuration   block.ChainEpoch
    SeqNo         uint64
}
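
As an illustration of the SeqNo override rule above, here is a minimal, self-contained Go sketch of client-side ask bookkeeping; the local types and function are hypothetical stand-ins for the spec types:

type Address string

type Ask struct {
	Miner        Address
	Price        uint64 // attoFIL per GiB per epoch
	MinPieceSize uint64
	SeqNo        uint64
}

// latestAsks keeps only the highest-SeqNo ask seen from each miner,
// so a newer ask overrides any older one from the same miner.
func latestAsks(asks []Ask) map[Address]Ask {
	out := make(map[Address]Ask)
	for _, a := range asks {
		if cur, ok := out[a.Miner]; !ok || a.SeqNo > cur.SeqNo {
			out[a.Miner] = a
		}
	}
	return out
}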

Verifiability

TODO:

  • write what parts of market orders are verifiable, and how
    • eg: miner storage ask could carry the amount of storage available (which should be at most (pledge - sectors sealed))
    • eg: client storage bid price could be checked against available money in the StorageMarket

Market Deals

There are two types of deals in Filecoin markets: storage deals and retrieval deals. Storage deals are recorded on the blockchain and enforced by the protocol. Retrieval deals are off chain and enabled by micropayment channels between transacting parties. All deal negotiation happens off chain, and a request-response style storage deal protocol is in place to submit agreed-upon storage deals onto the network with PublishStorageDeals and CommitSector to gain storage power on chain. Hence, there is a StorageDealProposal and a RetrievalDealProposal, which are half-signed contracts submitted by clients to be counter-signed and posted on chain by the miners.

Filecoin Storage Market Deal Flow

Add Storage Deal and Power

  • 1. StorageClient and StorageProvider call StorageMarketActor.AddBalance to deposit funds into Storage Market. There are two fund states in the Storage Market, Locked and Available.
    • StorageClient and StorageProvider can call WithdrawBalance before any deal is made. (move to state X)
  • 2. StorageClient and StorageProvider negotiate a deal off chain. StorageClient sends a StorageDealProposal to a StorageProvider.
    • StorageProvider verifies the StorageDeal by checking address and signature of StorageClient, checking the proposal’s StartEpoch is after the current Epoch, checking StorageClient did not call withdraw in the last X Epoch (WithdrawBalance should take at least X Epoch), checking both StorageProvider and StorageClient have sufficient available balances in StorageMarketActor.
  • 3. StorageProvider signs the StorageDealProposal by constructing an on-chain message.
    • StorageProvider calls PublishStorageDeals in StorageMarketActor to publish this on-chain message which will generate a DealID for each StorageDeal and store a mapping from DealID to StorageDeal. However, the deals are not active at this point.
      • As a backup, StorageClient MAY call PublishStorageDeals with the StorageDeal, to activate the deal if they can obtain the signed on-chain message from StorageProvider.
      • It is possible for either StorageProvider or StorageClient to try to enter into two deals simultaneously with funds available only for one. Only the first deal to commit to the chain will clear; the second will fail with error errorcode.InsufficientFunds.
    • StorageProvider calls HandleStorageDeal in StorageMiningSubsystem which will then add the StorageDeal into a Sector.
  • 4. Once the miner finishes packing a Sector, it generates a Sealed Sector and calls StorageMinerActor.CommitSector to verify the seal, store the sector expiration, and record the mapping from SectorNumber to SealCommitment. It will also place this newly added Sector in the list of CommittedSectors in StorageMinerActor. StorageMiner does not earn any power for this newly added sector until its first PoSt has been submitted. Note that CommitSector can be called at any time. However, sectors will be added to a staging buffer StagedCommittedSectors when miners are in the Challenged status (see 5 below).

Receive Challenge

  • 5. Miners enter the Challenged status whenever NotifyOfPoStChallenge is called by the chain. Miners then have X Epochs as the ProvingPeriod to submit a successful PoSt before CheckPoStSubmissionHappened is called by the chain. Miners can only exit the Challenged status through SubmitPoSt or onMissedPoSt.
  • 6. Miners are not allowed to call DeclareFaults or RecoverFaults while they are in the Challenged status, but CommitSector is allowed and sectors will be added to a StagedCommittedSectors buffer. When miners exit the Challenged status, StagedCommittedSectors will be copied over to their Sectors, ProvingSet, and SectorTable and then emptied.

Declare and Recover Faults

  • 7. Declared faults are penalized to a smaller degree than detected faults by CronActor. Miners declare failing sectors by invoking StorageMinerActor.DeclareFaults; X of the StorageDealCollateral will be slashed and power corresponding to these sectors will be temporarily lost. However, miners can only declare faults when they are not in the Challenged status.
  • 8. Miners can then recover faults by invoking StorageMinerActor.RecoverFaults, provided they have sufficient StorageDealCollateral in their available balances. FaultySectors are recommitted, and power is only restored at the next PoSt submission. Miners cannot invoke RecoverFaults while in the Challenged status.
  • 9. Sectors that are failing for storagemining.MaxFaults consecutive ChainEpochs will be cleared and result in StoragePowerActor.SlashPledgeCollateral.
    • TODO: set X parameter

Submit PoSt

(TODO: move into Storage Mining)

On every PoSt Submission, the following steps happen.

  • 10. StorageMinerActor first verifies the PoSt Submission. If PoSt is done correctly, all Committed and Recovering sectors will be marked as Active and power is credited to these sectors. Payments will be processed for deals that are Active by invoking StorageMarketActor.ProcessStorageDealsPayment.
  • 11. Any sectors missing from the ProvingSet are considered failing. Increment FaultCount on these sectors; if any of them have been failing for MaxFaultCount consecutive ChainEpochs, they are terminated and cleared from the network.
  • 13. Process sector expiration. Sectors expire when all deals in that sector have expired. Expired sectors will be cleared and StorageDealCollateral for both miners and users returned depending on the state that the sectors are in.
  • 14. Submit FaultReport and PowerReport to StoragePowerActor for slashing and power accounting.
  • 15. Check and ensure that Pledge Collateral is satisfied. TODO: some details are missing here, also related to ProvingPeriod depending on PoSt construction.
  • 16. Update challenge status and add Committed sectors received during the challenge to the Sectors, ProvingSet, and SectorTable.
  • 17. All Sectors will be considered in DetectedFaults when a miner fails to SubmitPoSt in a proving period, as detected by onMissedPoSt in CheckPoStSubmissionHappened (move to State 18).

Detect Faults

(TODO: move into Storage Mining)

  • 18. CronActor calls StoragePowerActor.EpochTick at every block. This calls StorageMinerActor.CheckPoStSubmissionHappened on all the miners whose ProvingPeriod is up.
    • If no PoSt is submitted by the end of the ProvingPeriod, onMissedPoSt detects the missing PoSt, and sets all sectors to Failing.
    • TODO: reword in terms of a conditional in the mining cycle
    • When sector faults are detected, some of the StorageDealCollateral and PledgeCollateral is slashed, and power is lost.
    • If the faults persist for storagemining.MaxFaultCount then sectors are removed/cleared from StorageMinerActor.
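
A rough sketch of that conditional, as it might run from EpochTick for each miner whose ProvingPeriod has ended; all names, types, and the constant value here are illustrative, not protocol parameters:

type SectorState struct {
	Failing    bool
	FaultCount int
}

const MaxFaultCount = 3 // stand-in for storagemining.MaxFaultCount

func onProvingPeriodEnd(postSubmitted bool, sectors []*SectorState) {
	if postSubmitted {
		return // handled by the SubmitPoSt path
	}
	// Missed PoSt: all sectors become Failing; collateral slashing and
	// power loss are triggered elsewhere.
	for _, s := range sectors {
		s.Failing = true
		s.FaultCount++
		if s.FaultCount >= MaxFaultCount {
			// sector would be removed/cleared from StorageMinerActor
		}
	}
}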

Deal Code

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

type DealID UVarint
type DealCID ipld.CID
type ProposalCID ipld.CID
type PayloadCID ipld.CID
type Signature struct {}  // TODO

// Note: Deal Collateral is only released and returned to clients and miners
// when the storage deal stops counting towards power. In the current iteration,
// it will be released when the sector containing the storage deals expires,
// even though some storage deals can expire earlier than the sector does.
// Collaterals are denominated in PerEpoch to incur a cost for self dealing or
// minimal deals that last for a long time.
// TODO: ClientCollateralPerEpoch may not be needed and removed pending future confirmation.
// TODO: StoragePrice is paid out by Epoch duration but the exact mechanics still lacks some details.
// There will be a Minimum value for both client and provider deal collateral.
type StorageDealProposal struct {
    PieceCID                      piece.PieceCID  // 35 bytes CommP
    PieceSize                     piece.PieceSize
    Client                        addr.Address
    Provider                      addr.Address
    ClientSignature               Signature

    StartEpoch                    block.ChainEpoch  // a deal is invalid if it is published to the chain after StartEpoch
    EndEpoch                      block.ChainEpoch
    StoragePricePerEpoch          actor.TokenAmount
    ProviderCollateralPerEpoch    actor.TokenAmount
    ClientCollateralPerEpoch      actor.TokenAmount  // potentially collapse into one with ProviderDealCollateral

    ClientBalanceRequirement()    actor.TokenAmount  // (ClientCollateralPerEpoch + StoragePricePerEpoch) * (EndEpoch - StartEpoch)
    ProviderBalanceRequirement()  actor.TokenAmount  // ProviderCollateralPerEpoch * (EndEpoch - StartEpoch)
    Duration()                    block.ChainEpoch

    CID()                         ProposalCID
}

// Everything in this struct will go on chain
// We are enforcing that StorageProvider calls PublishStorageDeal to get back a StorageDeal struct
// Provider's signature is implicit in the onchain call
type StorageDeal struct {
    Proposal()       StorageDealProposal  // can extract proposal from the message
    ProposalMessage  msg.Message  // counter signature is implicit in the message
    ID               DealID

    CID()            DealCID
}

type RetrievalDealProposal struct {}  // TODO
type RetrievalDeal struct {
    Proposal          RetrievalDealProposal
    CounterSignature  Signature
}
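
For concreteness, the balance requirements spelled out in the StorageDealProposal comments above reduce to the following arithmetic (illustrative Go; TokenAmount and ChainEpoch stand in for actor.TokenAmount and block.ChainEpoch):

type TokenAmount uint64
type ChainEpoch uint64

// clientBalanceRequirement mirrors the comment on ClientBalanceRequirement():
// (ClientCollateralPerEpoch + StoragePricePerEpoch) * (EndEpoch - StartEpoch).
func clientBalanceRequirement(clientCollateralPerEpoch, storagePricePerEpoch TokenAmount, start, end ChainEpoch) TokenAmount {
	return (clientCollateralPerEpoch + storagePricePerEpoch) * TokenAmount(end-start)
}

// providerBalanceRequirement mirrors ProviderBalanceRequirement():
// ProviderCollateralPerEpoch * (EndEpoch - StartEpoch).
func providerBalanceRequirement(providerCollateralPerEpoch TokenAmount, start, end ChainEpoch) TokenAmount {
	return providerCollateralPerEpoch * TokenAmount(end-start)
}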

Deal Flow

Deal Flow Sequence Diagram (open in new tab)

Storage Market in Filecoin

The Storage Market subsystem is the data entry point into the network. Storage miners only earn power from data stored in a storage deal, and all deals live on the Filecoin network. The specific deal negotiation process happens off chain; clients and miners enter a storage deal after an agreement has been reached, and post storage deals on the Filecoin network to earn block rewards and get paid for storing the data in the storage deal. A deal is only valid when it is posted on chain with signatures from both parties and, at the time of posting, there are sufficient balances for both parties in StorageMarketActor to honor the deal in terms of deal price and deal collateral.

Both StorageClient and StorageProvider need to first deposit Filecoin tokens into StorageMarketActor before participating in the storage market. StorageClient can then send a StorageDealProposal to the StorageProvider along with the data. A partially signed StorageDeal is called a StorageDealProposal. StorageProvider can then put this storage deal in their Sector and countersign the StorageDealProposal, resulting in a StorageDeal. A StorageDeal is only in effect when it is submitted to and accepted by the StorageMarketActor on chain before the ProposalExpiryEpoch. StorageDeal does not include a StartEpoch, as it comes into effect at the block when the deal is accepted into the network. Hence, StorageProvider should publish the deal as soon as possible.

StorageDeal payments are processed at every successful PoSt submission, and StorageMarketActor will move locked funds from StorageClient to StorageProvider. SlashStorageDealCollateral is also triggered on PoSt submission when a Sector containing a particular StorageDeal is faulty or when miners fail to submit a PoSt related to a StorageDeal. Note that StorageProvider does not need to be the same entity as the StorageMinerActor, as long as the deal is stored in at least one Sector throughout the lifetime of the storage deal.

TODO: process StorageDeal payments at larger intervals rather than on every PoSt submission

Storage Market Actor

StorageMarketActor is responsible for processing and managing on-chain deals. This is also the entry point of all storage deals and data into the system. It maintains a mapping of StorageDealID to StorageDeal and keeps track of locked balances of StorageClient and StorageProvider. When a deal is posted on chain through the StorageMarketActor, it will first check if both transacting parties have sufficient balances locked up and include the deal on chain. On every successful submission of PoStProof, StorageMarketActor will credit the StorageProvider a fraction of the storage fee based on how many blocks have passed since the last PoStProof. In the event that there are sectors included in the FaultSet, StorageMarketActor will fetch deal information from the chain and SlashStorageFault for faulting on those deals. Similarly, when a PoStProof is missed by the end of a ProvingPeriod, SlashStorageFault will also be called by the CronActor to penalize StorageProvider for dropping a StorageDeal.

(You can see the old Storage Market Actor here.)

StorageMarketActor interface
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"

type StorageParticipantBalance struct {
    Locked     actor.TokenAmount
    Available  actor.TokenAmount
}

type BalancesHAMT {addr.Address: StorageParticipantBalance}
type DealsAMT {deal.DealID: deal.StorageDeal}

type StorageMarketActorState struct {
    Balances  BalancesHAMT
    Deals     DealsAMT

    // generate storage deal id
    _generateStorageDealID(rt Runtime, deal deal.StorageDeal) deal.DealID

    // check if StorageDeal is signed before expiry
    // check if StorageDeal has the right signatures
    // check if minimum StoragePrice and StorageCollateral are met
    // check if provider and client have sufficient balances
    _validateNewStorageDeal(rt Runtime, deal deal.StorageDeal) bool

    _lockFundsForStorageDeal(rt Runtime, deal deal.StorageDeal)
    _processStorageDealPayment(rt Runtime, deal deal.StorageDeal, duration block.ChainEpoch)
    _settleExpiredStorageDeal(rt Runtime, deal deal.StorageDeal)
    _slashLockedFunds(rt Runtime, amount actor.TokenAmount)

    _lockBalance(rt Runtime, addr addr.Address, amount actor.TokenAmount)
    _unlockBalance(rt Runtime, addr addr.Address, amount actor.TokenAmount)
    _transferBalance(
        rt           Runtime
        fromLocked   addr.Address
        toAvailable  addr.Address
        amount       actor.TokenAmount
    )
    _isBalanceAvailable(a addr.Address, amount actor.TokenAmount) bool
}

type StorageMarketActorCode struct {
    WithdrawBalance(rt Runtime, balance actor.TokenAmount)
    AddBalance(rt Runtime)  // amount is in the message

    // called by CommitSector in StorageMiningSubsystem
    // a StorageDeal is only published on chain when it passes verifyStorageDeal
    // a DealID will be assigned and stored in the mapping of DealID to StorageDeal
    // PublishStorageDeals should be called before CommitSector
    // an unregistered StorageDeal will not be processed
    PublishStorageDeals(rt Runtime, deals [deal.StorageDeal]) [PublishStorageDealResponse]

    // called by CronActor when no PoSt is submitted within a ProvingPeriod
    // triggers subsequent calls on different SectorSets
    // pulls the SectorSet from the runtime
    HandleCronAction(rt Runtime)

    // called by CronActor / onPoStSubmission on ExpiredSet
    // removes StorageDeal from StorageMarketActor
    // if the sector no longer contains any active deals
    // returns StorageCollateral to miners
    SettleExpiredDeals(rt Runtime, dealIDs [deal.DealID])

    // called by CronActor / onPoStSubmission on ActiveSet to process deal payments
    // goes through StorageDealIDs; if the IDs are active in MarketActor,
    // payment will be processed
    ProcessStorageDealsPayment(rt Runtime, dealIDs [deal.DealID], duration block.ChainEpoch)

    // called by CronActor / onDeclareFault on FaultSet to slash deal collateral
    // Deals should be slashed for a single proving period
    SlashStorageDealsCollateral(rt Runtime, dealIDs [deal.DealID])

    GetLastDealExpirationFromDealIDs(rt Runtime, dealIDs [deal.DealID]) block.ChainEpoch

    // TODO: StorageDeals should be renewable
    // UpdateStorageDeal(newStorageDeals [deal.StorageDeal])
}
StorageMarketActor implementation
package storage_market

import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"
import block "github.com/filecoin-project/specs/systems/filecoin_blockchain/struct/block"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
import vmr "github.com/filecoin-project/specs/systems/filecoin_vm/runtime"
import ipld "github.com/filecoin-project/specs/libraries/ipld"
import util "github.com/filecoin-project/specs/util"

////////////////////////////////////////////////////////////////////////////////
// Boilerplate
////////////////////////////////////////////////////////////////////////////////
type InvocOutput = msg.InvocOutput
type Runtime = vmr.Runtime
type Bytes = util.Bytes
type State = StorageMarketActorState

func (a *StorageMarketActorCode_I) State(rt Runtime) (vmr.ActorStateHandle, State) {
	h := rt.AcquireState()
	stateCID := h.Take()
	stateBytes := rt.IpldGet(ipld.CID(stateCID))
	if stateBytes.Which() != vmr.Runtime_IpldGet_FunRet_Case_Bytes {
		rt.Abort("IPLD lookup error")
	}
	state := DeserializeState(stateBytes.As_Bytes())
	return h, state
}
func Release(rt Runtime, h vmr.ActorStateHandle, st State) {
	checkCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.Release(checkCID)
}
func UpdateRelease(rt Runtime, h vmr.ActorStateHandle, st State) {
	newCID := actor.ActorSubstateCID(rt.IpldPut(st.Impl()))
	h.UpdateRelease(newCID)
}
func (st *StorageMarketActorState_I) CID() ipld.CID {
	panic("TODO")
}
func DeserializeState(x Bytes) State {
	panic("TODO")
}

////////////////////////////////////////////////////////////////////////////////

func (st *StorageMarketActorState_I) _generateStorageDealID(rt Runtime, storageDeal deal.StorageDeal) deal.DealID {
	// TODO
	var dealID deal.DealID
	return dealID
}

// Called by PublishStorageDeals and GetLastDealExpirationFromDealIDs (consider removing this)
// This is the check before a StorageDeal appears onchain
// It checks the following:
//   - verify deal did not expire when it is signed
//   - verify deal hits the chain before StartEpoch
//   - verify client and provider address and signature are correct (TODO may not be needed)
//   - verify StorageDealCollateral match requirements for MinimumStorageDealCollateral
//   - verify client and provider have sufficient balances
func (st *StorageMarketActorState_I) _validateNewStorageDeal(rt Runtime, d deal.StorageDeal) bool {
	// TODO verify client and provider signature
	// TODO verify minimum StoragePrice, ProviderCollateralPerEpoch, and ClientCollateralPerEpoch
	// TODO: verify deal did not expire when it is signed

	currEpoch := rt.CurrEpoch()
	p := d.Proposal()

	// deal has started before publish
	if p.StartEpoch() < currEpoch {
		return false
	}

	// TODO: verify client and provider address and signature are correct (may not be needed)

	// verify StorageDealCollateral match requirements for MinimumStorageDealCollateral
	if p.ProviderCollateralPerEpoch() < deal.MIN_PROVIDER_DEAL_COLLATERAL_PER_EPOCH ||
		p.ClientCollateralPerEpoch() < deal.MIN_CLIENT_DEAL_COLLATERAL_PER_EPOCH {
		return false
	}

	// verify client and provider have sufficient balances
	isClientBalAvailable := st._isBalanceAvailable(p.Client(), p.ClientBalanceRequirement())
	isProviderBalAvailable := st._isBalanceAvailable(p.Provider(), p.ProviderBalanceRequirement())

	if !isClientBalAvailable || !isProviderBalAvailable {
		return false
	}

	return true
}

// TODO: consider returning a boolean
func (st *StorageMarketActorState_I) _lockBalance(rt Runtime, addr addr.Address, amount actor.TokenAmount) {
	if amount < 0 {
		rt.Abort("negative amount.")
	}

	currBalance, found := st.Balances()[addr]
	if !found {
		rt.Abort("addr not found.")
	}

	// Guard against locking more than is available, which would drive the
	// available balance negative.
	if currBalance.Available() < amount {
		rt.Abort("insufficient available balance.")
	}

	currBalance.Impl().Available_ -= amount
	currBalance.Impl().Locked_ += amount
}

func (st *StorageMarketActorState_I) _unlockBalance(rt Runtime, addr addr.Address, amount actor.TokenAmount) {
	if amount < 0 {
		rt.Abort("negative amount.")
	}

	currBalance, found := st.Balances()[addr]
	if !found {
		rt.Abort("addr not found.")
	}

	currBalance.Impl().Locked_ -= amount
	currBalance.Impl().Available_ += amount
}

// move funds from locked in client to available in provider
func (st *StorageMarketActorState_I) _transferBalance(rt Runtime, fromLocked addr.Address, toAvailable addr.Address, amount actor.TokenAmount) {
	fromB := st.Balances()[fromLocked]
	toB := st.Balances()[toAvailable]

	if fromB.Locked() < amount {
		rt.Abort("attempt to transfer more funds than are locked")
		return
	}

	fromB.Impl().Locked_ -= amount
	toB.Impl().Available_ += amount
}

func (st *StorageMarketActorState_I) _isBalanceAvailable(a addr.Address, amount actor.TokenAmount) bool {
	bal := st.Balances()[a]
	return bal.Available() >= amount
}

func (st *StorageMarketActorState_I) _lockFundsForStorageDeal(rt Runtime, deal deal.StorageDeal) {
	p := deal.Proposal()

	st._lockBalance(rt, p.Client(), p.ClientBalanceRequirement())
	st._lockBalance(rt, p.Provider(), p.ProviderBalanceRequirement())
}

func (st *StorageMarketActorState_I) _processStorageDealPayment(rt Runtime, deal deal.StorageDeal, duration block.ChainEpoch) {
	p := deal.Proposal()

	amount := actor.TokenAmount(uint64(p.StoragePricePerEpoch()) * uint64(duration))
	st._transferBalance(rt, p.Client(), p.Provider(), amount)
}

func (st *StorageMarketActorState_I) _settleExpiredStorageDeal(rt Runtime, deal deal.StorageDeal) {
	// TODO
}

func (st *StorageMarketActorState_I) _slashLockedFunds(rt Runtime, amount actor.TokenAmount) {
	// TODO
}

////////////////////////////////////////////////////////////////////////////////

func (a *StorageMarketActorCode_I) WithdrawBalance(rt Runtime, balance actor.TokenAmount) {
	h, st := a.State(rt)

	var msgSender addr.Address // TODO replace this from VM runtime

	if balance < 0 {
		rt.Abort("negative balance to withdraw.")
	}

	senderBalance, found := st.Balances()[msgSender]
	if !found {
		rt.Abort("sender address not found.")
	}

	if senderBalance.Available() < balance {
		rt.Abort("insufficient balance.")
	}

	senderBalance.Impl().Available_ = senderBalance.Available() - balance
	st.Balances()[msgSender] = senderBalance

	// TODO send funds to msgSender with `transferBalance` in VM runtime

	UpdateRelease(rt, h, st)
}

func (a *StorageMarketActorCode_I) AddBalance(rt Runtime) {
	h, st := a.State(rt)

	var msgSender addr.Address    // TODO replace this
	var balance actor.TokenAmount // TODO replace this

	// TODO subtract balance from msgSender
	// TODO add balance to StorageMarketActor
	if balance < 0 {
		rt.Abort("negative balance to add.")
	}

	senderBalance, found := st.Balances()[msgSender]
	if found {
		senderBalance.Impl().Available_ = senderBalance.Available() + balance
		st.Balances()[msgSender] = senderBalance
	} else {
		st.Balances()[msgSender] = &StorageParticipantBalance_I{
			Locked_:    0,
			Available_: balance,
		}
	}

	UpdateRelease(rt, h, st)
}

func (a *StorageMarketActorCode_I) PublishStorageDeals(rt Runtime, newStorageDeals []deal.StorageDeal) []PublishStorageDealResponse {
	h, st := a.State(rt)

	l := len(newStorageDeals)
	response := make([]PublishStorageDealResponse, l)

	// TODO: verify behavior here
	// some StorageDeals will pass and some will fail
	// if an earlier StorageDeal consumes some balance such that
	// funds are no longer sufficient for later storage deals,
	// all later storage deals will return an error
	// TODO: confirm st here will be changing
	for i, newDeal := range newStorageDeals {
		if st._validateNewStorageDeal(rt, newDeal) {
			st._lockFundsForStorageDeal(rt, newDeal)
			id := st._generateStorageDealID(rt, newDeal)
			st.Deals()[id] = newDeal
			response[i] = PublishStorageDealSuccess(id)
		} else {
			response[i] = PublishStorageDealError()
		}
	}

	UpdateRelease(rt, h, st)

	return response
}

func (a *StorageMarketActorCode_I) HandleCronAction(rt Runtime) {
	panic("TODO")
}

func (a *StorageMarketActorCode_I) SettleExpiredDeals(rt Runtime, storageDealIDs []deal.DealID) {
	// for dealID := range storageDealIDs {
	// Return the storage collateral
	// storageDeal := sma.Deals()[dealID]
	// storageCollateral := storageDeal.StorageCollateral()
	// provider := storageDeal.Provider()
	// assert(sma.Balances()[provider].Locked() >= storageCollateral)

	// // Move storageCollateral from locked to available
	// balance := sma.Balances()[provider]

	// sma.Balances()[provider] = &StorageParticipantBalance_I{
	// 	Locked_:    balance.Locked() - storageCollateral,
	// 	Available_: balance.Available() + storageCollateral,
	// }

	// // Delete reference to the deal
	// delete(sma.Deals_, dealID)
	// }
	panic("TODO")
}

func (a *StorageMarketActorCode_I) ProcessStorageDealsPayment(rt Runtime, dealIDs []deal.DealID, duration block.ChainEpoch) {
	h, st := a.State(rt)

	for _, dealID := range dealIDs {
		st._processStorageDealPayment(rt, st.Deals()[dealID], duration)
	}

	UpdateRelease(rt, h, st)
}

func (a *StorageMarketActorCode_I) SlashStorageDealsCollateral(rt Runtime, dealIDs []deal.DealID) {
	// for _, dealID := range storageDealIDs {
	// 	faultStorageDeal := sma.Deals()[dealID]
	// TODO remove locked funds and send slashed fund to TreasuryActor
	// TODO provider lose power for the FaultSet but not PledgeCollateral
	// }
	panic("TODO")
}

// Called by StorageMinerActor at CommitSector
func (a *StorageMarketActorCode_I) GetLastDealExpirationFromDealIDs(rt Runtime, dealIDs []deal.DealID) block.ChainEpoch {

	h, st := a.State(rt)

	var lastDealExpiration block.ChainEpoch
	for _, dealID := range dealIDs {
		deal, found := st.Deals()[dealID]
		if !found {
			rt.Abort("dealID not found.")
		}

		// TODO: more checks or be convinced that it's enough to assume deals are still valid

		currExpiration := deal.Proposal().EndEpoch()
		if currExpiration > lastDealExpiration {
			lastDealExpiration = currExpiration
		}
	}

	Release(rt, h, st)

	return lastDealExpiration
}

func (a *StorageMarketActorCode_I) InvokeMethod(rt Runtime, method actor.MethodNum, params actor.MethodParams) InvocOutput {
	panic("TODO")
}
Storage Deal Collateral

Storage Deals have an associated collateral amount. This StorageDealCollateral is held in the StorageMarketActor. Its value is agreed upon by the storage provider and client off chain, but must be greater than a protocol-defined minimum in any deal. Storage providers may choose to offer greater collateral to signal high-quality storage to clients.

On SectorFailureTimeout (see Faults), the StorageDealCollateral will be burned. In the future, the Filecoin protocol may be amended to send up to half of the collateral to storage clients as damages in such cases.

Upon graceful deal expiration, storage providers must wait for finality number of epochs (as defined in EC Finality) before being able to withdraw their StorageDealCollateral from the StorageMarketActor.
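
The expiry rule in the last paragraph amounts to a simple epoch comparison. Here is a minimal sketch, where the function name and the finality constant are illustrative stand-ins, not protocol values:

const FINALITY = 500 // stand-in for the EC finality parameter, in epochs

// canWithdrawDealCollateral reports whether FINALITY epochs have passed
// since the deal gracefully expired at endEpoch.
func canWithdrawDealCollateral(currEpoch, endEpoch int64) bool {
	return currEpoch >= endEpoch+FINALITY
}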

Storage Provider

Both StorageProvider and StorageClient are StorageMarketParticipants. Any party can be a storage provider, a client, or both at the same time. Storage deal negotiation is expected to happen completely off chain, and the request-response style storage deal protocol is used to submit the agreed-upon storage deal onto the network and gain storage power on chain. StorageClient initiates the storage deal protocol by submitting a StorageDealProposal to the StorageProvider, who will then add the deal data to a Sector and commit the sector onto the blockchain.

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
import msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"

type StorageProvider struct {
    // StorageDealProposalCID: StorageDealStatus
    ProposalStatus  {deal.ProposalCID: StorageDealStatus}
    // DealCID: StorageDealStatus
    DealStatus      {deal.DealCID: StorageDealStatus}

    // libp2p listener on new StorageDealProposal
    OnNewStorageDealProposal(proposal deal.StorageDealProposal, payloadCID ipld.CID)

    // Called by StorageProvider to sign a proposal and construct the StorageDeal message
    signStorageDealProposal(proposal deal.StorageDealProposal) msg.Message

    // Called by StorageProvider to submit the message via sma.PublishStorageDeals
    publishStorageDealMessage(message msg.Message) deal.StorageDeal

    // Called by StorageProvider to accept a StorageDealProposal and notify StorageClient
    acceptStorageDealProposal(proposal deal.StorageDealProposal)

    // Called by StorageProvider to reject a StorageDealProposal and notify StorageClient
    rejectStorageDealProposal(proposal deal.StorageDealProposal)

    // Called by StorageProvider to check client balance and signature
    verifyStorageDealProposal(proposal deal.StorageDealProposal, payloadCID ipld.CID) bool

    // Check PieceCID(CommP) provided by StorageClient in StorageDealProposal
    // Provider needs to verify and reject deal if incorrect.
    // If on-chain CommP does not match actual piece, Seal proof will not verify.
    verifyPieceCID(pieceCID piece.PieceCID, payloadCID ipld.CID) bool

    // Called by StorageMiningSubsystem
    NotifyOfOnChainDealStatus(dealCID deal.DealCID, newStatus StorageDealStatus)

    // libp2p listener on receiving payload
    // TODO: take in dt.DataTransferVoucher as an argument
    OnReceivingPayload(payloadCID ipld.CID)

    // libp2p listener on storage deal proposal query
    OnStorageDealProposalQuery(proposalCID deal.ProposalCID) StorageDealStatus

    // libp2p listener on storage deal query
    OnStorageDealQuery(dealCID deal.DealCID) StorageDealStatus
}
package storage_market

import (
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
	deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
	msg "github.com/filecoin-project/specs/systems/filecoin_vm/message"
)

func (provider *StorageProvider_I) OnNewStorageDealProposal(proposal deal.StorageDealProposal, payloadCID ipld.CID) {

	_, found := provider.ProposalStatus()[proposal.CID()]
	if found {
		// TODO: return error
		return
	}

	var shouldReject bool // specified by StorageProvider
	if shouldReject {
		provider.rejectStorageDealProposal(proposal)
		return
	}

	if provider.verifyStorageDealProposal(proposal, payloadCID) {
		provider.acceptStorageDealProposal(proposal)
	} else {
		provider.rejectStorageDealProposal(proposal)
		return
	}

}

func (provider *StorageProvider_I) signStorageDealProposal(proposal deal.StorageDealProposal) msg.Message {
	// TODO: construct StorageDeal Message
	var storageDealMessage msg.Message

	// TODO: notify StorageClient StorageDealSigned
	return storageDealMessage
}

func (provider *StorageProvider_I) publishStorageDealMessage(message msg.Message) deal.StorageDeal {
	// TODO: send message to StorageMarketActor.PublishStorageDeal and get back DealID
	var dealID deal.DealID
	var dealCID deal.DealCID

	storageDeal := &deal.StorageDeal_I{
		ProposalMessage_: message,
		ID_:              dealID,
	}

	provider.DealStatus()[dealCID] = StorageDealPublished

	// TODO: notify StorageClient StorageDealPublished
	return storageDeal
}

func (provider *StorageProvider_I) acceptStorageDealProposal(proposal deal.StorageDealProposal) {
	provider.ProposalStatus()[proposal.CID()] = StorageDealProposalAccepted
	// TODO: notify StorageClient StorageDealAccepted
}

func (provider *StorageProvider_I) rejectStorageDealProposal(proposal deal.StorageDealProposal) {
	provider.ProposalStatus()[proposal.CID()] = StorageDealProposalRejected
	// TODO: notify StorageClient StorageDealRejected
}

func (provider *StorageProvider_I) verifyStorageDealProposal(proposal deal.StorageDealProposal, payloadCID ipld.CID) bool {
	// TODO make call to StorageMarketActor
	// balance, found := StorageMarketActor.Balances()[address]

	// if !found {
	// 	return false
	// }

	// if balance < price {
	// 	return false
	// }

	isPieceCIDVerified := provider.verifyPieceCID(proposal.PieceCID(), payloadCID)
	if !isPieceCIDVerified {
		// TODO: error out
		panic("TODO")
	}

	// TODO Check on Signature
	// return true
	panic("TODO")
}

func (provider *StorageProvider_I) verifyPieceCID(pieceCID piece.PieceCID, payloadCID ipld.CID) bool {
	panic("TODO")
	return false
}

func (provider *StorageProvider_I) NotifyOfOnChainDealStatus(dealCID deal.DealCID, newStatus StorageDealStatus) {
	_, found := provider.DealStatus()[dealCID]
	if found {
		provider.DealStatus()[dealCID] = newStatus
	}
}

// the entire payload graph is now in local IPLD store
// TODO: integrate with Data Transfer
func (provider *StorageProvider_I) OnReceivingPayload(payloadCID ipld.CID) {
	// TODO: get proposalCID from local storage
	var proposalCID deal.ProposalCID

	_, found := provider.ProposalStatus()[proposalCID]
	if !found {
		// TODO: error here
	}

	// TODO: get client addr from libp2p
	// TODO: get proposal from local storage
	var proposal deal.StorageDealProposal
	isProposalVerified := provider.verifyStorageDealProposal(proposal, payloadCID)

	if !isProposalVerified {
		provider.rejectStorageDealProposal(proposal)
		return
	}

	// StorageProvider can decide what to do here
	provider.signStorageDealProposal(proposal)

}

func (provider *StorageProvider_I) OnStorageDealProposalQuery(proposalCID deal.ProposalCID) StorageDealStatus {
	proposalStatus, found := provider.ProposalStatus()[proposalCID]

	if found {
		return proposalStatus
	}

	return StorageDealProposalNotFound
}

func (provider *StorageProvider_I) OnStorageDealQuery(dealCID deal.DealCID) StorageDealStatus {
	dealStatus, found := provider.DealStatus()[dealCID]

	if found {
		return dealStatus
	}

	return StorageDealNotFound
}

Storage Client

Both StorageProvider and StorageClient are StorageMarketParticipants. Any party can be a storage provider, a client, or both at the same time. Storage deal negotiation is expected to happen completely off chain, and the request-response style storage deal protocol is used to submit the agreed-upon storage deal onto the network and gain storage power on chain. StorageClient initiates the storage deal protocol by submitting a StorageDealProposal to the StorageProvider, who will then add the deal data to a Sector and commit the sector onto the blockchain.

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
import piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"

type StorageClient struct {
    // generate PieceCID(CommP) from payload
    // so that client can identify their deal
    generatePieceCID(payloadCID ipld.CID) piece.PieceCID

    // Called by StorageProvider to pull data with GraphSync
    PullPayload(payloadCID ipld.CID)

    // Called by StorageProvider to inform the client of StorageDealProposalStatus
    NotifyOfStorageDealProposalStatus(proposalCID deal.ProposalCID, status StorageDealStatus)

    // Called by StorageProvider to inform the client of StorageDealStatus
    NotifyOfStorageDealStatus(dealCID deal.DealCID, status StorageDealStatus)
}
package storage_market

import (
	ipld "github.com/filecoin-project/specs/libraries/ipld"
	piece "github.com/filecoin-project/specs/systems/filecoin_files/piece"
	deal "github.com/filecoin-project/specs/systems/filecoin_markets/deal"
)

func (client *StorageClient_I) generatePieceCID(payloadCID ipld.CID) piece.PieceCID {
	panic("TODO")
	var pieceCID piece.PieceCID
	return pieceCID
}

func (client *StorageClient_I) PullPayload(payloadCID ipld.CID) {
	panic("TODO")
}

func (client *StorageClient_I) NotifyOfStorageDealProposalStatus(proposalCID deal.ProposalCID, status StorageDealStatus) {
	panic("TODO")
}

func (client *StorageClient_I) NotifyOfStorageDealStatus(dealCID deal.DealCID, status StorageDealStatus) {
	panic("TODO")
}

Faults

There are two main categories of faults in the Filecoin network.

  • ConsensusFaults
  • StorageDealFaults

ConsensusFaults are faults that impact network consensus and StorageDealFaults are faults where data in a StorageDeal is not maintained by the providers pursuant to deal terms.

Pledge Collateral is slashed for ConsensusFaults, and Storage Deal Collateral is slashed for StorageDealFaults.

Any misbehavior may result in more than one fault, thus leading to slashing of both collaterals. For example, missing a PoStProof will incur a penalty on both PledgeCollateral and StorageDealCollateral, given that it impacts both a given StorageDeal and the power derived from the sector commitments in Storage Power Consensus.

Storage Faults

TODO: complete this.

Retrieval Market in Filecoin

Components

Version 0 of the retrieval market protocol is what we (tentatively) will launch the Filecoin network with. It is version zero because it will only be good enough to serve as a way to pay another node for a file.

The main components are as follows:

  • A payment channel actor (See payment channels for details)
  • ‘retrieval-v0’ libp2p services
  • A chain-based content routing interface
  • A set of commands to interact with the above

Retrieval V0 libp2p Services

The v0 retrieval market will initially be implemented as two libp2p services. It is request-response based: the client requesting a file sends a retrieval deal proposal to the miner. The miner chooses whether or not to accept it and sends their response, which (if they accept the proposal) includes a signed retrieval deal, followed by the actual requested content, streamed as a series of bitswap block messages using a pre-order traversal of the DAG. Each block should use the bitswap block message format. This way, the client can verify the data incrementally as it receives it. Once the client has received all the data, it should send a payment channel SpendVoucher of the proposed amount to the miner. This protocol may easily be extended to include payments from the client to the miner every N blocks, but for now we omit that feature.

Retrieval Client

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type RetrievalClient struct {
    CreatePaymentChannel(provider addr.Address, payment actor.TokenAmount) PaymentChannel
}

Retrieval Provider (Miner)

import actor "github.com/filecoin-project/specs/systems/filecoin_vm/actor"
import addr "github.com/filecoin-project/specs/systems/filecoin_vm/actor/address"

type PaymentChannel struct {}
type CID struct {}

// File Retrieval Query
type FileRetrievalAvailable struct {
    MinPrice  actor.TokenAmount
    Miner     addr.Address
}
type FileRetrievalUnavailable struct {}
type RetrievalQueryResponse union {
    FileRetrievalAvailable
    FileRetrievalUnavailable
}

type RetrievalQuery struct {
    File CID
}

// File Retrieval Deal Proposal and Deal
type RetrievalDealProposalError struct {}
type RetrievalDealProposalRejected struct {}
type RetrievalDealProposalAccepted struct {
    CounterParty  addr.Address
    Payment       PaymentChannel
}
type RetrievalDealProposalResponse union {
    RetrievalDealProposalAccepted
    RetrievalDealProposalRejected
    RetrievalDealProposalError
}
type RetrievalDealProposal struct {
    File      CID
    Payment   PaymentChannel
    MinPrice  actor.TokenAmount
}

type RetrievalProvider struct {
    NewRetrievalQuery(query RetrievalQuery) RetrievalQueryResponse

    // NewRetrievalDealProposal is called to propose a retrieval
    NewRetrievalDealProposal(proposal RetrievalDealProposal) RetrievalDealProposalResponse

    // AcceptRetrievalDeal is called to accept a retrieval deal
    AcceptRetrievalDealProposal(deal RetrievalDealProposal) RetrievalDealProposalResponse
}
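
To illustrate how these types fit together, here is a minimal provider-side sketch in Go. Because Go lacks the DSL's union types, a tagged struct stands in for RetrievalQueryResponse, and hasFile/priceFor are hypothetical helpers:

type QueryResponse struct {
	Available bool
	MinPrice  uint64 // attoFIL; stand-in for actor.TokenAmount
}

// handleQuery answers a retrieval query: advertise a minimum price if the
// file is held locally, otherwise report it unavailable.
func handleQuery(file string, hasFile func(string) bool, priceFor func(string) uint64) QueryResponse {
	if !hasFile(file) {
		return QueryResponse{Available: false}
	}
	return QueryResponse{Available: true, MinPrice: priceFor(file)}
}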

Libraries used in Filecoin

FCS


IPLD - InterPlanetary Linked Data

type Store GraphStore

// imported as ipld.Object
type Object interface {
    CID() CID

    // Populate(v interface{}) error
}

CIDs - Content IDentifiers

type BytesKey string  // so that we can use it in go maps

type CID BytesKey  // TODO: remove util.

Data Model

Selectors - IPLD Query Language

// This is a compression of the IPLD Selector Spec
// Full spec: https://github.com/ipld/specs/blob/master/selectors/selectors.md

type Selector union {
    Matcher
    ExploreAll
    ExploreFields
    ExploreIndex
    ExploreRange
    ExploreRecursive
    ExploreUnion
    ExploreConditional
    ExploreRecursiveEdge
}

// ExploreAll is similar to a `*` -- it traverses all elements of an array,
// or all entries in a map, and applies a next selector to the reached nodes.
type ExploreAll struct {
    next Selector
}

// ExploreFields traverses named fields in a map (or equivalently, struct, if
// traversing on typed/schema nodes) and applies a next selector to the
// reached nodes.
//
// Note that a concept of exploring a whole path (e.g. "foo/bar/baz") can be
// represented as a set of three nested ExploreFields selectors, each
// specifying one field.
type ExploreFields struct {
    fields {string: Selector}
}

// ExploreIndex traverses a specific index in a list, and applies a next
// selector to the reached node.
type ExploreIndex struct {
    index  UInt
    next   Selector
}

// ExploreRange traverses a list, and for each element in the range specified,
// will apply a next selector to those reached nodes.
type ExploreRange struct {
    start  UInt
    end    UInt
    next   Selector
}

// ExploreRecursive traverses some structure recursively.
// To guide this exploration, it uses a "sequence", which is another Selector
// tree; some leaf node in this sequence should contain an ExploreRecursiveEdge
// selector, which denotes the place recursion should occur.
//
// In implementation, whenever evaluation reaches an ExploreRecursiveEdge marker
// in the recursion sequence's Selector tree, the implementation logically
// produces another new Selector which is a copy of the original
// ExploreRecursive selector, but with a decremented maxDepth parameter, and
// continues evaluation thusly.
//
// It is not valid for an ExploreRecursive selector's sequence to contain
// no instances of ExploreRecursiveEdge; it *is* valid for it to contain
// more than one ExploreRecursiveEdge.
//
// ExploreRecursive can contain a nested ExploreRecursive!
// This is comparable to a nested for-loop.
// In these cases, any ExploreRecursiveEdge instance always refers to the
// nearest parent ExploreRecursive (in other words, ExploreRecursiveEdge can
// be thought of like the 'continue' statement, or end of a for-loop body;
// it is *not* a 'goto' statement).
//
// Be careful when using ExploreRecursive with a large maxDepth parameter;
// it can easily cause very large traversals (especially if used in combination
// with selectors like ExploreAll inside the sequence).
type ExploreRecursive struct {
    sequence  Selector
    maxDepth  UInt
    stopAt    Condition
}

// ExploreRecursiveEdge is a special sentinel value which is used to mark
// the end of a sequence started by an ExploreRecursive selector: the recursion
// goes back to the initial state of the earlier ExploreRecursive selector,
// and proceeds again (with a decremented maxDepth value).
//
// An ExploreRecursive selector that doesn't contain an ExploreRecursiveEdge
// is nonsensical.  Containing more than one ExploreRecursiveEdge is valid.
// An ExploreRecursiveEdge without an enclosing ExploreRecursive is an error.
type ExploreRecursiveEdge struct {}

// ExploreUnion allows selection to continue with two or more distinct selectors
// while exploring the same tree of data.
//
// ExploreUnion can be used to apply a Matcher on one node (causing it to
// be considered part of a (possibly labelled) result set), while simultaneously
// continuing to explore deeper parts of the tree with another selector,
// for example.
type ExploreUnion [Selector]

// Note that ExploreConditional versus a Matcher with a Condition are distinct:
// ExploreConditional progresses deeper into a tree;
// whereas a Matcher with a Condition may look deeper to make its decision,
// but returns a match for the node it's on rather than any of the deeper values.
type ExploreConditional struct {
    condition  Condition
    next       Selector
}

// Matcher marks a node to be included in the "result" set.
// (All nodes traversed by a selector are in the "covered" set (which is a.k.a.
// "the merkle proof"); the "result" set is a subset of the "covered" set.)
//
// In libraries using selectors, the "result" set is typically provided to
// some user-specified callback.
//
// A selector tree with only "explore*"-type selectors and no Matcher selectors
// is valid; it will just generate a "covered" set of nodes and no "result" set.
type Matcher struct {
    onlyIf  Condition?  // match is true based on position alone if this is not set.
    label   string?  // labels can be used to match multiple different structures in one selection.
}

// Condition expresses a predicate with a boolean result.
//
// Condition clauses are used several places:
//   - in Matcher, to determine if a node is selected.
//   - in ExploreRecursive, to halt exploration.
//   - in ExploreConditional, to determine whether exploration continues deeper.
//
// TODO -- Condition is very skeletal and incomplete.
// The place where Condition appears in other structs is correct;
// the rest of the details inside it are not final nor even completely drafted.
type Condition union {
    // We can come back to this and expand it later...
    // TODO: figure out how to make this recurse correctly, so I can say "hasField{hasField{or{hasValue{1}, hasValue{2}}}}".
    Condition_HasField
    Condition_HasValue
    Condition_HasKind
    Condition_IsLink
    Condition_GreaterThan
    Condition_LessThan
    Condition_And
    Condition_Or
    // REVIEW: since we introduced "and" and "or" here, we're getting into dangertown again.  we'll need a "max conditionals limit" (a la 'gas' of some kind) near here.
}

type Condition_HasField struct {}
type Condition_HasKind struct {}
type Condition_HasValue struct {}
type Condition_And struct {}
type Condition_GreaterThan struct {}
type Condition_IsLink struct {}
type Condition_LessThan struct {}
type Condition_Or struct {}
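
To make the composition of these selector types concrete, here is a small Go sketch. The Go types below are hypothetical mirrors of the schema above (not a real selector library), and names like Fields and MaxDepth are illustrative. It builds a selector that walks the path "foo/bar" as two nested ExploreFields, then recursively follows a "children" field up to depth 3, matching every node reached:

package main

import "fmt"

// Hypothetical Go mirrors of the schema types above, for illustration only;
// this is not a real selector library's API.
type Selector interface{ isSelector() }

type Matcher struct{}
type ExploreRecursiveEdge struct{}
type ExploreFields struct{ Fields map[string]Selector }
type ExploreUnion []Selector
type ExploreRecursive struct {
    Sequence Selector
    MaxDepth uint
}

func (Matcher) isSelector()              {}
func (ExploreRecursiveEdge) isSelector() {}
func (ExploreFields) isSelector()        {}
func (ExploreUnion) isSelector()         {}
func (ExploreRecursive) isSelector()     {}

func main() {
    // Walk the path "foo/bar" with two nested ExploreFields, then follow the
    // "children" field recursively up to depth 3. The ExploreUnion both
    // Matches each node reached (adding it to the result set) and continues
    // exploring; ExploreRecursiveEdge marks where recursion re-enters.
    sel := ExploreFields{Fields: map[string]Selector{
        "foo": ExploreFields{Fields: map[string]Selector{
            "bar": ExploreRecursive{
                MaxDepth: 3,
                Sequence: ExploreUnion{
                    Matcher{},
                    ExploreFields{Fields: map[string]Selector{
                        "children": ExploreRecursiveEdge{},
                    }},
                },
            },
        }},
    }}
    fmt.Printf("%#v\n", sel)
}

Note that ExploreRecursiveEdge sits at a leaf of the recursion sequence, as required above, and that ExploreUnion lets the Matcher and the deeper exploration both apply to the same node.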

GraphStore - IPLD Data Storage

// imported as ipld.Store
type GraphStore struct {
    Get(cid CID)   union {o Object, err error}
    Put(o Object)  union {cid CID, err error}
}
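
As a rough illustration of this Get/Put contract (a toy sketch, not the real implementation), the following in-memory store in Go uses a raw SHA-256 digest as a stand-in for a proper multihash-based CID:

package main

import (
    "crypto/sha256"
    "errors"
    "fmt"
)

// Toy stand-ins: a real implementation stores IPLD objects addressed by
// proper multihash-based CIDs, not raw SHA-256 digests.
type CID [32]byte
type Object []byte

type GraphStore struct {
    blocks map[CID]Object
}

func NewGraphStore() *GraphStore {
    return &GraphStore{blocks: make(map[CID]Object)}
}

// Put stores the object and returns the CID it can later be fetched by.
func (gs *GraphStore) Put(o Object) (CID, error) {
    cid := CID(sha256.Sum256(o))
    gs.blocks[cid] = o
    return cid, nil
}

// Get returns the object previously stored under cid, or an error.
func (gs *GraphStore) Get(cid CID) (Object, error) {
    o, ok := gs.blocks[cid]
    if !ok {
        return nil, errors.New("not found")
    }
    return o, nil
}

func main() {
    gs := NewGraphStore()
    cid, _ := gs.Put(Object("hello ipld"))
    o, err := gs.Get(cid)
    fmt.Println(string(o), err)
}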

libp2p

import ipld "github.com/filecoin-project/specs/libraries/ipld"
import mf "github.com/filecoin-project/specs/libraries/multiformats"

// PeerID is the CID of the public key of this peer
type PeerID ipld.CID

// PeerInfo is a simple datastructure that relates PeerIDs to corresponding partial Multiaddrs.
// This is a convenience struct used in interfaces where we must specify both, or may specify
// either.
type PeerInfo struct {
    PeerID
    Addrs [mf.Multiaddr]
}

type Node struct {
    // PeerID returns the PeerID associated with this libp2p Node
    PeerID() PeerID

    // MountProtocol mounts the given Protocol under the specified protocol path.
    MountProtocol(path ProtocolPath, protocol Protocol)

    // Connect establishes a connection to the peer matching the given PeerInfo.
    //
    // PeerInfo.Addrs may be empty. If so:
    // - Libp2pNode will try to use any Multiaddrs it knows (internal PeerStore)
    // - Libp2pNode may use any `PeerRouting` protocol mounted onto the libp2p node.
    //     TODO: how to define this.
    //     NOTE: probably implies using kad-dht or gossipsub for this.
    //
    // Idempotent. If a connection already exists, this method returns silently.
    Connect(peerInfo PeerInfo)
}

type ProtocolPath string

type Protocol union {
    StreamProtocol
    DatagramProtocol
}

// Stream is an interface to deal with networked processes, which communicate
// via streams of bytes.
//
// See golang.org/pkg/io -- as this is modelled after io.Reader and io.Writer
type Stream struct {
    // Read reads bytes from the underlying stream and copies them to buf.
    // Read returns the number of bytes read (n), and potentially an error
    // encountered while reading. Read reads at most len(buf) bytes.
    // Read may read 0 bytes.
    Read(buf Bytes) union {n int, err error}

    // Write writes bytes to the underlying stream, copying them from buf.
    // Write returns the number of bytes written (n), and potentially an error
    // encountered while writing. Write writes at most len(buf) bytes.
    // Write may write 0 bytes.
    Write(buf Bytes) union {n int, err error}

    // Close terminates client's use of the stream.
    // Calling Read or Write after Close is an error.
    Close() error
}

type StreamProtocol struct {
    // AcceptStream accepts an incoming stream connection.
    AcceptStream() struct {
        stream    Stream
        peerInfo  PeerInfo
        err       error
    }

    // OpenStream opens a stream to a particular PeerID.
    OpenStream(peerInfo PeerInfo) struct {
        stream  Stream
        err     error
    }
}

// Datagram
type Datagram Bytes

// Datagrams are "messages" in the network packet sense of the word.
//
// "message-oriented network protocols" should use this interface,
// not the StreamProtocol interface.
//
// We call it "Datagram" here because unfortunately the word "Message"
// is very overloaded in Filecoin.
// Suggestion for libp2p: use datagram too.
type DatagramProtocol struct {
    // AcceptDatagram accepts an incoming message.
    AcceptDatagram() struct {
        datagram  Datagram
        peerInfo  PeerInfo
        err       error
    }

    // SendDatagram sends a datagram to the peer identified by peerInfo.
    SendDatagram(datagram Datagram, peerInfo PeerInfo) struct {err error}
}

// type StorageDealLibp2pProtocol struct {
//   StreamProtocol StreamProtocol
//   // ---
//   AcceptStream() struct {}
//   OpenStream() struct {}
// }
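
Since Stream deliberately mirrors Go's io.Reader and io.Writer, short reads and zero-byte reads are part of the contract, and callers must loop until EOF. A non-normative Go sketch, using the plain io interfaces in place of a real libp2p stream:

package main

import (
    "fmt"
    "io"
    "strings"
)

// echo copies bytes back to the sender until EOF. Because Read may return
// fewer bytes than requested (or zero), the copy must loop; io.Copy
// encapsulates exactly that contract.
func echo(s io.ReadWriter) error {
    _, err := io.Copy(s, s)
    return err
}

// readAll drains a stream manually, illustrating the short-read handling
// that the Stream.Read contract above requires of callers.
func readAll(s io.Reader) ([]byte, error) {
    var out []byte
    buf := make([]byte, 64)
    for {
        n, err := s.Read(buf) // n may be anywhere in [0, len(buf)]
        out = append(out, buf[:n]...)
        if err == io.EOF {
            return out, nil
        }
        if err != nil {
            return out, err
        }
    }
}

func main() {
    data, err := readAll(strings.NewReader("hello libp2p"))
    fmt.Println(string(data), err)
}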

Gossipsub for broadcasts

Kademlia DHT for Peer Routing

Filecoin libp2p Nodes

IPFS - InterPlanetary File System

BitSwap

GraphSync

UnixFS

Multiformats - self-describing protocol values

Multihash - self-describing hash values

type Multihash Bytes

Multiaddr - self-describing network addresses

type Multiaddr Bytes

Algorithms

Expected Consensus

Algorithm

Expected Consensus (EC) is a probabilistic Byzantine fault-tolerant consensus protocol. At a high level, it operates by running a leader election every round in which, on expectation, one participant may be eligible to submit a block. EC guarantees that this winner will be anonymous until they reveal themselves by submitting a proof of their election (we call this proof an Election Proof). All valid blocks submitted in a given round form a Tipset. Every block in a Tipset adds weight to its chain. The ‘best’ chain is the one with the highest weight, which is to say that the fork choice rule is to choose the heaviest known chain. For more details on how to select the heaviest chain, see Chain Selection.

At a very high level, with every new block generated, a miner will craft a new ticket from the prior one in the chain, appended with the current epoch number (i.e. parentTipset.epoch + 1 to start). While on expectation at least one block will be generated in every round, in cases where no one finds a block in a given round, a miner will increment the round number and attempt a new leader election (using the new input), thereby ensuring liveness in the protocol.

The Storage Power Consensus subsystem relies on EC for the following facilities:

- Access to verifiable randomness for the protocol, derived from Tickets.
- Running and verifying leader election for block generation.
- Access to a weighting function enabling Chain Selection by the chain manager.
- Access to the most recently finalized tipset available to all protocol participants.

Tickets

For leader election in EC, participants win in proportion to the power they have within the network.

A ticket is drawn from the past at the beginning of each new round to perform leader election. EC also generates a new ticket in every round for future use. Tickets are chained independently of the main blockchain. A ticket only depends on the ticket before it, and not any other data in the block. On expectation, in Filecoin, every block header contains one ticket, though it could contain more if that block was generated over multiple rounds.

Tickets are used across the protocol as sources of randomness:

- The Sector Sealer uses tickets to bind sector commitments to a given subchain.
- The Storage Miner likewise uses tickets to prove sectors remain committed as of a given block.
- EC uses them to run leader election and generates new ones for use by the protocol, as detailed below.

You can find the Ticket data structure here.

Comparing Tickets in a Tipset

Whenever ticket comparison is invoked in Filecoin, for instance when selecting the “min ticket” in a Tipset, the comparison is that of the little-endian representation of the ticket’s VRFOutput bytes.
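
As a sketch of that comparison (assuming fixed, equal-length ticket outputs, as a fixed-size VRF would produce; not normative code), the bytes are compared as little-endian integers, i.e. from the most significant, last byte down:

package main

import "fmt"

// lessLE reports whether a < b when both byte slices are interpreted as
// little-endian unsigned integers (most significant byte last).
// Assumes equal-length tickets, as a fixed-size VRF output would produce.
func lessLE(a, b []byte) bool {
    for i := len(a) - 1; i >= 0; i-- {
        if a[i] != b[i] {
            return a[i] < b[i]
        }
    }
    return false // equal
}

// minTicket picks the "min ticket" among a tipset's ticket outputs.
func minTicket(tickets [][]byte) []byte {
    min := tickets[0]
    for _, t := range tickets[1:] {
        if lessLE(t, min) {
            min = t
        }
    }
    return min
}

func main() {
    tickets := [][]byte{{0x02, 0x01}, {0xff, 0x00}, {0x01, 0x01}}
    fmt.Printf("%x\n", minTicket(tickets)) // ff00, i.e. 0x00ff little-endian
}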

Tickets in EC

Within EC, a miner generates a new ticket in their block for every ticket they use running leader election, thereby ensuring the ticket chain is always as long as the block chain.

Tickets are used to achieve the following:

- Ensure leader secrecy – meaning a block producer will not be known until they release their block to the network.
- Prove leader election – meaning a block producer can be verified by any participant in the network.

In practice, EC defines two different fields within a block:

  • A Ticket field β€” this stores the new ticket generated during this block generation attempt. It is from this ticket that miners will sample randomness to run leader election in K rounds.
  • An ElectionProof β€” this stores a proof that a given miner has won a leader election using the appropriate ticket K rounds back appended with the current epoch number. It proves that the leader was validly elected in this epoch.


But why the randomness lookback?

The randomness lookback helps turn independent ticket generation from a block one round back into a global ticket generation game instead. Rather than having a distinct chance of winning or losing for each potential fork in a given round, a miner will either win on all, or lose on all, forks descended from the block in which the ticket is sampled.

This is useful as it reduces opportunities for grinding across forks or sybil identities.

However, this introduces a tradeoff:

- The randomness lookback means that a miner can know K rounds in advance that they will win, decreasing the cost of running a targeted attack (given they have local predictability).
- It means ElectionProofs are stored separately from new tickets on a block, taking up more space on-chain.

How is K selected?

- On the one end, there is no advantage to picking K larger than finality.
- On the other, making K smaller reduces adversarial power to grind.

Ticket generation


This section discusses how tickets are generated by EC for the Ticket field.

At round N, a new ticket is generated using tickets drawn from the Tipset at round N-1 (for more on how tickets are drawn see Ticket Chain).

The miner runs the prior ticket through a Verifiable Random Function (VRF) to get a new unique output.

The VRF’s deterministic output adds entropy to the ticket chain, limiting a miner’s ability to alter one block to influence a future ticket (given a miner does not know who will win a given round in advance).

We use the VRF from Verifiable Random Function for ticket generation in EC (see the PrepareNewTicket method below).

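As a hedged sketch of ticket generation, the following Go snippet uses HMAC-SHA256 as a stand-in for the VRF: like a VRF it is deterministic and keyed by the miner’s secret, but unlike a VRF its output is not publicly verifiable, so this is illustration only and not the spec’s actual PrepareNewTicket:

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "fmt"
)

// prepareNewTicket derives this round's ticket from the parent ticket.
// HMAC-SHA256 stands in for the VRF: deterministic and keyed, but its
// output cannot be verified against a public key, so illustration only.
func prepareNewTicket(minerSecret, parentTicket []byte) []byte {
    mac := hmac.New(sha256.New, minerSecret)
    mac.Write(parentTicket)
    return mac.Sum(nil)
}

func main() {
    parent := []byte("parent ticket bytes")
    fmt.Printf("%x\n", prepareNewTicket([]byte("miner secret"), parent))
}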

Ticket Validation

Each Ticket should be generated from the prior one in the ticket-chain and verified accordingly as shown in validateTicket below.

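A hedged sketch of that check, where vrfVerify is a hypothetical placeholder for a real VRF verification routine:

package main

import "fmt"

// vrfVerify is a hypothetical placeholder: a real implementation would
// cryptographically verify that output is the VRF evaluation of input
// under the secret key corresponding to pubKey.
func vrfVerify(pubKey, input, output []byte) bool {
    return len(output) > 0 // placeholder only
}

// validateTicket checks that a block's ticket is the VRF output of the
// block producer's key applied to the prior ticket in the ticket chain.
func validateTicket(minerPubKey, parentTicket, ticket []byte) bool {
    return vrfVerify(minerPubKey, parentTicket, ticket)
}

func main() {
    fmt.Println(validateTicket([]byte("pk"), []byte("parent"), []byte("t")))
}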

Secret Leader Election

Expected Consensus is a consensus protocol that works by electing a miner from a weighted set in proportion to their power. In the case of Filecoin, participants and powers are drawn from the Power Table, where power is equivalent to storage provided through time.

Leader Election in Expected Consensus must be Secret, Fair and Verifiable. This is achieved through the use of randomness used to run the election. In the case of Filecoin’s EC, the blockchain tracks an independent ticket chain. These tickets are used as randomness inputs for Leader Election. Every block generated references an ElectionProof derived from a past ticket. The ticket chain is extended by the miner who generates a new block for each successful leader election.

Running a leader election

Alongside generating a new ticket, a miner must also check whether they are eligible to mine a block in this round.

Design goals here include:

- There should be one block per miner per epoch at most (for simplicity).
- Miners should be rewarded proportional to their power in the system.
- The system should be able to tune how many blocks are put out per epoch on expectation (hence “expected consensus”).

To do so, the miner will use tickets from K rounds back as randomness to uniformly draw a value from 0 to 1. Comparing this value to their power, they determine whether they are eligible to mine. A user’s power is defined as the ratio of the amount of storage they proved as of their last PoSt submission to the total storage in the network as of the current block.

We use the VRF from Verifiable Random Function to run leader election in EC.

If the miner wins the election in this round, it can use newEP, along with a newTicket, to generate and publish a new block. Otherwise, it waits to hear of another block generated in this round.

In short, the process of crafting a new ElectionProof in round N is as follows in the DrawElectionProof function:

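A hedged sketch of that process, with HMAC-SHA256 again standing in for a real, publicly verifiable VRF (this is not the spec’s actual DrawElectionProof):

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/binary"
    "fmt"
)

// drawElectionProof evaluates the miner's keyed VRF (HMAC-SHA256 as a
// stand-in) over the ticket from K rounds back, appended with the current
// epoch number, yielding a candidate ElectionProof.
func drawElectionProof(minerSecret, lookbackTicket []byte, epoch uint64) []byte {
    mac := hmac.New(sha256.New, minerSecret)
    mac.Write(lookbackTicket)
    var e [8]byte
    binary.BigEndian.PutUint64(e[:], epoch)
    mac.Write(e[:])
    return mac.Sum(nil)
}

func main() {
    ep := drawElectionProof([]byte("miner secret"), []byte("ticket K back"), 42)
    fmt.Printf("%x\n", ep)
}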

It is important to note that every block contains two artifacts: one, a ticket derived from the last block’s ticket to extend the ticket-chain, and two, an election proof derived from the ticket K rounds back used to run leader election.

Note: Miner power is drawn from the power table, accounting only for power that has been proven over time (see Power Table).

The miner can then check whether they drew a winning election proof by comparing their power fraction to the ElectionProof’s value, as follows:

ElectionProof * TotalPower < 2^len(ElectionProof) * MinerPower

where len(ElectionProof) is the bit length of the proof’s digest. Equivalently, interpreting the ElectionProof as a uniformly distributed value in [0, 2^len), the miner wins whenever that value, normalized to [0, 1), falls below their fraction of the network’s total power.
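
In integer arithmetic, that check can be sketched as follows (illustrative only; the digest is treated as a big-endian unsigned integer):

package main

import (
    "fmt"
    "math/big"
)

// isWinner sketches the eligibility check: interpreting the election proof
// digest as a uniform integer in [0, 2^(8*len(proof))), the miner wins iff
// proof/2^len < minerPower/totalPower, rearranged to avoid division.
func isWinner(proof []byte, minerPower, totalPower *big.Int) bool {
    lhs := new(big.Int).SetBytes(proof) // proof digest as an integer
    lhs.Mul(lhs, totalPower)            // ElectionProof * TotalPower
    rhs := new(big.Int).Lsh(minerPower, uint(len(proof))*8) // 2^len * MinerPower
    return lhs.Cmp(rhs) < 0
}

func main() {
    proof := []byte{0x10, 0x00} // tiny digest for illustration: 4096 of 65536
    fmt.Println(isWinner(proof, big.NewInt(1), big.NewInt(10))) // 0.0625 < 0.1
}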