
Status

Accepted

Context

We are in search of a means to describe our technical architecture.

We are a small team working in a very lean and agile way (XP), so we naturally prefer lightweight documentation methods which accommodate change easily.

Decision

  • We will use Architecture Decision Records, as described by Michael Nygard in this article.
  • We will follow the convention of storing those ADRs as Markdown-formatted documents under the docs/adr directory, as exemplified in Nat Pryce's adr-tools. This does not imply we will be using adr-tools itself.

Consequences

See Michael Nygard's article, linked above.


Status

Accepted

Context

We are looking for a way of expressing the Hydra Head protocol logic in a Hydra node.

The Hydra Head protocol is defined in the paper as a state machine whose transitions are triggered by inputs coming from different sources, and which can emit outputs to other instances of the state machine or to the mainchain. See the FC2021 paper for details.

It should also be easy to review and to feed back to researchers.

We are familiar with the React/Redux way of structuring applications, which in turn is inspired by The Elm Architecture, which itself is a simplification of Functional Reactive Programming principles.

We have experienced benefits with Event Sourcing in the domain of persistence in the past.

Decision

Implement the Hydra Head core logic as a loop that (see the sketch after this list):

  1. Consumes input events from an event queue,
  2. Applies each event to the current state, yielding potentially an updated state and a sequence of effects,
  3. Executes all effects.
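
As a rough illustration, a minimal Haskell sketch of such a loop could look as follows; the Event, Effect and HeadState types and the update function are hypothetical stand-ins, not the actual Hydra code.

import Control.Concurrent.STM (TQueue, atomically, readTQueue)

-- Hypothetical stand-ins for the real Hydra types.
data Event = ClientInput String | NetworkMessage String
data Effect = ClientOutput String | NetworkBroadcast String
newtype HeadState = HeadState [String]

-- Pure state-machine transition: given the current state and an input event,
-- produce the next state and a sequence of effects to execute.
update :: HeadState -> Event -> (HeadState, [Effect])
update (HeadState seen) ev = case ev of
  ClientInput msg -> (HeadState (msg : seen), [NetworkBroadcast msg])
  NetworkMessage msg -> (HeadState (msg : seen), [ClientOutput msg])

-- Impure driver: consume events from the queue, apply the pure transition,
-- then execute the resulting effects.
runHeadLogic :: TQueue Event -> (Effect -> IO ()) -> HeadState -> IO ()
runHeadLogic queue runEffect = go
 where
  go st = do
    ev <- atomically (readTQueue queue)
    let (st', effects) = update st ev
    mapM_ runEffect effects
    go st'

Keeping update pure is what makes the transition function reviewable and testable without any IO.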

Consequences

The internal state is only ever changed through Events.

The core state machine transition function is pure and reviewing it requires minimal Haskell knowledge.

Side-effects are all handled at the level of the Node.


Status

Accepted

Context

The reactive nature of the Hydra node means that clients produce a stream of inputs to a node, which in turn issues a stream of outputs representing the outcome of previous inputs or resulting from interactions with peers in the network.

For example, a client may send a command as input, upon which the node might do something. When that something is finished, an output indicates so. However, an output might also be emitted to the client when another peer interacted with "our" node.

Queries, i.e. messages from clients which only fetch information from the node, are not in the scope of this ADR.

Decision

  • We use a single, full-duplex communication channel per client connected to a Hydra node.
  • This is implemented using a simple WebSocket with messages corresponding to inputs and outputs (see the sketch below).
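
For illustration, a minimal client for such a channel could look as follows, here in Haskell using the websockets library; the host, port, path and message payload are assumptions, not the actual Hydra API.

import Control.Monad (forever)
import qualified Data.Text as T
import qualified Data.Text.IO as T
import Network.WebSockets (receiveData, runClient, sendTextData)

main :: IO ()
main =
  -- Host, port and path are illustrative; a hydra-node exposes its own endpoint.
  runClient "127.0.0.1" 4001 "/" $ \conn -> do
    -- Send one JSON-encoded input (the payload shown here is made up) ...
    sendTextData conn (T.pack "{\"tag\":\"SomeInput\"}")
    -- ... then print every output the node pushes, whenever it arrives.
    forever $ do
      output <- receiveData conn
      T.putStrLn output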

Consequences

  • Clients needing a synchronous API need to implement it on top
  • Clients can receive outputs decorrelated from any inputs and at any time


Status

Accepted

Context

Given we are structuring the Hydra node around a reactive core, we need a way to ensure a strict separation of pure and impure (or effectful) code.

We want to be able to test the impure/effectful parts of the code. This requires a means of swapping out actual implementations, e.g. the function used to send messages over a network.

We also want the ability to swap implementations not only for testing, but to accommodate different usage scenarios, e.g. using a different middleware depending on peer configuration.

In Haskell there are various common patterns to model effects:

  • Tagless final encoding, also known as MTL-style (although using typeclasses is not strictly necessary), whereby effects are expressed as typeclasses which are propagated as constraints.
  • Free monads, or any variant thereof like Eff, freer, or extensible-effects, whereby effects are expressed as ADTs which are interpreted in the context of an effect stack.
  • Handle pattern, also known as record-of-functions, whereby effects are grouped together in a datatype with a single record constructor.

(These tradeoffs also appear in other functional languages like F#)

There is no single most-favored solution, though, and we all have varying experience with these techniques.

Decision

Effectful components of the Hydra node (our code) will be defined using the Handle pattern.

There might be other techniques in use because of the libraries we depend on, etc.

Consequences

For example, the network component is defined as:

newtype Network m = Network
  { broadcast :: MonadThrow m => HydraMessage -> m ()
  }

There might be multiple createNetwork :: m (Network m) functions providing different implementations.
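
For illustration, such a handle could be constructed as follows; the dummy implementation is hypothetical and drops the MonadThrow constraint for brevity.

import Control.Monad.IO.Class (MonadIO, liftIO)

data HydraMessage = ReqTx String | AckTx String
  deriving (Show)

newtype Network m = Network
  { broadcast :: HydraMessage -> m ()
  }

-- A stand-in implementation that just prints messages, e.g. for tests; a real
-- createNetwork would instead set up sockets or another transport.
createDummyNetwork :: MonadIO m => m (Network m)
createDummyNetwork =
  pure (Network{broadcast = \msg -> liftIO (print msg)})

Swapping implementations then amounts to passing a different Network value to the consuming code.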


Status

Accepted

Context

Although we try to contain the use of IO at the outskirts of the Hydra node, using the Handle pattern and a reactive core, low-level effects are still needed in various places, notably to define concurrently executing actions, and thus need to be tested.

Testing asynchronous and concurrent code is notoriously painful.

The ouroboros-consensus test suite and the hydra-sim simulation have demonstrated the effectiveness of abstracting concurrency primitives behind typeclasses (MTL-style pattern) and of running such code purely, harvesting and analysing the produced execution traces.

There are other such libraries, e.g. concurrency and dejafu, as well as the venerable exceptions (for abstracting exception throwing).

Decision

For all IO effects covered by the library, use functions from typeclasses exposed by io-classes. As of this writing, this covers:

  • All STM operations through MonadSTM
  • Time and timers through MonadTime and MonadTimer
  • Concurrency through MonadAsync, MonadFork
  • Exceptions through MonadThrow, MonadCatch and MonadMask

Consequences

We can use io-sim to evaluate IO-ish functions easily, purely and deterministically.
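
For example, a function written only against these typeclasses can be run both in IO and, purely, in IOSim. The following is a sketch assuming the io-classes and io-sim module layout, not actual Hydra code.

import Control.Monad.Class.MonadSTM (MonadSTM, atomically, modifyTVar', newTVarIO, readTVar)
import Control.Monad.IOSim (runSimOrThrow)

-- A tiny "IO-ish" function written only against io-classes' MonadSTM.
bumpTwice :: MonadSTM m => m Int
bumpTwice = do
  var <- newTVarIO (0 :: Int)
  atomically (modifyTVar' var (+ 1))
  atomically (modifyTVar' var (+ 1))
  atomically (readTVar var)

main :: IO ()
main = do
  n <- bumpTwice -- instantiated at IO
  print (n, runSimOrThrow bumpTwice) -- instantiated at IOSim, evaluated purely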

Instantiation to concrete IO is pushed to the outermost layer, e.g. in Main or in the tests.

As some of these functions and typeclasses clash with the cardano-prelude, we might want to define a custom prelude (a candidate for another ADR).


Status

Accepted

Context

The simplified Head protocol in the Hydra paper requires unicast and multicast messaging between participants. However, this can be simplified to only multicast by also sending AckTx messages to all participants and removing the necessity for ConfTx.

There is already a battle-tested implementation for broadcasting messages over networks with any kind of topology (mesh), namely the TxSubmission protocol of ouroboros-network.

If the network connects only to interested peers, broadcast is essentially the multicast required by the protocol. If this is not the case, some addressing scheme is required and broadcast would be a waste of resources.

Decision

  • All messages emitted by a Hydra node through the Network component are broadcast to all nodes in the network.
  • This implies that the emitting node also receives its own messages (see the sketch below).
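
A hedged sketch of what this could look like at the level of the Network interface follows; the names are hypothetical, not the actual Hydra code.

data HydraMessage = ReqTx String | AckTx String

newtype Network m = Network
  { broadcast :: HydraMessage -> m ()
  }

-- Wrap an underlying network so that every broadcast message is also delivered
-- back to the emitting node itself, via its local callback.
withSelfDelivery :: Monad m => (HydraMessage -> m ()) -> Network m -> Network m
withSelfDelivery deliver underlying =
  Network{broadcast = \msg -> broadcast underlying msg >> deliver msg}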

Consequences

  • The network layer is responsible for ensuring that sent messages effectively reach all nodes in the network. How this is achieved is left as an implementation detail, i.e. whether it uses relaying or not.
  • We need to make sure all Head participants are connected to the same network.


Status

Accepted

Context

The with pattern, or bracket pattern, is a functional programming idiom and a particular instance of continuation-passing style: a component that controls some resource consumed by another component of the system is created via a function which takes as argument a function consuming the resource, instead of returning the resource. This pattern allows safe reclaiming of resources when the "wrapped" action terminates, whether normally or unexpectedly.

TODO "Tying the knot"

Decision

We use this pattern to provide interfaces to all active components, which exchange messages with other components of the system. A prototypical signature of such a component could be:

type Component inmsg m = inmsg -> m ()
type Callback outmsg m = outmsg -> m ()

withXXX :: Callback outmsg m -> (Component inmsg m -> m a) -> m a

Note that withXXX may also allocate resources in order to provide the Component or invoke the Callback, e.g. fork threads which invoke the Callback, but it must also make sure these are cleaned up.
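
For illustration, a minimal, hypothetical with-style component is sketched below; the NetworkMsg type and withDummyNetwork function are assumptions, not the Hydra API.

import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (withAsync)
import Control.Monad (forever)

type Component inmsg m = inmsg -> m ()
type Callback outmsg m = outmsg -> m ()

data NetworkMsg = Ping | Pong deriving (Show)

-- Provides a 'Component' to send messages and invokes the 'Callback' from a
-- background thread; 'withAsync' guarantees that thread is cleaned up when the
-- inner action terminates, normally or not.
withDummyNetwork ::
  Callback NetworkMsg IO ->
  (Component NetworkMsg IO -> IO a) ->
  IO a
withDummyNetwork callback action =
  withAsync receiveLoop $ \_ ->
    action (\msg -> putStrLn ("sending " <> show msg))
 where
  receiveLoop = forever $ do
    threadDelay 1000000
    callback Pong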

Consequences

Components can be layered on top of one another to provide additional behavior given the same interface. This is similar to "decorating" in the object-oriented world.

If the Component is agnostic about the messages it consumes/produces, it can be defined as a Contravariant functor and the Callback part as a (covariant) Functor. This makes it possible to use map and contramap operations to transform messages.


Status

Accepted

Context

In a Haskell project, we often get to use and re-use the same libraries and functions. Haskell comes with a default Prelude module shipped with the base library, which provides a good and sensible starting point. However, the base Prelude also comes with a few quirks:

  • Many commonly used functions or constructors are not exported by default (e.g. bracket, foldM, first, lift, forM, when, SomeException, Set, etc.).
  • Many functions in the base Prelude are partial, like head or read.
  • Many functions simply happen in plain IO, whereas applications usually try to push IO to the boundary as much as possible (for example, using mtl-style class constraints).
  • The interface for I/O operations in the base Prelude is String, which comes with a major performance hit and often forces conversions back and forth to Text or ByteString equivalents.

All-in-all, while it does the job, the base Prelude may not necessarily be the most convenient prelude for an active project development.

Decision

We'll use a custom prelude to help us get more productive and, more importantly, to reduce the daily friction of our interactions with the base prelude. While relude makes for a good candidate, we still chose to re-wrap it in a custom Hydra.Prelude module to grant us the ability to add or remove a few things specific to Hydra and Cardano in general. In particular, we will hide from relude all the re-exports of the stm library in favor of io-classes, which we already use pervasively and which provides (among other things) most of the same capabilities.
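
A hedged sketch of what such a re-wrapping could look like is shown below; the hiding list is deliberately abbreviated and illustrative, not the actual Hydra.Prelude.

-- Re-export relude wholesale, minus (some of) its stm re-exports; the
-- io-classes typeclasses (MonadSTM and friends) provide the replacements and
-- can be re-exported from here as well.
module Hydra.Prelude (
  module Relude,
) where

import Relude hiding (
  STM,
  TVar,
  atomically,
  newTVarIO,
  readTVarIO,
 )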

Consequences

  • Remove uses of 'cardano-prelude' in favor of a new 'hydra-prelude' module.
  • Cleaning up of imports from existing file modules.
  • Happier feeling day after day from using a developer-friendly prelude.
  • Stop losing time re-importing the same functions over and over.
  • Have an explicit place for discouraging or blessing the usage of one function or another, as well as for documenting such decisions.


Status

Proposed

Context

  • Logs are critical to provide observability to Hydra nodes' operators
  • Providing the components and tools needed for each operator to configure logging and monitoring to their liking should not be the responsibility of the Hydra node, as this requires complex machinery that would need to be maintained and evolved.
  • When a problem occurs in production, if the process is not verbose enough it can be very hard to analyse the problem
    • Enabling dynamic changes of verbosity in logs is both complex to implement and comes too late
    • Deciding in the code what the right "severity" of a log entry is leads to dropping important information about how some error occurred.

Decision

Therefore

The Hydra node provides a very simplified logging mechanism whereby:

  • All logs are emitted as JSON-encoded structures providing some metadata (timestamp, threadId) around well-defined data
  • Each log entry is written to the hydra-node process' stdout port, one line per entry
  • The definition of the logged items is considered to be part of the public API of the Hydra node.

Note: Proper redaction of sensitive information contained in log entries should still be handled in the code.
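
For illustration, a minimal sketch of such a log entry envelope using aeson follows; the type and field names are assumptions, not the actual Hydra log schema.

{-# LANGUAGE DeriveGeneric #-}

import Control.Concurrent (myThreadId)
import Data.Aeson (ToJSON, encode)
import qualified Data.ByteString.Lazy.Char8 as LBS
import Data.Time (UTCTime, getCurrentTime)
import GHC.Generics (Generic)

-- Hypothetical envelope adding metadata around a well-defined message type.
data Envelope a = Envelope
  { timestamp :: UTCTime
  , threadId :: String
  , message :: a
  }
  deriving (Generic)

instance ToJSON a => ToJSON (Envelope a)

-- Emit one JSON object per line on stdout.
traceWith :: ToJSON a => a -> IO ()
traceWith msg = do
  now <- getCurrentTime
  tid <- show <$> myThreadId
  LBS.putStrLn (encode (Envelope now tid msg))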

Consequences

  • The schema of the logged items should be properly documented in a JSON schema, just like we do for the client-side API.
  • It is the responsibility of the node operator to consume the logs and process them


Status

Accepted

Context

  • On-chain validation is a critical part of the Hydra protocol: it requires both the ability to submit transactions to the layer 1 chain, advancing the state of a Head, and the ability to observe those transactions as the Plutus contracts are validated.
  • The Plutus Application Framework is expected to provide the necessary machinery to allow "smart contract" applications to interact with the chain, but it is still under active development and not ready for on-chain deployment.
  • We want to gather feedback as early as possible and deliver a fully functional Hydra Head node for early adopters to test on a "real" chain (testnet)
  • Our experiment connecting directly to a Cardano node has been conclusive. We can:
    • Connect to a node using local protocols,
    • Build and submit Head transactions triggering smart contracts validation, and
    • Observe transactions using chain-sync protocol.

Decision

Therefore

For the time being, until the Plutus Application Framework is released, we will implement the on-chain interaction of Hydra nodes directly, connecting to a Cardano node through a local socket.

Consequences

  • Limit Hydra dependencies to the plutus repository.
  • Remove all PAB-related code as well as off-chain Contracts code and related dependencies