15 Aug 2019 c.e.
Quick Note on Malleability

tl;dr: There are a bunch of potentially valid signatures for the same transaction hash and public / private key pair. Read on to find out why.


I'm working on getting my dual-funding implementation working. I've written a bunch of code, and now I've got to make sure it all works as expected. To do this, I've been writing 'protocol' level tests using a framework one of my teammates put together a few weeks ago. It allows you to specify wire calls that the test runner will send to the program under test, and then verifies that it receives the correct result back.

It's been a really great tool for debugging all the small programming errors that I've got in the code I wrote. Recently I ran into a verification error that stumped me.

The test files that I'm using are generated using a script that calls bitcoind. It builds a 'funding transaction' and all the required signatures, and then outputs that into a test case that I can run the test runner against.

Part of what it generates is the expected signature data that the c-lightning node should produce. When c-lightning sends back its signature data, the test runner verifies that the bytes exactly match what it's expecting.

The problem I ran into is that the bytes aren't matching. The signature that my test-script maker produces is not the same as the signature data that c-lightning generates.

What's equally confusing, at least to me, is that the transactions are identical. So are the keys that they're signing with. How are they getting different signatures then?

Transaction they're both signing (they're signing the first input)


Witness data that I get from bitcoind:


Witness data that I get from c-lightning:


You'll notice that the second line of these is the same 02d6a3c2d0cf7904ab6af54d7c959435a452b24a63194e1c4e7c337d3ebbb3017b, but the first line is different.

I thought that transactions in SegWit were supposed to not be malleable? Why are my signatures different?

Short Explainer on ECDSA Signatures

Turns out that every ECDSA signature includes a value called r, which is derived from a randomly chosen nonce. The data that is sent in a 'bitcoin signature' includes this r value and the actual signature s. You use r, s, and the public key to verify that the signature is correct for the transaction.

In fact, the second line that was the same above, 02d6a3c2d0cf7904ab6af54d7c959435a452b24a63194e1c4e7c337d3ebbb3017b, is the compressed public key. We know that both c-lightning and bitcoind are using the same public / private key pair to sign it. The 'problem' is that the r values are different.

This isn't actually a problem. Both signatures are valid. We can dissect the signatures to get the r values out.

c-lightning signature, dissected:

3044 02 20
    // r   78afb02e8c8802f65fe92096eb851c623a3bfb631cc8e41878728f35fc944482
02 20
    // s   3958fb2ed3d58fcd30d1bdd7ab90e1e2dfb524d8423c457719065e7ea5bf98c0

bitcoind signature, dissected:

3045 02 21
    // r   00e4ac2f9b298df16be6c287f7b452468ca09d79816f89674ea4fbe0999a3ef6b8
02 20
    // s   179186b253e1df02714a01b2c19144efa141a7072049e51cddd299572a0cbbc8

The r value for the c-lightning signature is 78afb02e8c8802f65fe92096eb851c623a3bfb631cc8e41878728f35fc944482. The r value for the bitcoind signature is 00e4ac2f9b298df16be6c287f7b452468ca09d79816f89674ea4fbe0999a3ef6b8. Clearly not the same value.

Which is why the signature data is different. Sigh.

What Happened to Malleability?

SegWit is known for having made transactions non-malleable. This does have something to do with the signatures, but not in the way that I thought.

Prior to segwit, you could take a transaction, change the transaction enough to change the transaction id, and still use the same signature. Which is to say that one signature would be valid for different versions of the same transaction. This is bad for lightning, in particular, because we rely on transaction ids staying the same, as we build transaction chains for commitments and the like.

This kind of malleability has to do with the range of transactions that any given signature is valid for. Basically, it means that one signature is only good for that one version of the transaction. If you have a signature, you can't change the transaction and publish it and still have the signature be valid.

It says nothing about the range of valid signatures for that one transaction.

A short diagram might be kind of nice here.

One Signature -> Only valid for one transaction.

One Transaction -> Lots of valid signatures! (with different `r` values)
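
You can watch this happen with nothing but the Go standard library: sign the same hash twice with the same key and you get two different, equally valid signatures. (P-256 stands in for secp256k1 here, since the latter isn't in the standard library; the nonce behavior being demonstrated is the same.)

```go
package main

import (
	"bytes"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// signTwice signs the same hash twice with the same key and reports
// whether each signature verifies and whether the bytes match.
func signTwice() (valid1, valid2, sameBytes bool) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	hash := sha256.Sum256([]byte("same transaction, same key"))

	// Each call draws a fresh random nonce internally.
	sig1, _ := ecdsa.SignASN1(rand.Reader, key, hash[:])
	sig2, _ := ecdsa.SignASN1(rand.Reader, key, hash[:])

	return ecdsa.VerifyASN1(&key.PublicKey, hash[:], sig1),
		ecdsa.VerifyASN1(&key.PublicKey, hash[:], sig2),
		bytes.Equal(sig1, sig2)
}

func main() {
	v1, v2, same := signTwice()
	fmt.Println("sig1 valid:", v1) // true
	fmt.Println("sig2 valid:", v2) // true
	fmt.Println("bytes equal:", same)
}
```

Both verify; the bytes (almost certainly) differ, because the r values differ.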

So, are my test cases ok? Yes, they're perfectly fine. What's the solution here? Probably not to verify the exact signature data that the node under test sends back. I should really just check that the signature verifies against the public key, and make a few modifications so that the node under test's published transaction has to be accepted by bitcoind in order to continue (currently the test runner also broadcasts the transaction).

In Exitus

Malleability only applies from the "I have a signature, what transactions is it good for" sense. Not "I have a transaction, what signatures are valid for it?".

#bitcoin #sigs #malleability
3 Aug 2019 c.e.
The Importance of History

I work on c-lightning and one thing that I spend a lot of time doing, as a new c-lightning contributor, is making my git commit history as perfect as possible. What is a perfect git commit history? There are two things that our team uses to judge whether a Pull Request's commits are good for committing: how bisectable it is and how readable it is. Bisectability is a concern for the machines, or keeping your commit history interoperable with a useful tool, git bisect. Readability has to do with documentation, keeping your commit history interoperable with other humans that are maintaining the codebase.


Tooling. It's important. It's especially important that you keep it in a state where you can continue to use it effectively. git bisect is one of those tools that is incredibly powerful, but only if you keep an incredibly high bar of 'cleanliness' in your commit history. Basically what this boils down to is that every commit must, at a bare minimum, build. At its core, this has to do with how git bisect works.

For the uninitiated, git bisect is a built-in git utility that helps to identify the patch that introduced a bug. In order for it to work effectively, however, you need to be able to build (and hopefully run the tests) on every commit.

To start a git bisect session, you run git bisect start. Next, it expects you to mark a good and a bad commit. If the current HEAD is bad, you'd say

$ git bisect bad

Assuming you know at least one commit that was good, you then tell git bisect what the 'good' commit is. As an example, here's how you'd mark a commit hash good.

$ git bisect good 9a7b7a8e

From there, git bisect will binary search through the commits between the known good and known bad points until you've identified the commit where the error takes place. It automatically checks out commits, and then you run your tests. If the test fails, you mark the commit as bad (with git bisect bad); if the test passes, you mark it as good (git bisect good). When you're finished, return to HEAD with git bisect reset.

Since git bisect picks 'random' (actually halfway between the last marked good and bad) commits, if any commit in your project doesn't build and you land on it with git bisect, you're not going to be able to figure out whether it's actually a good or a bad commit. Same for breaking tests as well.
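
Here's a toy session you can paste into a terminal to see the whole loop, using git bisect run to automate the good/bad marking. The repo path, commit layout, and grep check are all made up for the demo.

```shell
# Toy repo: 10 commits, a "bug" lands at commit 7.
rm -rf /tmp/bisect-demo
git init -q /tmp/bisect-demo && cd /tmp/bisect-demo
git config user.email you@example.com
git config user.name you
for i in 1 2 3 4 5 6 7 8 9 10; do
    if [ "$i" -ge 7 ]; then echo bad > flag; else echo good > flag; fi
    git add flag
    git commit -qm "commit $i"
done

# Mark HEAD bad and the first commit good, then let `git bisect run`
# drive the search: exit code 0 marks a commit good, non-zero bad.
git bisect start HEAD HEAD~9
git bisect run grep -q good flag > /dev/null
first_bad=$(git show -s --format=%s refs/bisect/bad)
echo "first bad: $first_bad"
git bisect reset
```

In a real project the `grep` would be your build and test commands.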

To summarize: non-buildable commits break your ability to 'easily' git bisect through a codebase, so make sure the project builds at any given commit.


The other thing that c-lightning reviewers ask for is readability in commit messages. This means that each commit should only contain a single 'logical' change, and that the commit message should accurately describe it. Further, if you submit a set of commits in a pull request, the ordering of the commits should be such that they organize a 'narrative' around what you're changing. If you need to make changes to helper functions, or pull something out so that you can use it elsewhere, that would go before the commit where you use it in the new thing. Same for renaming of things!

Ideally a reviewer will be able to step through the commit set on a PR and be able to understand how, step by step, you got to the final goal that the PR represents.

In Practice

I find all of these things hard to do in practice, but I've been working on it. One reason for this is that I tend to change a bunch of things at once. Another is that oftentimes when you start in on a project you don't have a good idea of the scope of the change that's necessary to accomplish what you're looking to do. So you start in one place, only to discover that there are a number of other things that need to change in order to move forward. I tend to end up with a huge heap of unrelated changes staged in git.

git add -p

There's a couple of ways to get around this. One is to commit more frequently: you make a small change, or rename something, or pull up a method, then you stop and check it in. Another good tool is git add's -p flag. This lets you 'pick' which changes you'd like to add.

$ git add -p

This starts an interactive program that presents you with every edited 'hunk'. You can add it to the staging area (y), leave it out (n), or, if it's too big (you want to add a few lines but not the whole hunk), you can edit it with 'e'. There are a lot more options; you'll see them if you try this. Try them out!

I find myself using 'edit' more than I probably should. I like how fine-grained a control it gives you for adding things to a commit.

If you accidentally stage a 'hunk', you can similarly unstage hunks with the same tool using git reset HEAD -p.

git rebase -i

Once you've made your commits, you can reorder them, merge commits together, or execute commands against them using git rebase. I use this all the time to add more things to other commits with fixup, to re-arrange the order that commits come in, or to re-write a commit message. A word of caution about re-ordering commits: it's a really great way to end up with conflicts if you're not careful. If you end up with a mess in the middle of a rebase, you can always abort with git rebase --abort.

Typically when I start on a thing, I'll have 20+ commits that will get whittled down to 2/3rds or half of that number for the PR.

I've never done this, but you can confirm that git bisect works by running git rebase -i with the --exec flag. Kamal[1] has a great blog post on how this works here. Typically what you'd exec is the build and check make commands, and also your linter. For c-lightning this would probably be something along the lines of make check check-source.
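
To see --exec in action without touching a real project, here's a throwaway repo where `true` stands in for the build-and-check commands (everything here, including the repo path, is illustrative):

```shell
# Toy repo: one commit on main, two on a feature branch.
rm -rf /tmp/rebase-demo
git init -q -b main /tmp/rebase-demo && cd /tmp/rebase-demo
git config user.email you@example.com
git config user.name you
echo base > f && git add f && git commit -qm "base"
git checkout -qb feature
echo one >> f && git commit -qam "add one"
echo two >> f && git commit -qam "add two"

# Replay each commit and run a check after it; the rebase stops at
# the first commit where the command exits non-zero. In real life
# `true` would be something like `make check check-source`.
out=$(git rebase -x true main 2>&1)
echo "$out"
```

If every commit passes, the rebase finishes cleanly; if one fails, you're dropped at that commit to fix it.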

git rebase is great. Score one for neater commit history and preserving bisectability!

In Exitus

Yes, I spend a lot of time these days massaging commits.

[1] Hi Kamal!

#git #history #bisect
23 Mar 2019 c.e.
10 Years of Twitter

I signed up for Twitter on March 5, 2009. I can't remember why or how I heard about Twitter, or who thought it'd be a good idea for me to get on it. I was in college at the time, and worked as a TA for a professor in the school of Management Information Systems. I strongly suspect she asked me to get on to help promote events that she was running and the like, but it's hard to say. One of the most defining memories I have of it was tweeting from the bus via flip-phone SMS messages to a short code.

Anyway. I've been on the platform, off and on, for the last decade. In honor of a decade of tweeting, I downloaded my archive and compiled some stats about how I've used the platform since then.


I downloaded the entirety of my Twitter archive from the settings page on March 19th, and then used this python script to sort through things. It's not really well stitched together, but if you want to try it out, here's a general set of commands that I run to get it going:

$ python3
>>> exec(open('script.py').read())
>>> tweets = load_tweets('tweets.csv')
>>> tweets[-1]['text'] # this is your first ever tweet!


Ok, so here's some basic @niftynei tweet stats, from Mar 5, 2009 to Mar 19, 2019:


    Total tweets: 18,780
    Retweets: 2,334
    'Self authored' tweets: 16,446

    Total characters: 1,187,164
    Average characters per tweet: 72
    Total 'words' tweeted, excluding retweets: 186,316


    Replied to: 1,448 accounts
    Most replied to: myself (3,532)
    Top five replied to, links are to first reply ever:
      @zmagg (245)
      @vgr (189)
      @jc4p (169)
      @turtlekiosk (130)
      @lchamberlin (108)

    Longest tweet (by characters): 302 chars, 278 w/o link
    Longest tweet (by words): 56 words, 271 characters

    Longest gap between tweets: 319 days, 1:17:08
    Second longest gap: 138 days, 4:36:53
    Shortest gap between tweets: 0 days, 00:00:00
    Median gap length: 0 days, 0:06:21
    Average gap length: 4:01:30

    Most tweets in a day, with retweets: 120


I also found some 'collections' of tweets that I did, based on hashtag. Here's a set of 'quotes' from 'Bob Moses, Software Project Manager' I wrote in 2015, right after reading Caro's Robert Moses book.

  • Jill who'd you tell about our plans to shut down that API? Well, Tim Cook just Slacked me about it #BobMosesSoftware
  • We've already invested two weeks. If we cut it now, it'd be a waste of developer, server, and your time #BobMosesSoftware
  • Make the button blue? Impossible. #BobMosesSoftware
  • These AB test numbers have links to private interests. #BobMosesSoftware
  • Look, the PM who sponsored this feature was a pitiful excuse for a person, and a crank. #BobMosesSoftware

Here's a collection of #words, some with more meaning than others:

  • ex pose say #words
  • bingo buzzchain bandit #words
  • concurrently battling an ur-reductive mental trip #words
  • physical manifestation at the hilt of representation #words
  • strong bold memories of Europe sunshine in the spring #words
  • It's a trinidadian dance funk kind of afternoon of the likes only Pitbull can satisfice #words
  • axiotic dimensional #words
  • "There were 6 right answers but I only knew one" #words
  • some trips you don't come back from 🍃 #words
  • dispatches from the hallowed halls of productivity theater #words
  • Vinculated to the predicated #words
  • What would it mean? To never know the joy of driving a nail, firmly flush with the wood top of your coffin. #words
  • deliciously voyeuriste #words
  • Skewed perceptions: a relativistic model #words
  • Cognoscenti is probably the best word. #words
  • The opiate for the masses. The opiate for the masters. The opiate for the missus. #words
  • tragi-comic #words
  • sliding swiftly into the obdurate past #words
  • spinning dystrophies of inalienability #words

Expanding on the literary theme, here's a series of sentences that might make good starts to novels:

  • I can't stop thinking about the silver Mary I saw at the Sacre Couer #novelstarts
  • And thus began my long love affair with the Q train. #novelstarts
  • She started pointedly: my guilty pleasure is stalking you. #novelstarts
  • It was all the things I had not done yet that kept me awake, instead of all the things that I had that put me to sleep. #novelstarts
  • all of our conversations were just lines of this screen play i was unwittingly writing called You & Me #novelstarts
  • The saddest sadist you'll ever meet lives ... #novelstarts
  • Character assassination was the strategy. Twitter bots, the chosen methodology. #novelstarts
  • "I just want to spend the rest of tonight at disco karoke with Hotline Bling on repeat," she said, breaking into a slow robot. #novelstarts
  • El Doctor te veras. #novelstarts

Heat of the Tweet

Finally, here's a 'heat map' of tweeting from the last 10 years. And yes, I did swipe the formatting from the Github repo's heat map.

[heat map of tweets per day, 2009-2019]
#twitter #blogging #stats
9 Feb 2019 c.e.
A Taxonomy of Lightning Nodes

It never ceases to amaze me how little the general crypto population knows about how the lightning network works, so I thought I'd write down something that's been quite obvious to me for a while, with the hopes of influencing others to see it my way.

Lightning is a network of node operators. Each node has a wallet with funds, that are then apportioned amongst a set of channels to other nodes. Each channel that is opened has a balance, and each node in the channel has the right to spend a certain amount in that channel. This "right to spend" gives every channel a directionality to it. In other words, which direction the funds can move at any given moment depends on which side has the right to spend them. For this reason, the Lightning network is a directed graph.

Every payment that moves through the system changes the balance of payments in every channel that it flows through. As payment volume grows, managing the 'flows' and ability to send payments from one node to another will become an important and non-trivial management task.

Drawing Lines Between Nodes

A key to understanding how these flows will affect the ability to make payments is to understand that not every Lightning node has the same 'goal'.

In fact, you can classify these nodes into three distinct groups. Each of these groups represents a different policy on liquidity in their channel balances. As such, the actions they will each regularly perform on their channel balances will be distinct. A channel balance is only useful if it allows you to do what you need to on the network, and each of these three actors will have different goals.

These three node groups are:
- consumers
- vendors
- liquidity providers


Consumers

This is probably the most intuitive group to understand, since it's every one of us. A consumer is a net supplier of funds to the Lightning network. On the whole, they spend more money over Lightning than they receive. There is a certain amount of exchange that happens among nodes of this type, but this amount is dominated by their outflow to Vendor nodes. Typically, their payments will be to a relatively closed set of repeated contacts.

Generally, the actions a consumer takes will be one of:
- Adding more money to their wallet/outgoing channel balances
- Sending payments to vendors
- Creating new channels to pay new vendors

The apps that these users use are typically mobile wallets and web browser extensions. They're generally interested in centralized/custodial services, and probably aren't running their own node unless it's their mobile client or they've invested in a small home node.


Vendors

These are the Amazons and Blockstream stores of the network. A vendor is a net drain of funds on the Lightning Network -- they receive more payments in than they send out. They are typically receiving inflows in exchange for a good or service, which means that they'll be withdrawing funds from their channels to cover their costs.

Generally, the actions a vendor takes will be one of:
- Withdrawing money from their channels
- Opening channels with liquidity providers, to get inbound capacity
- Originating invoices

The apps and infrastructure that these vendors use will generally be a bit more intensive and always-on than consumers'. Their ability to transact will be closely tied to their ability to reliably source inbound capacity. Backups and watchtowers are a bigger concern for these users than for consumers.

Liquidity Providers

These are the HODLers: people who have a chunk of crypto that they want to put to work but aren't interested in spending it and don't really have much of anything to sell. They've got the time, know-how, and resources to set up a more 'industrial strength' node than the general 'consumer' population. They're interested in writing custom algorithms that can help them figure out how to price their liquidity and are willing to spend the time and energy (generally speaking) to figure out what configuration of channel balances and flows will bring them the best return on their node setup, in terms of routing fees. They earn money by providing liquidity between consumers and vendors.

Generally, actions a liquidity provider will take are:
- Opening new channels to vendors, to provide inbound capacity
- Advertising liquidity
- Rebalancing their channels between vendor + consumer accounts
- Network analysis to discover lucrative avenues to open/create new channels

In Exitus

It's my understanding that the Lightning Network needs all of these types of nodes to function. Providing a visible market for liquidity will make these roles even more apparent. I'm incredibly excited about the inclusion of liquidity advertising in the 1.1 spec, as it will give another lever for liquidity providers and vendors to make decisions about how to most effectively allocate channel balances across the network, in a decentralized and transparent manner.

#lightning #markets #liquidity #taxonomy
28 Jan 2019 c.e.
Reflections on the Art of JSON in Golang

Last month, I put a good bit of time into writing a little library to help bridge the gap between the requirements of JSON-RPC's spec and Go.

The Go standard library provides functionality for version 1.0 of JSON-RPC. There is no standard library implementation of the 2.0 spec, but there are plenty of other implementations, some of which seem to get pretty close to the idioms that I landed upon for my version of it. I ended up writing my own library for a few reasons. First off, I wanted some practice implementing a spec. The work I'm looking to do for lightning over the next few weeks is basically spec writing and implementation; it seemed like a good idea to get some practice following a very simple and well documented spec like the JSON-RPC 2.0 spec.

Secondly, my motivation for needing a JSON-RPC implementation is that I was looking to write a 'driver' for the new plugin functionality that Christian Decker has been adding to c-lightning. c-lightning's plugins have a few very specific needs[1] that would likely require modifying another JSON-RPC implementation. Plus there's the overhead of figuring out how another person's library works.

I leveraged the json encoder/decoder from the Go standard library as much as possible. The trickiest bit was getting a good idiom put together for how parameters are declared and marshalled into command calls. There's a lot more that went into putting the whole plugin/rpc thing together, but I think for this post it'd be the most delightful to just walk through the design decisions that I made for the way the params parsing works.

Problem Statement

Let's talk a bit about what's going on during a JSON-RPC command message exchange. The general gist is that there's a client who wants to execute a method on the server. In order to do this, we need to tell the server what method we'd like to call (by providing a name) and then also pass in any and all of the information that the method needs (these are typically called 'parameters' or 'arguments'; the JSON-RPC spec terms them params).

Our job then is to provide an interface such that the client can smoothly call a method and then receive a response from the server. The ideal interface for such an interaction would look identical to any normal method call. For example:

func hello(greeting string, to User) (string, error) {
    // magically invoked on the server
    return "result", nil
}

Go provides a json marshaler/unmarshaler, a package called encoding/json. The problem is that the marshaler works on structs, not method signatures.

Instead, jrpc2 takes the tack of asking users to write their method calls as structs. Here's how the hello method that we saw above would be rewritten as a struct.

type HelloMethod struct {
    Greeting string `json:"greeting"`
    To *User        `json:"user,omitempty"`
}

Each of the method parameters is now represented as a public struct field. When we send this across the wire, we'd expect our library to generate the following json:

{
    "greeting": ...,
    "user": {
        "last_name": "Neal"
    }
}

We need a way to signal to our library that this is in fact a 'method' that our jrpc2 library knows how to marshal into a valid command request object. We do that with an interface, that defines a single method, Name(). Any struct that implements this method will be considered ok for sending over the wire to the server.

func (r *HelloMethod) Name() string {
    return "hello"
}

We still need a way to pass this method call to the server, but from a client perspective that's all we need in terms of defining a new method.
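
Putting the pieces so far together, the marker interface presumably looks something like this runnable sketch. The interface name `Method` and the `User` fields here are my assumptions, not taken from the jrpc2 source.

```go
package main

import "fmt"

// Method is a sketch of the marker interface described above: anything
// with a Name() can be marshalled into a command request object.
// (Interface name assumed.)
type Method interface {
	Name() string
}

// User is a hypothetical parameter type for the example.
type User struct {
	LastName string `json:"last_name"`
}

// HelloMethod is the method struct from earlier in the post.
type HelloMethod struct {
	Greeting string `json:"greeting"`
	To       *User  `json:"user,omitempty"`
}

func (r *HelloMethod) Name() string {
	return "hello"
}

func main() {
	// The struct satisfies the interface, so the library would accept it.
	var m Method = &HelloMethod{Greeting: "howdy"}
	fmt.Println(m.Name()) // hello
}
```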

On the Server End

c-lightning's plugin framework requires your app to serve as both a JSON-RPC client and server, since users can invoke method calls from c-lightning that are then passed to your plugin to execute. Server RPC method objects are mostly the same as above, with two additional methods added to the interface, New and Call.

When the server receives a request from a client, it 'inflates' the json request into a ServerMethod object. The New method gives you the ability to do any initialization or setup needed for the new instance. If there's state that needs to be shared between instances of the ServerMethod, you can pass it along here. Here's an example where you want a new version of the GetManifestMethod to have access to the plugin object.

// definition
type GetManifestMethod struct {
    plugin *Plugin  // private so it's not mistaken for a method parameter
}

func (gm *GetManifestMethod) New() interface{} {
    method := &GetManifestMethod{}
    method.plugin = gm.plugin
    return method
}

This is nice because it lets you share state between method calls. Then there's the actual Call part of the ServerMethod, which obviously is where you do work. Since the 'inflated' struct is 'passed in' as the object of the call (i.e. the whole (gm *GetManifestMethod) declaration), you have access to all of the parameters that were sent by the client.

func (gm *GetManifestMethod) Call() (jrpc2.Result, error) {
    // example of using the plugin obj
    for i, sub := range gm.plugin.subscriptions {
        // ...
    }
    return result, err
}

If you return a non-nil error from the Call, the server will ignore the result and send the client back an error object. As a final note, if you want your Result to be formatted for json correctly, you'll need to add good json annotations for its fields. We use the default encoding/json package to marshal and unmarshal everything over the wire.

A Few Things on The Way to the Forum

The trickiest part of the whole jrpc2 mechanism is the custom marshalling for the param struct. The JSON-RPC spec defines two different ways that params can be passed from the client: either as an ordered array or as a named dict. i.e.

// As an ordered array 
"params": [1, 2, "hello", {"inner":"object"}]

// As a named dict
"params": {"first": 1, "second": 2, "greeting":"hello", "extra":{"inner":"object"}}

Basically, we're wrapping client calls in an outer object, with the 'method struct' being serialized into the params. jrpc2 includes methods to serialize calls as either an ordered array or a named dict, but defaults to the named dict when used as a client. It's worth noting that the order in which fields appear in a method struct is the order they'll appear in the array: if you've switched a method to use 'vectorized params' (aka an ordered array) and you re-arrange the struct fields, the params in the call will be re-arranged too.

Reflection Dragons

In order to do this correctly, I ended up digging pretty hard into the reflect library. There's a bunch of nuance around deflating and re-inflating objects from json that I really struggled to find good resources on. Most golang articles on reflection stop and start with Rob Pike's article on the Go Blog, The Laws of Reflection, but it doesn't dig in much beyond the basics.

Re-creating a new version of the method struct is fairly straightforward: you can just call the New method. However, for any param that is a pointer on the method struct, we have to allocate a new 'extra' object and then run the json Unmarshaler on it. There's a few steps to this.

First, we need to determine what type of object we should be inflating. We can use the method struct's field declaration to determine what type of new struct to inflate.

When you 'inflate' a new object from a field type, it initially comes to you without a pointer address, because no memory has been assigned to it yet.

Short aside: originally, method structs on the server didn't have a New method; I inflated them directly. Figuring out how to do that took me some amount of time. I later replaced it with the New method, because I wanted a way to share objects across every method call. Then I completely (and accidentally) destroyed my git repo and lost my commit history, so I can't show it to you exactly, but it involved inflating a new copy from an existing one and then figuring out a way to get it assigned to an address space so that we'd have a pointer to pass around. This isn't such a problem for sub-fields on the struct, since creating a new struct allocates space for all of its member fields.

The only place that you need to allocate a new object is for a field on a struct that's a pointer. Here's a short example.

// Method struct to inflate
type IdkMethod struct {
    Clue *Clue
}

When we're serializing this to json, we'll pass the Clue object as serialized json (if the pointer exists) or pass null if there is nothing assigned. On the server side, we need to 'inflate' this back into a Clue object, with a pointer that we can assign to the new IdkMethod object. Here's how we do it.

if fVal.IsNil() {
    fVal.Set(reflect.New(fVal.Type().Elem()))
}

We use reflect.New to create a new version of the type of field. We have to use Type().Elem() because the type is a pointer -- we want to create a new struct of the type of the element that the pointer is pointing at, not a new 'pointer to element'. reflect.New returns a pointer to the new object that it has just allocated, which we can directly set the value of that field (e.g. fVal) to.

Another short aside, I don't know how you're supposed to figure out how any of this more complex pointer magic works if you've never dealt with pointers on a fairly intimate level. Language level abstractions are great ...until you fall into the pit of object marshalling.
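
Here's the pointer-allocation trick as a self-contained, runnable sketch. The helper name `allocClue` is mine, not from jrpc2; it just packages the `fVal.IsNil()` dance above.

```go
package main

import (
	"fmt"
	"reflect"
)

type Clue struct{ Hint string }

// IdkMethod mirrors the method struct above: one pointer field that
// arrives nil and needs allocating before json can unmarshal into it.
type IdkMethod struct{ Clue *Clue }

// allocClue fills in the Clue field when it's nil, the reflect way.
func allocClue(m *IdkMethod) {
	fVal := reflect.ValueOf(m).Elem().FieldByName("Clue")
	if fVal.IsNil() {
		// reflect.New allocates a fresh Clue and returns a pointer to
		// it; Type().Elem() gets us Clue out of *Clue first.
		fVal.Set(reflect.New(fVal.Type().Elem()))
	}
}

func main() {
	m := &IdkMethod{}
	allocClue(m)
	fmt.Println(m.Clue != nil) // true
}
```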

There's a lot of other little neat things that I ended up needing to figure out how to do, like filling in a slice or map. Briefly, here's the code for inflating a set of map objects:

    // the only types of maps that we can get thru the json
    // parser are map[string]interface{} ones
    mapVal := value.(map[string]interface{})
    keyType := fVal.Type().Key()
    for key, entry := range mapVal {
            eV := reflect.New(fVal.Type().Elem()).Elem()
            kV := reflect.ValueOf(key).Convert(keyType)
            err := innerParse(targetValue, eV, entry)
            if err != nil {
                    return err
            }
            fVal.SetMapIndex(kV, eV)
    }

You can find all of these great things and more at work in the innerParse function of the jrpc2 library. Currently it lives here.

In Exitus

I'm half-convinced there's a construction of param parsing where you only need to declare the method, and you can somehow 'shadow compose' the request objects that I settled on above. But! After using the library for making a few plugins plus the RPC object for c-lightning calls, I think there's a nice balance between declarativeness and flexibility. Particularly, while at first it seemed a bit redundant, having an explicit Name() function hook for the Method objects nicely decouples the declared method name from whatever is the nicest way to express it in Go.

By way of example, there's an RPC method on c-lightning called dev-rhash. With the Name() idiom, it's easy to handle this:

func (r *DevRhashRequest) Name() string {
    return "dev-rhash"
}
Under the 'more syntactically sugarful' and also imaginary (because I'm not entirely certain you can do it) way that I've been imagining, you'd have to write the Go method like this:

func dev-rhash() string {

And then every place you wanted to use it, you'd have all kinds of ugly dev-rhash() calls. To say nothing of the fact that Go uses upper and lower case letters on functions and objects to denote the 'visibility' of a method -- as written you wouldn't be able to call this method outside of the containing package, which for a library function renders it quite useless. Anyway, I think the API that I landed on is a decent one, for this reason alone, almost.
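The payoff of the Name() hook is easiest to see in the dispatch table. Here's a simplified sketch of the idea -- the Method interface and registry below are illustrative, not jrpc2's actual API. Methods register under whatever string Name() returns, so the Go identifier never has to match the wire name:

```go
package main

import "fmt"

// Method is a sketch of the interface idea: the wire name is decoupled
// from the Go type's name via an explicit hook.
type Method interface {
	Name() string
}

type DevRhashRequest struct{}

func (r *DevRhashRequest) Name() string { return "dev-rhash" }

// registry maps wire names to method objects
var registry = make(map[string]Method)

func Register(m Method) { registry[m.Name()] = m }

func main() {
	Register(&DevRhashRequest{})
	// dispatch by the wire name, hyphens and all
	if _, ok := registry["dev-rhash"]; ok {
		fmt.Println("found dev-rhash")
	}
}
```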

[1] The c-lightning plugin to c-lightning relationship is a bit complicated. A plugin is both a 'server', in JSON-RPC parlance, and a client. For most of the commands and the like, a plugin plays the role of a server, providing methods that c-lightning can call out to. Notifications from c-lightning to your plugin take advantage of the client -> server notification framework that's included in the JSON-RPC spec. The one exception, so far at least, is that you can pass back logs from the plugin to c-lightning, such that plugin logs will appear in the getlogs command on c-lightning. In order to do this, your plugin sends a log notification to the c-lightning parent, which inverts the server -> client relationship.


I cobbled together info on how the more magique aspects of reflection work from a variety of places. Here are some of the ones I found most helpful.

How to create an object with Reflection via reddit
Writing into a slice via blog
The exhaustive list of reflection tests in the golang source golang.org
And of course the seminal "The Laws of Reflection" Go Blog

#json #golang #encoding #static #reflection
28 Dec 2018 c.e.
The Demo at 50: Looking Forward

December 9th, 2018 marked the 50th anniversary of Doug Engelbart's Mother of All Demos. (You can watch the actual demo on YouTube or read about it on Wikipedia.) To commemorate the occasion, Doug Engelbart's daughter and some of his longtime collaborators pulled together an all-day symposium for the still-surviving demo crew members and other early Internet luminaries. I, like all the other lumpenproletariat of the modern Silicon Valley, bought a ticket to attend.

The day's festivities were held at the Computer History Museum down in Mountain View, about a forty-minute drive from San Francisco early on a Sunday morning. My friend and I arrived early, which gave us time to grab coffee, claim almost-front-row seats at one of the twenty or so ten-person tables that filled the hall where the day's lectures would be held, ogle the paper signs on tall cocktail tables marking where in-person demos of similar tech projects would be held, and traipse down to the first-floor museum exhibit, which included one of Google's prototypes for a self-driving car.

It was mostly a day of reminiscing, with a few more modern speakers talking about projects they're currently working on to make the Web a more annotated and sourceable place. The main drive of most of the projects seemed to be HyperLinking. Ted Nelson, the closing speaker and an early hypertext researcher, is still going on about how HyperLinks should have been bi-directional.

On the System Itself

There was a panel discussion from a few original ARC researchers. We had a hardware guy, a couple of software guys, and Doug Engelbart's daughter, Christina Engelbart. The hardware guy, Martin Hardy, had created a hyperlinked diagram to show us all how the original demo computer system had been constructed. The demo itself was held at a hall in San Francisco -- the actual computer mainframe lived in a research center in Menlo Park, south of SF by a few tens of miles. In the demo, the computer screen printout and video feeds from several different cameras are broadcast onto the screen so that we can see researchers in Menlo Park, as well as a camera feed pointed at Doug's face, on stage. In order to get these video streams to show, they had to pipe all the data back to the mainframe in Menlo Park, where the computer composed the stream to feed to the projector. They used a microwave tower to beam the feeds, as the Internet hadn't been invented yet. It'd be a few decades until high-speed Internet was installed between here, there, and everywhere.

Once the reminiscing and story recounting was done, they had a little bit of time to ask the audience for questions. There may have been a few, but the only one I remember was from a man who wanted to know, definitively, what room the Demo had occurred in. Given the spirited debate that followed, it seems that the biggest controversy surrounding the event was the actual location that it happened at. Good thing we have a video recording of it, otherwise we might not be sure that it happened at all.

Another gizmo that came up during the day was the projector machine that the group developed that could stop a film strip on a single frame. You used to not be able to pause film projectors because the heat from the bulb would burn the frame that you stopped on. Anyway, somehow the ARC research group was able to build a projector that would let you stop the film at any arbitrary point. One day, someone was showing the presentation to a group that wanted to know more about the project and happened to stop the film exactly on a frame that showed the computer had crashed. In the middle of the Demo. If you've watched the film, you may be surprised to hear this, as the whole system appears to work pretty flawlessly throughout the Demo. Well, it turns out that it did, in fact, crash. The reason you can't see it when watching the film is, one, that the digitization process probably lost that exact frame and, two, that the computer system they built was so incredibly quick to come back online that it restarted without anyone noticing. It turns out the system crashed so frequently that they tuned it to come back so fast that no one would notice it had even failed. It's hard to square that with how long my laptop takes to start some days.

Web Researchers, Then and Now

There were a number of great panel discussions about web technologies from a host of different web pioneers. Even Alan Kay made an appearance -- they put him on one of those teleconferencing robots and he beamed in from his home. He got up a few times to get a thing; I wasn't sitting quite close enough to get a good look at the books on the bookshelf behind him.

I think the rowdiest panel was probably the one with Wendy Hall, a UK researcher who's been working on web hyperlinking technology projects since the Demo, and Peter Norvig, the chief researcher for search at Google. There was a strange amount of hostility in the room towards Silicon Valley Money, chiefly coming from the people, a majority in the room to be clear, who had spent their lives in academia and decidedly not made it rich on the Internet and Software boom that came to be after their demos. Unfortunately, I don't remember the exact issues that showcased Hall and Norvig's ideological differences, but I believe it turned on the responsibility to filter out fake news and propaganda. Wendy had done a lot of work on being able to easily show provenance for information, so it was interesting to see her in conversation with Norvig, big wig of Google Search. As an aside, I'm not sure where the line on authoritarianism comes down between censorship and the promotion of truth, but we definitely seemed to be flirting with it. Even Vint Cerf had some strong things to say about the quality of information on the Internet.

Yet another presenter put up on the screen a Mosaic listserv email from Marc Andreessen, one that talked about how he had hacked into the browser the ability to add annotations to any webpage, asking for beta testers.[1] On-page annotations seemed to be one of the biggest wishes from the bevy of Internet luminaries we heard from. Well, that and a way to get rid of fake news. Dan Whaley from Hypothes.is was on a panel as well. It was interesting, to me, to see modern efforts to bring annotation to the web. I'm not sure what every website would be like with a comments section, but it seems that the effort to find out hasn't died out yet.


One thing that Doug's daughter really brought home for me was the question of what the impact and legacy was of the Demo. The company that bought the technology wasn't able to turn it into a successful product. That wouldn't happen until later, much later, after Microsoft and Apple got their introduction to the mouse and such at Xerox's Palo Alto Research Center. In fact, Doug's ARC project was largely dismantled after the team was bought by Tymshare. It seems that he had worked hard to open up the lab to researchers from other projects and universities -- almost everyone who was alive and working in the field at the time had, at one point or another, been to the ARC lab to see the software system at work in person. I can't help but wonder if it was the collaboration and openness of the lab that led to some of the technological marvels the group demoed that day in '68 actually getting out into the world, in some form or another. Sure, there were plenty of other insights and research that the team had done, but the reality is that annotations and bi-directional hyperlinks don't have mass adoption in the same way that the mouse and graphical user interfaces achieved.

How much of this idea leakage was due to the work that Doug did to make their projects available to others outside of their group? How much of it was a result of the same researchers ending up at Xerox's PARC which then let Steve Jobs and Bill Gates inside to see what they had built? It's hard to say, exactly.

[1] I wasn't able to find the original email, but Marc himself uses the feature to explain his investment in Rap Genius

#mother-of-all-demos #impressionism #conference-swag
27 Dec 2018 c.e.
Explaining Replace By Fee

I apologize in advance to those readers of mine that have zero interest in Bitcoin. I'm personally quite absorbed with the project, and am hoping that by writing about it incessantly, I might be able to convince you to at least appreciate the project for its vast complexity, if not for the riches it might make you, if only you invest at the right time.

I'd like to spend some time today writing out everything I know about a small corner piece of the Bitcoin puzzle, a transaction replacement protocol colloquially termed "Replace By Fee", or RBF for short.

A short description of the problem space

In order for a Bitcoin transaction to be considered valid, you must first have it included in a block by a miner. Normally, the way that would happen is as follows:

  1. You compose and sign a valid Bitcoin transaction. I'm leaving the details out here, but think of it like an HTTP packet that is ready to be sent out across the network, if that's helpful.
  2. You broadcast your transaction out from your wallet, onto the Bitcoin network.
  3. Other Bitcoin nodes on the network see your transaction and add it to their 'mempool'. This is the set of all Bitcoin transactions that have not yet been included in a block. They are candidates for inclusion.
  4. A miner receives your transaction and includes it in a candidate block. The miner finds a winning hash that makes its candidate a valid block. Your transaction is now mined.
  5. The newly mined block is transmitted from the miner's computer to all the other computers on the Bitcoin network.
  6. Upon receiving this block, each Bitcoin node evicts all of the now-mined transactions from its mempool.
  7. Rejoice. Your Bitcoin is Spent!

You may remember that the topic we're discussing today is known as 'Replace By Fee'. When, you might ask, in this sequence of events might you want to replace your Bitcoin transaction?

The answer is sometime between steps 3 and 4 above. After you've broadcast your transaction, there is a chance that it will be seen and mined by a miner. Once your transaction has been mined, you can no longer broadcast a new version of that transaction, as the inputs to it have now been marked as spent.

There are a few cases, however, where your transaction might get trapped or evicted from the mempool without being included in a block. One common case for when this might happen is when the number of transactions that are looking to be included in a block (ie the mempool size) is larger than the available blocksize. In this case, transactions tend to be processed or mined based on the feerate per kilobyte that they offer to pay the miner for their inclusion.

If you've broadcast a transaction with a low feerate, and suddenly the mempool fills up with a lot of transactions that are looking to be included in a block, you may want to update your transaction to provide a higher feerate, so that your transaction will be confirmed in the next available block.

There's currently two mechanisms that people use to try to get their transaction included. The first is what we'll be talking about more in depth here, Replace By Fee. The basic gist of Replace by Fee is that you're rebroadcasting a previously broadcast transaction, but with a greater fee paid than the prior transaction.

The other strategy that wallets use to get transactions included in full blocks is called Child Pays For Parent, or CPFP for short. It involves issuing a new transaction, one that spends the earlier, still unconfirmed transaction. This second, child transaction will pay a larger feerate than it might on its own, with the hope that the now pair of transactions' total feerate will be high enough to merit inclusion in the next block. CPFP only works if the transaction you broadcast has an output that you can spend.
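The CPFP arithmetic is just a weighted average: the miner evaluates the pair on its combined fee over combined size. A quick sketch, with made-up numbers:

```go
package main

import "fmt"

// packageFeerate returns the effective feerate (sats per byte) that a
// parent+child pair offers a miner when considered together.
func packageFeerate(parentFee, parentSize, childFee, childSize int64) float64 {
	return float64(parentFee+childFee) / float64(parentSize+childSize)
}

func main() {
	// a stuck 250-byte parent paying 250 sats (1 sat/byte), rescued by
	// a 150-byte child paying 1350 sats
	fmt.Println(packageFeerate(250, 250, 1350, 150)) // 1600 sats / 400 bytes = 4 sats/byte
}
```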

RBF: The Existing Algorithm

Replace By Fee replaces the earlier transaction that you broadcast in other nodes' mempools. That's where the replacing happens. There is a set of rules governing whether or not a transaction is eligible for being evicted from the mempool and replaced by a new one. Here's a few things that the 'accept into mempool' code checks...

  • The transaction that you're attempting to replace has flagged itself as eligible for replacement. This is flagged at a transaction level, but is inherited through any as-yet-unmined inputs that you're spending. If any of a transaction's inputs, or its inputs' inputs, are flagged as replaceable, then this current transaction is also considered eligible for replacement. If neither a transaction nor any of its parent inputs is marked as replaceable, any transaction with an input conflict (that is, one that would be spending the same inputs) is rejected with the error "txn-mempool-conflict".

  • All of the replacement's inputs must already exist in the UTXO set -- no currently unmined inputs are allowed in a replacement transaction. This is a tighter rule than the desired one, which is to check that the replacement doesn't require 'low fee junk' to be mined first; rejecting any replacement that isn't spending already-mined inputs is a simple approximation of that check.

  • A replacement candidate must pay more in fees than all the transactions it replaces. The rationale for this is that sending transactions across the network consumes bandwidth. The higher feerate of the new transaction, in theory, pays for its increased usage of bandwidth: once for the original broadcast and then again for every subsequent replacement. Note that the nodes keeping and broadcasting this transaction don't get paid -- only miners do. In that sense the fee is more of a social toll than a net payment to every node that sees the transaction.

Note that this is in total fees, not fee rate. Any replacement transaction must pay more in total fees than the entirety of any and all transactions that the replacement would displace from the mempool. There's the potential that you'll be replacing an entire "package" of unmined transactions, a parent-child chain of transactions that are looking to be mined. If you're a small transaction trying to replace one that has an extremely large child also in the mempool, your effective fee rate (roughly calculated as the fee paid per byte of transaction included in the block) will need to be much higher than the original's, as you need to cover a larger amount of fees with a smaller number of bytes.

  • Finally, if the 'package' of transactions that you're looking to replace numbers greater than 100, your transaction replacement won't be added to the mempool. In other words, if someone has attached 99 transactions onto the transaction you'd like to RBF, you're shit out of luck. You'll have to wait until there's enough room in a block for your original to be mined.
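Pulled together, the checks above look something like the following sketch. This is a deliberately simplified illustration of the rules, not the actual Bitcoin Core code; the Tx struct and names are made up:

```go
package main

import (
	"errors"
	"fmt"
)

// Tx is a toy stand-in for a mempool transaction.
type Tx struct {
	Replaceable    bool  // flagged replaceable, directly or inherited
	Fee            int64 // total fee in satoshis
	AllInputsMined bool  // every input is already in the UTXO set
}

var (
	errConflict = errors.New("txn-mempool-conflict")
	errUnmined  = errors.New("replacement spends unconfirmed inputs")
	errLowFee   = errors.New("replacement does not pay more total fees")
	errTooMany  = errors.New("replacement would evict more than 100 transactions")
)

// checkReplacement applies a simplified version of the replacement rules
// to a candidate against the package of transactions it would evict.
func checkReplacement(replacement Tx, evicted []Tx) error {
	if len(evicted) > 100 {
		return errTooMany
	}
	var packageFees int64
	for _, tx := range evicted {
		if !tx.Replaceable {
			return errConflict
		}
		packageFees += tx.Fee
	}
	if !replacement.AllInputsMined {
		return errUnmined
	}
	if replacement.Fee <= packageFees {
		return errLowFee
	}
	return nil
}

func main() {
	pkg := []Tx{{Replaceable: true, Fee: 2000}, {Replaceable: true, Fee: 20000}}
	fmt.Println(checkReplacement(Tx{Fee: 23000, AllInputsMined: true}, pkg)) // accepted
	fmt.Println(checkReplacement(Tx{Fee: 10000, AllInputsMined: true}, pkg)) // rejected: low fee
}
```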

Proposed Changes

Russell O'Connor published a proposal to change how the RBF rules work, at least two of them. The proposal would update the total-fees rule. Instead of a replacement needing to beat the absolute fee amount of all the transactions it would be replacing (aka the "package" of transactions), it'd only need to beat the effective feerate of the original. Additionally, the proposal would amend the 4th rule, such that the fee on your replacement is at least as much as the minrelayfee on the total package you're looking to displace from the mempool.[1]

Why is minrelayfee used as a minimum? A transaction that's replacing a larger set of transactions removes already transmitted bytes from the mempool. This rule change makes sure that the replacement transaction 'pays' for the cost of relaying those removed bytes.

Ok this is all pretty tedious. Let's take a look at some examples.

Miner Incentives, A Consideration

There are two cases that we should consider: that of a larger transaction wanting to replace a smaller transaction (small txn -> large txn), and that of a smaller transaction replacing a larger set of transactions, or package (large package -> small txn).

Current Rules

small txn -> large txn: Rule 3 stipulates that the total fees must be greater, with no regard to fee rate. In practice, no replacement is accepted if it lowers the total feerate of the mempool (source). This shouldn't happen much anyway: the motivation for RBF'ing a transaction is that the block inclusion feerate cutoff has spiked, and replacing one transaction with a larger one at a lower fee rate makes it less, not more, likely that your transaction will get mined in the next block.

large package -> small txn: The smaller transaction must pay more total fees than the existing package. The miner doubly wins: they're making the fees of a large transaction in a smaller byte footprint.

Proposed Rules

small txn -> large txn: Miner's choice strictly improves. The fee rate per byte that they're including has increased and the net fee of the new, larger replacement transaction is greater. This is no change from the current scheme.

large package -> small txn: Miner's choice also improves. Although the total fee that they will make for mining the smaller replacement transaction is net-net smaller than the fees the entire large package would have earned them, given a competitive environment for blockspace (ostensibly why the RBF was triggered in the first place), the smaller transaction with the higher per byte fee rate is more likely to be mined than the larger, lower fee per byte package it's replacing. The incentives of the miner (highest fee per block byte) and the RBF'er (having the transaction confirmed for the lowest reasonable fee) align.

Wherein We Contemplate a Word Problem

Let's take a closer look at the large package -> small txn case, as that's clearly the one where the proposed rule change has the greatest impact.

A 1ksipa-size transaction with a 10ksipa-sized child transaction is in the mempool. Both pay a feerate of 2 satoshis / sipa[2]. The total fees that these two transactions, or package, pay is 2ksat + 20ksat = 22ksat.

Under the current scheme, a replacement transaction of size 1ksipa would need to pay at least 23k satoshis, a feerate of 23 satoshis / sipa. This is an 11.5x increase in feerate from the original package's rate of 2 satoshis / sipa.

Under the proposed scheme, a replacement transaction of size 1ksipa would need to pay 12k satoshi in fees in order to replace a set of transactions of size 11ksipa. The effective feerate on the replacement transaction is 12 satoshis / sipa, a 6x increase in feerate above the package it's replacing.

The proposed ruleset strictly improves the feerate of the mempool, while lowering the fee ceiling for replacing a large or weighty transaction.
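The word problem's numbers can be checked mechanically. Under the current rules the replacement must beat the package's total fees, plus roughly the relay cost of its own bytes; under the proposed rules it must match the package's feerate on its own bytes plus pay minrelayfee on the bytes it removes. This is my reading of the proposal, sketched with the example's numbers (minrelayfee assumed to be 1 sat/sipa):

```go
package main

import "fmt"

func main() {
	const (
		pkgFees     int64 = 22000 // 2ksat + 20ksat paid by the package
		pkgSize     int64 = 11000 // 1ksipa parent + 10ksipa child
		replSize    int64 = 1000  // 1ksipa replacement
		minRelayFee int64 = 1     // sats per sipa, assumed
	)

	// current rule: beat the package's total fees, plus relay cost
	// for the replacement's own bytes
	currentMin := pkgFees + minRelayFee*replSize
	fmt.Println(currentMin, currentMin/replSize) // 23000 sats, 23 sats/sipa

	// proposed rule: match the package's feerate on your own bytes,
	// plus minrelayfee on the bytes you remove from the mempool
	pkgFeerate := pkgFees / pkgSize // 2 sats/sipa
	proposedMin := pkgFeerate*replSize + minRelayFee*(pkgSize-replSize)
	fmt.Println(proposedMin, proposedMin/replSize) // 12000 sats, 12 sats/sipa
}
```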

Notus Commentarius

RBF mechanics closely resemble that of an auction, where the rules for replacement are actually the next price that the auctioneer will accept a bid at. The current rules set the floor for the next bid to be extortionately high if the number of bytes you're looking to replace is quite large. Russell's proposed rule change lowers the bid floor to a more reasonable metric.

One of the largest arguments against changing the replacement fee rules, as far as I can tell, hinges on the argument that without a fee hike, anyone could spam the network with RBF requests, creating mempool churn and eating up network bandwidth. I'd argue that any RBF mechanism leaves an opening for this style of DoS attack on a node. The difference between these two proposals is not the mechanism, but merely the floor cost for waging such an attack -- at some point your transaction will be mined and the fees you've offered up will be paid. Further, the only case where this attack would be truly expensive is in the case where they're looking to replace a large number of bytes in the mempool -- perhaps that truly is the most likely DoS attack vector, however.

Thanks for sticking with me! Hope you enjoyed learning more about how mempool transaction replacement works! I left a few things off, but the main gist of how RBF works is all here.

[1] Russell O'Connor's proposed RBF rule changes (source Bitcoin ML) vs BIP125, the current RBF rules.
[2] A sipa is a byte/weight measurement. For simplicity's sake you can consider a sipa to be a byte.

#rbf #bitcoin #explainers
26 Dec 2018 c.e.
Blockchains Against Evil, Impressions

Takeaways from a blockchain ethics conference I attended earlier this month, Blockchains Against Evil

I attended a day-long conference/seminar earlier this month, that pulled together a bunch of people in the 'blockchain' space to talk about trends in the industry, especially around security and lawlessness.

The Event, Specifics

The event itself was held in a rented conference space off Divisadero, in San Francisco. There were about 30 people in attendance, if I had to guess. Most everyone who attended worked or invested in the 'blockchain' space. There was a good mix of job types and roles: programmers, investors, company-runners, cypherpunks, non-profit directors, etc. I knew a few people from the Internet, but most were new faces.

The day was split up into a bunch of round-table talks. I honestly can't remember most of the themes. I took notes, but I've since misplaced the notebook. I'm planning to write up a longer piece on the insights the discussions gave me that specifically related to privacy and secrecy and how cryptography and the state interplay in this, but that piece is far more ambitious than I have the time or inclination to reason through now. Much like my lost notebook -- it'll be dug out later.

Themes and Thematics

Instead, I'll leave you with a short overview of the most salient points that were discussed. Most of these are a paraphrasing of others' points and ideas. I take credit for only the spotty transcription.

  • Crypto has provided a secure mechanism for ransomware makers to get paid. The global nature of the web plus Bitcoin's ubiquitous reach[1] mean that ransomware is truly a viable attack for anyone who's got access to a Bitcoin wallet. This is all of you. Another lens to put on this one is that it's put a premium on securing networks of valuable data. If your data being inaccessible makes your work impossible, it's likely only a matter of time until you're a target for a ransomware play.

  • While ransomware has placed a bounty on your databases, Bitcoin and other Proof of Work currencies have placed a directly calculable value on a computer's CPU cycles. Previous hacking rings focused on skimming credit card numbers[2]; the past decade has seen more and more viruses that aim to steal compute power rather than credit cards or identities. That's because they can make money by stealing your computation cycles and electricity to mine crypto. I'd be curious to see stats on how the rise of ASICs has affected the profitability of botnet miners. Bonus points for an analysis that includes the impact of the recent price drop on said profitability.

  • Personal security is hard to measure. There've been several high profile cryptocurrency and 'blockchain' project attacks recently that involved getting a phone company to port a target's telephone number to a new SIM card, giving the attackers access to their SMS two-factor authentication backup codes. The general advice for avoiding this sort of problem is to ask your phone company not to port your number without being provided with a secondary PIN or the like; others at the conference had switched to Project Fi, Google's phone service, for the express reason that they don't have a customer support telephone number. (Personally, I already use Project Fi.) More generally, there seemed to be general interest in hiring a hacker to do a personal security audit. If you or someone you know runs this kind of a service, let me know. I'd love to hear more about what kind of people you work with and what your price point is for an individual investigation.

  • Demand for decentralized services historically has been rather complex, if not a bit on the weak side. Often, they crop up as alternatives to more centralized services when a core user group is pushed off of those services (i.e. music and film piracy, right-wing punditry, and most recently sex work with SESTA/FOSTA[3]). As difficult as it is, it's pretty wild to imagine existing in a fully decentralized world, one where no one has the power to deplatform anyone else. It's hard to imagine a world where everyone runs their own decentralized server, a la the Urbit dream. Curation and searchability seem like they'd be particularly high value services in this kind of world. It definitely would be heading into 'pure free speech' territory, of the likes we only dream of currently -- but remember, folks, that while speech may be free, slander is still illegal.

  • Personal anonymity. What right do you have to decide who and what can see where your money is going? I've got a lot of unfinished thoughts on this that I'm hoping to put up later in a separate piece. If and when I do, I'll update this to link to it.

  • Closely related to that, do anonymous payment networks breed demand for dark market goods? I'm talking about child pornography and buying hitmen for untraceable cash. I think the recent Epstein revelations[4] point towards no, vice isn't necessarily driven by access to invisible money. Honestly, if anything it's moving illicitness from the cash economy to the digital economy. Cash is largely untraceable. If you lose it in a fire, it's gone. In some ways, this is oddly similar to problems with keeping private keys and wallets safe for digital cash. But I digress. To what extent has a traceable money supply kept people from exercising base desires that a lack of traceability now enables? Again, I think this is smaller than we suspect, but maybe I'm wrong. If anything, I think dark money and the dark Internet (Tor) have made buying illegal drugs and child pornography much easier than they were in the past, but does ease of use drive volume? These things are still illegal. I'd love to read a study on the impact of digital darkness on illicit good trade, though I imagine hard numbers on this are hard to come by.

In Exitus

Digital money has created huge new opportunities for criminals and privacy lovers alike. I feel like the cat's largely out of the bag with the existence of digital money systems such as Bitcoin and Zcash (and Grin soon!). I'd love to see personal and institutional privacy and security become both more widely understood and practiced -- though at its core this problem involves an even greater investment into even basic computational understanding.

Will we, as a society, be able to educate ourselves fast enough to protect our systems and selves against the rising tide of spying nation states and exploitative hackers? I guess we'll find out.

I really enjoyed spending a day hearing about the ins and outs of blockchain ethics. I'm really grateful that there are people in SF who want to have these conversations, and who went so far as to organize a space where we could discuss them. Huge <3 to all the organizers and other attendees that made the day incredibly worthwhile.

[1] By Bitcoin I really mean any value-acknowledged cryptocurrency.
[2] See the story of The Iceman
[3] A lot of this discussion hinged on the stuff John Backus has been digging up lately, I really like his article on Music Piracy
[4] The man basically ran a prostitution ring for wealthy and well-connected men, from a cadre of underage women that he developed. Miami Herald has the story.

#blockchains #conference-swag #impressionism