28 Nov 2018 c.e.
Getting AMPed Up or Reflections on Lightning post Adelaide

I've recently been thrust head first into my first open source software ecosystem. I love it; I also feel like I'm struggling to contribute anything worthwhile because I've been spending so much time just getting up to speed -- the particular subsystem of software that I've landed in is incredibly complex and has a bit of scattershot documentation, spread across a couple of mailing lists and two enormous projects.

I want to give some meta commentary on the mechanics of getting involved in a new, active space, and then give a more nuts and bolts overview of the considerations that are shaping the edge of Lightning at the moment. I'm sure I've left things out, so know that my list is just a subset of all the things.

Finding Active Edges

There's a difference between getting up to speed and active in a currently evolving field versus learning a topic or subfield that's pretty much static. By way of example, I'd largely consider calculus and functional programming, as fields, to be pretty static, i.e. there's interesting stuff happening at the margins, probably, but there's not a lot of paradigm shifting research going into how to describe functionalism or what a second derivative is. As a field and practice, the borders of meaning and scope have largely been well defined.

'Active' spaces are different. They have action: people actively working on new approaches, building out software and new ideas. The presence of people and the messiness of definition and conversation are beacons pointing to the interesting, new things the future will hold.

Arriving at an edge or beehive of activity where there are people working is like descending into a bit of chaos. In an active field, there's usually a lot of independent research and motivations and interests that keep the actors on this edge a bit spread out. Figuring out where the edges lie is difficult because the definition of the edge is its lack of a roadmap. Sometimes you can find artifacts that strictly define at least a subset of those edges -- the wiki tracking decisions made at the Lightning Summit in Adelaide two weeks ago is one such example.

I was lucky with Lightning, in a lot of ways. The biggest one is that due to the team I joined, I have a lot of direct access to people that have been working on the edge of the space basically since the beginning (h/t to cdecker). The other is that I joined just in time to attend the latest spec update meeting. These meetings are rare -- the last one happened over two years ago in Milan for the first lightning spec.

I'm not going to talk directly about what happened at the meeting; if you're interested, check out the lightning mailing list, where we're currently in the process of hashing out the decisions made at the summit (which you can see here), or take a look at the PRs currently in progress on the lightning-rfc GitHub project.

Rather, I'd like to give some really meta impressions of what kind of thinking it takes to get involved in a project like Lightning -- hopefully this metaness will give you a portrait of what kind of conversations you need to be having or questions you should be looking to get answered when getting involved in a new field.

First off, it's hard to contribute to a field if you don't really understand the underlying system that it's operating on top of. Sure, this is easy enough to say, but just figuring out the contours of the system that define the problem space can be tricky. A lot of the stickiest problems that Lightning developers deal with, especially when looking to expand the protocol or improve the experience, are either limitations in the underlying Bitcoin protocol or a self-imposed mandate for privacy. If you don't have a good grasp on the goals of Lightning with regard to privacy (keep it, as much as possible), or a pretty deep knowledge of how Bitcoin itself works, you're not going to be able to contribute much to the conversation around Lightning -- mainly because you're going to struggle to even understand, let alone communicate with, people who are already working in the space.

I'm an incredibly quick study, but still relatively new to the Bitcoin and Lightning space. My largest contributions to date can mostly be summed up as asking clarifying questions. This may seem trivial, but I've come to see that it's an important contribution nonetheless -- comprehensibility is an incredibly important aspect of a system that needs and wants newcomers to both feel welcome to the space and able to contribute. And Lightning definitely could be more comprehensible!

Into the Deep

With an eye to making the Lightning space a bit less opaque, I'd like to run through a few of the higher level considerations that seemed to come up with some frequency during the weeks leading up to and at the summit itself. I think it's safe to say that these themes will be continuing problems and on-going discussions in the Lightning ecosystem.

Bitcoin

Bitcoin protocol limitations come in a variety of flavors. Here's a quick, condensed (and definitely incomplete) rundown of things in Bitcoin that hold up or complicate Lightning feature development:

  • Fees. Lightning is a 'second layer' protocol, sure, but at some point it has to publish transactions on the Bitcoin blockchain. Lightning's security mechanisms (ie your ability to successfully pull your money out of a channel) rely on the ability to get a transaction into a block within a reasonable amount of time. Lately, this hasn't been a problem, but if and when fees spike, there's a lot of potential to run into trouble if your transactions aren't able to get confirmed. Fees are complicated by the fact that 1) there are two parties involved in creating and spending all the transactions, 2) commitment transactions are usually composed, signed and stored long before you might actually need them, and 3) economic incentives mean that you're probably looking to pay the smallest fee possible to accomplish what you want, which puts you in a bad position when it comes to getting your transactions on chain during a fee spike. Lightning as a protocol would like to move away from the business of needing to know what the fees should be, but that means we're going to run into another corner case of the Bitcoin transaction ecosystem...

  • RBF and CPFP. If you're not deep in the Bitcoin wallet management weeds, there's a good chance you've never heard these acronyms before. Briefly speaking, these are two mechanisms that the Bitcoin protocol provides for getting a transaction through that has been pushed to the back of the queue for inclusion in a block (mined/confirmed, etc) because of a fee spike. RBF stands for Replace By Fee, whereby you basically re-issue a new copy of a transaction, but one with more fees per sipa[1]. CPFP means Child Pays For Parent. It takes advantage of the chained nature of Bitcoin transactions, and attempts to 'sweeten the deal' for miners such that they'll mine your first, low fee transaction in order to also be able to mine a high fee child transaction. The parent plus child chain is typically termed a 'package' -- there's a small worked example of the package fee math just after this list.

  • Schnorr. What is Schnorr? Schnorr is a proposed change to multiparty signature composition. Including it in Bitcoin will require a revision of the signature verification mechanisms. In addition to more compact and easier to verify signatures, Schnorr unlocks a certain amount of obfuscation and script burying. Schnorr can make Lightning channel openings invisible on chain (right now they're a bit easy to spot[2]). There are a few other nice things that Schnorr signatures enable, that I don't exactly remember the details of, but they'll allow Lightning to send payments in parts more easily and securely[3].
  • Script Sighash Flags. Christian Decker's been spearheading an effort to update the way that Lightning balances are enforceable on chain. (The updated protocol is called Eltoo; you can read more about it in this high level article I wrote, or the paper itself, if you want something a bit more in depth.) This requires a change to Bitcoin script, specifically the addition of a new sighash flag called SIGHASH_NOINPUT[4][5]. Work on the new, improved state management protocol is basically stalled until this gets merged into the Bitcoin reference implementation. On another note, there are some other boutique, existing sighash flags that will probably start being utilized by Lightning transactions as part of the attempts to dodge the fee problem. Watch this space.
  • Transaction malleability. This is an ancient problem now in Lightning land, as it was resolved when SegWit landed. If you're going to be doing Lightning, you should know how SegWit works, as that's the only type of transaction protocol that Lightning wallets speak. As a historical note, transaction malleability basically refers to how fixed the transaction hash is. Lightning, in its current form, requires the guarantee that the hash of a signed transaction can't be changed (by a miner or the other party, etc). SegWit fixed this -- it's practically never mentioned now. In other words, this problem has moved off the edge, largely because it's settled.
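
To make the CPFP fee math a bit more concrete, here's a toy Python sketch. The fee and weight numbers are completely made up (real wallets count satoshis per virtual byte or per weight unit); the point is just that a miner judges the stuck parent and the fresh child as one package.

# Toy numbers, not real wallet code: a stuck parent transaction and a child
# that spends one of its outputs with a much larger fee attached.
def fee_rate(fee_sats, weight):
    return fee_sats / weight

parent_fee, parent_weight = 200, 800     # low fee rate, stuck in the mempool
child_fee, child_weight = 5000, 600      # child pays a big fee

print(fee_rate(parent_fee, parent_weight))                             # 0.25, not attractive
print(fee_rate(parent_fee + child_fee, parent_weight + child_weight))  # ~3.7, worth mining both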

Privacy

This feels like one that's taken for granted more than most things, but it largely informs a lot of architectural decisions that get made. Maintaining privacy is important, and it manifests itself in a bunch of ways. Here's a short list of things that privacy considerations impact.

  • Error handling. How do you know who bungled your payment?
  • Payment correlation / decorrelation. Can an observer figure out whether payments sent over different channels, or the same payment sent at different times or over different routes, are actually the same payment?
  • Getting a clear picture of current network health. It's hard to measure a payment success rate if the payments themselves are localized and unreadable.
  • Autopilots. How much information should nodes reveal, to help other nodes figure out who to connect to?
  • Anything that might leak private or proprietary information, including but not limited to: channel balances, node wallet UTXOs, payment origination, payment destination.

Other assorted things

  • Liveness. Payments can get stuck if nodes along the route aren't responding. This is particularly bad if a payment has to 'go to chain', ie be finalized via the blockchain.
  • Liquidity. Lightning payment capacity is a constantly mutating DAG. Channels' total value is known, but the balance of funds within that channel is often kept secret (see Privacy, above). This makes it hard to predict which routes will fail until you try them -- the advertised channel capacity may be pointing in the wrong direction. This is exacerbated by the fact that channel funding is one-sided at the moment. Splicing and dual-funding will help this problem.
  • How important are receipts? This deserves a much longer post and honestly I need to do more research around it; I won't get into it here.

In Exitus

I'm having a great time.

[1] A sipa is another term for a kiloweight, which is a Bitcoinic way of weighting the bytes in a transaction to calculate its fee rate. As a general rule, miners prefer transactions with the highest fee rate per byte. If a fee rate spike is happening, you're going to want to up your transaction's effective rate.
[2] As an aside, we greenlit work on a different signature scheme (a two-party, single-ECDSA-sig algorithm) that can let private channels remain invisible on chain. Nice because it doesn't rely on Schnorr.
[3] There's been a lot of discussion around AMP (base AMP, OG AMP, low versus high AMP). This deserves a longer discussion, but know that Schnorr sigs will provide a way to do split-payments with fewer drawbacks than any of the current proposals. In fact the coming of Schnorr is a background vibe underpinning a lot of the discussion, as it makes the timeline question more important.
[4] I believe the final name is settling somewhere near SIGHASH_NOINPUT_UNSAFE for #reasons.
[5] What's a sighash flag, you ask? Briefly, it's a bit that's added to a transaction signature that tells the verifier which fields in the transaction the signature signed. You can read more about them here.

#lightning #bitcoin #oss #edges
22 Nov 2018 c.e.
A Brief Love Letter to XOR

I'm taking an online crypto class[1] right now, and it's been forcing me to get more intimate with the bitwise operator XOR. On top of being incredibly lightweight, there's a few really cool things that XOR can do.

In the spirit of the Thanksgiving season, here's a brief love letter to my favorite little boolean operator, XOR.

What is XOR?

XOR stands for 'eXclusive OR', where 'or' refers to the boolean logic operation. What does that mean, a boolean logic operation? Briefly, it's what conclusion you draw from two truth values. It's kind of like a predetermined agreement mechanism. Boolean logic is a rule that you apply to two results, to resolve those two results to a single true or false.

A simple example is probably helpful. Let's say that we've got two voters, and we're trying to take their two votes (either YES or NO) and return a single decision for the 'election'. How these two imaginary voters' votes are counted is the role of the Boolean logic operator.

There are two fairly common boolean operations that you might have heard of before: and & or. The decision for 'and' is fairly intuitive: if both voters vote YES, then the result is YES. Otherwise, the result is NO. We'll only get a final YES vote if both of the people we're asking say YES. If either voter votes NO, the final result from the boolean operation will be NO. The 'and' decision framework requires 100% agreement.

'Or', on the other hand, says that if either voter says YES, then we'll take the result to be YES. The 'or' decision framework requires only one single 'voter' to say YES in order to return a YES.

So what is XOR? XOR only returns true if the voters disagree. If both voters say YES, 'xor' will return NO. Same thing if both voters say NO: 'xor' will still return a NO. It's only when one 'voter' has chosen YES and the other NO that XOR resolves to a YES. It doesn't matter which voter says YES and which one NO, as long as the voters disagree XOR returns YES.
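
If seeing the three 'voting rules' side by side helps, here's a tiny Python loop that prints the full truth table, with 1 standing in for a YES vote and 0 for a NO vote (Python's &, | and ^ are its bitwise and, or, and xor operators).

# Print the and/or/xor result for every combination of two votes.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  and={a & b}  or={a | b}  xor={a ^ b}")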

Why is it called exclusive or? Great question. I have no idea, but you can probably find out on the Internet.

XOR As Your Encryption Friend

XOR does some pretty fancy things. If you take a series of bits and XOR it together with another series of bits, the original series of bits can be retrieved out of the resulting string, but only if you know what the second series of bits was. It's almost impossible to tell what the original bit series was. Here's a quick example, to show you what I mean.

// If I take the bit series 0101 and XOR it with 1010  
0101 xor 1010 =  1111

A result of 1111 doesn't tell you what bits belong in which of the strings that you xor'd together. You could have xor'd 1111 with 0000. Or 1100 with 0011. But! If you do happen to know one of the inputs, you can easily extract the other.

// If I know 1010 and the result, 1111, I can extract the other input   
1010 xor 1111 = 0101

This is incredibly useful in cryptography. If you take a message and XOR it with a 'secret key' (a random series of bits) that is the same size as your message, voila, your message is now encrypted. If your 'secret key' is a random enough series of bits, then it will be practically impossible for anyone to know what the original message bits were. To decrypt this message, all you need is the encrypted message and the key that was used to encrypt it.[2]

// How to encrypt a message   
message xor key = encrypted_message  

// How to decrypt a message  
encrypted_message xor key = message
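
Here's the same idea as a few lines of runnable Python, a minimal sketch of the XOR cipher described above (the message is just an example, and os.urandom stands in for 'a random enough series of bits'):

import os

message = b"attack at dawn"
key = os.urandom(len(message))    # secret key, same length as the message

# encrypt: XOR each message byte with the matching key byte
encrypted = bytes(m ^ k for m, k in zip(message, key))

# decrypt: XOR the ciphertext with the same key to get the message back
decrypted = bytes(c ^ k for c, k in zip(encrypted, key))
assert decrypted == message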

Other XOR Magic

XOR has a little bit of 'magic' that happens when you use either a set of all 0's or all 1's to XOR against.

You can 'bitflip' any series of bits by XOR'ing it with a series of 1's.

// Flip a bit set!
111000 xor 111111 = 000111

XOR'ing by a set of 0's is an 'identity function' -- it'll return the same series of bits as what you originally XOR'd in. It's probably not a good idea to use a set of 0's as your encryption key -- it'd be like putting your message behind a piece of glass. XOR'ing by 0 is transparent!

// Show me the same!
111000 xor 000000 = 111000

In Exitus

The next time you use encryption to send a message with a friend over the Internet, give a little thanks for your crypto workhorse bestie, XOR.

[1] Dan Boneh's Crypto I on Coursera
[2] This method of encryption is generally called the One Time Pad encryption, as the key is as long as the message. So long as you never reuse the same key on a different message and your key is a random stream of bits, this method of encryption (xor'ing the message with a key) has what's known as perfect secrecy. The biggest, practical problem with this method of encryption is that the person decrypting your message needs to know the key. You'd need a secure way to send them the key, as anyone who gets the key can then decrypt the message. The key is as long as the message though! If you have access to a secure method of communication that can transmit something as long as the message, you should just send the message itself over that secure communication channel. It's just as long, and your chatting partner won't have to decrypt it. This equal length key problem is why they say that perfect secrecy is practically impractical.

#xor #boolean #love-letter
20 Nov 2018 c.e.
iOS First Impressions

A long time Android user, I made the switch over to my first iOS phone this week. I've never used any Apple phone before, in any true capacity, despite knowing a good number of iOS devs. I'm excited to finally see their work. Here's a few of my first impressions on the platform!

The Gestures are Intuitive

I've watched other people swipe their way through iOS interfaces and wasn't really all that confident that I'd be able to figure it out. Surprisingly, it didn't take me all that long to get to a point where I could get them working. I did need someone to show me how to get to the notifications screen though -- I kept landing on the screen with all the widgets instead. Otherwise, they're pretty great. I especially love how the camera and flashlight buttons on the lockscreen feel like actual buttons.

Getting Back is Hard

Sometimes I end up back on the 'home' screen and it sucks. Luckily, apps seem to have a really good memory of where you left off, so tapping into them from the home screen is super intuitive. Unlike Android, where tapping the home icon has fairly unpredictable behavior, based on how they programmed the original launch intent to work. Flexibility is nice, but this is one place where having a predictable user experience is really reassuring.

Buttery Smooth

Everything animates so smoothly. It's incredible. The way chat bubbles slide around on the page. The smooth swiping motion I can make in the Twitter app and get back to the previous page. I can't get over how great it is, how pervasive. Everything moves in beautiful ways. This phone is an absolute delight to interact with.

Moving All My Settings Over from Android

I tried to use the Move to iOS app to get all of my accounts and things moved over from my Android phone, but couldn't get the bluetooth pairing to work. I suspect there was something wrong with my Android phone, as I also had trouble when trying to pair it with my Garmin running watch. I eventually ran out of patience and went ahead and set up the phone without it, only to realize later that there's no way to come back and make it work without wiping the phone entirely. Luckily, most of the Google apps transfer over pretty cleanly. That's been nice!

The biggest exception would be the Signal app. Switching cellphones changed my safety number, so now I can't use Signal on the Android phone as well. I also had to re-link my desktop app since I switched phones. I really thought it'd work as a secondary device that I could just add to my account, but it seems that the whole ecosystem is pretty strongly tied to a concept of there being a Single, Blessed install of the Signal phone app. Kind of a bummer for wanting to be able to switch between phones on the reg, as your messages don't get propagated between devices (and you'd have to reregister every time you make the switch). I don't think I'll be switching that often, but it is a bit of a bummer nonetheless.

Switching SIM Cards

I use Android, which also means that I use Project Fi. I spent a decent amount of time and effort researching alternative ways to use a different phone provider but T-Mobile was hellishly expensive (the iPhone is locked to T-Mobile) and there wasn't a clear cut solution for what I really wanted to do (have one number ring two phones). I really want to be able to keep my phone number the same, so that I'm easy to reach by anyone, anywhere, but Project Fi isn't supposed to work with Apple phones. Turns out that it does work, somewhat. I hear there's limitations (it only uses the T-Mobile network, none of the international data works), but since I'm not planning to get rid of my Android phone anytime soon, I should be able to switch back without too many problems.

How to Share Things

I'm still pretty confused with what that arrow out of a box even means. I hate it. It's ugly. I don't like it. Someone make it go away.

The Notch and Other Unaesthetic Things

The title for this section is a lie. There is only one unaesthetic thing that I've observed so far about the iPhone XS that I've got, and that's the notch. It's terrible and you're lying to yourself if you think otherwise. I can smell the Stockholm Syndrome from here.

Discovering Which of Your Friends Are Discriminating Assholes

"Hey you're blue now! Whoohoo". Fuck you. Fuck all of you.

That color discrimination runs deeper than you think, man. At the last company I worked at, the full time employees had blue badges. The contractors' badges? Green.

You're Not Getting My Face

Or my finger prints. This is platform independent, but it does suck. I'm pretty anti-dead man switches in general, as in anything that lets you into my phone when I'm dead or otherwise incapacitated is generally off limits. I hate how sexy smooth the login experience looks though. I also resent how they only switched to face detection (and away from the equally problematic fingerprint scan) because they needed more screen space.

They got rid of the fingerprint scanner but they couldn't get rid of the notch. Terrible.

In Exitus

I'm incredibly impressed at how easy it's been to switch over, even without the Move to iOS app working as intended. In a lot of ways, this is because Google has made so many of their apps available for iOS! Thanks Google.

All in all, I'm a little embarrassed at how long it's taken me to give iOS a try. I really love it. I feel a bit bad for how quickly I've come around to liking it, given how staunch and how deeply entrenched of an Android user I've been. I've always known that the design practice at Google left a lot to be desired, but seeing and experiencing an iOS machine in practice has really been eye opening to how many misses Google made at some really serious decision junctures.

Or maybe Apple just patented all of it. Assholes.

#iOS #first #impressions #android
29 Oct 2018 c.e.
Understanding Eltoo

Simplified Channels, Simply

This article assumes base knowledge of the existing Lightning Network contracts and Bitcoin transaction composition. This is a lot of base understanding to have, and in fact, I'd argue that it's probably the biggest challenge to fully understanding what eltoo is really getting at.

That being said, I'll do what I can to explain it such that it's understandable.

Let's start by first understanding how the existing Lightning network contract invalidation system works.

The original Lightning protocol relies on a series of half-signed transactions. When the channel balance needs to be updated, you exchange a new set of half-signed transactions that update your balance. In order to keep your channel partner from broadcasting an old, invalid transaction that you've signed, every time that you exchange a new, updated transaction that reflects the current state of the payment balances, you also exchange a 'penalty' transaction, of sorts, that allows you to claim all of the Bitcoin in the channel, if the other person in the channel accidentally or intentionally publishes an old transaction state.

Each of these exchanged transactions spends the same output -- the one created by the Funding transaction.

It'd probably be useful to spend a bit of time here talking about how Bitcoin transactions work, as it'll be handy when we get into eltoo. Every Bitcoin transaction is a global state update. It takes existing, unspent output objects, spends them by providing a signature that proves you can spend them, and creates new unspent output objects. The set of previous outputs that your Bitcoin transaction "uses up" are called inputs. Every input is a previous transaction's output.

Let's bring this back to the Funding transaction then. A funding transaction has a single output. This output can only be spent by providing two signatures, one from each party in the channel. This is called a 2-of-2 multisig transaction. The funding transaction is committed to the Bitcoin blockchain, which then makes this Funding output eligible to be spent by the channel parties. The only way these funds can be spent is if both parties sign a transaction. The channel balance is updated, then, by creating new, ephemeral transactions that spend this output, re-apportioning the total value in the channel to each party as a reflection of their current balance.

As a concrete example, let's say you and I wanted to create a Lightning channel between ourselves. I'm going to offer up 2 Bitcoin, you're putting in 1 Bitcoin. We'd make a funding transaction that takes two inputs: my 2 bitcoin and your 1 bitcoin, and creates one output of 3 Bitcoin. This 3 Bitcoin can only be spent by a transaction that has both of our signatures on it.

To record what the original balance is, we'd then create a transaction that has, as an input, the 3 Bitcoin funding transaction output, and that pays out two outputs: one to me for 2 Bitcoin and one to you for 1 Bitcoin. It's a lot more complicated than this, but for the sake of understanding how eltoo works, this simplification will suffice. The whole point of Lightning is that you and I can now Do Business between each other. I buy you lunch, worth 0.25 BTC. Rather than paying me back with Square Cash or Venmo, we could just create a new transaction that spends the funding transaction, throwing away or invalidating the first one that we exchanged. The new, updated transaction would reflect the new balance of accounts between us: it'd pay me 2.25 BTC and you'd get .75 BTC.
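
As a toy illustration of the bookkeeping (and only the bookkeeping -- this is nowhere near what real commitment transactions look like), here's the lunch example in Python. The helper is hypothetical; the point is that every state re-divides the same 3 BTC funding output.

FUNDING_TOTAL = 3.0   # 2 BTC from me + 1 BTC from you

def new_state(my_balance, your_balance):
    # every state spends the same funding output, so the totals must always match
    assert my_balance + your_balance == FUNDING_TOTAL
    return {"me": my_balance, "you": your_balance}

state_0 = new_state(2.0, 1.0)     # the original balances
state_1 = new_state(2.25, 0.75)   # after you owe me 0.25 BTC for lunch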

The problem is that the first transaction we exchanged, with the original balance values, still exists. In it, I get just 2 BTC and you still get your original 1 BTC. Let's say that you decide you'd like to stiff me the lunch I bought you (jerk), so you publish the original transaction onto the blockchain. I wouldn't be able to publish the later transaction that actually reflects the balance between us, because I'd also be trying to spend the same output from the Funding transaction. You can't do this -- only one Bitcoin transaction can spend a single output. So I'd be shit out of luck.

The original Lightning proposal solves this problem of past state transactions getting published by introducing a concept of penalties. Without going into too much detail, it effectively gives me the ability to penalize you for reneging on our lunch deal, by spending a special output from the stale state transaction that you published. The penalty for your actions is built into the outputs of the state transaction, and basically gives me the ability to take all of the Bitcoin in the channel for myself. So the punishment for trying to take back your .25BTC is a total loss of all of your channel Bitcoin.

Crime doesn't pay, at least not with lightning.

Every time we want to update the balance of accounts in the channel though, we need to exchange new penalty transactions, that invalidate the previous state. Or not really invalidate it, but provide a huge disincentive for you, if you decide to publish it anyway. Half of the security of the system, then, relies on you and me saving all of the penalty transactions that we exchange, because they map 1 to 1 to expired or invalid transactions. If you lose the penalty transaction for a particular old, invalidated state, and for some reason the other party finds out that you've lost it, they can publish that old transaction and you wouldn't be able to do anything other than accept your loss.

In that way, Lightning, as it exists today, largely resembles a practical implementation of the theory of mutually assured destruction. If you lose or reveal your nuclear arsenal (in this case, a set of transactions that invoke penalties for unfair actions on behalf of your adversary aka channel partner), you're shit out of luck.

As any Soviet-era superpower knows, nuclear arsenals are costly to maintain. Here's where eltoo comes in. eltoo is an elegant proposal to do away with private arsenals of penalty transactions. In fact, eltoo does away with penalties entirely. Rather, it provides a mechanism for allowing any later agreed upon state to override any previously agreed upon state. As long as you have a copy of the most recently agreed upon transaction, you can publish it at any time and it's guaranteed to be spendable.

The key to this is being able to decouple a transaction from any specific output. The original Lightning transaction scheme relied on all transactions spending from the Funding transaction. Instead of pegging a signed transaction explicitly to a prior one, signed transactions can spend any transaction for which they have a valid spend script. These types of transactions are called floating transactions.

This is huge, because it gives you the ability to 'fix' the channel balance. Previously, if your channel partner published a stale balance transaction, you couldn't do anything to 'fix' it because all of the existing state transactions that you have spend the same output: the funding transaction output. With a floating transaction, however, you have the flexibility to spend either from the funding transaction, or from any previous state transaction.

Let's go back to the previous example. I've bought you lunch; the most current transaction that we share between us says that I get 2.25 Bitcoin, you get .75 Bitcoin. You decide to publish the older transaction, where you get 1 Bitcoin and I get 2. The transaction spends the funding transaction output. In the original Lightning scheme, I can't fix this because all of the transactions that I have must spend from the funding transaction, and the funding transaction's single output has now been spent, by you. With an eltoo floating transaction, I can broadcast the most up-to-date transaction, spending not from the funding transaction output, but from the old state transaction that you've just published. This flexibility as to which transaction you're spending is what makes eltoo so elegant. There's no need for punishment, because old state updates are fixable -- you just spend them again, ideally with the most up-to-date balance.

How does eltoo do this? By removing the concrete identifier of an input to a transaction. To do this, you'd construct a transaction using the flag SIGHASH_NOINPUT, which means that the signature doesn't include the unique identifier of the input it's spending. Currently, this isn't a part of the Bitcoin spec; there's a proposal out for its inclusion.

Once a transaction can spend any other transaction in a channel though, what's to keep your cheating partner from just over-writing the correct channel balance with another, older transaction? eltoo solves this in a hackishly elegant manner, by repurposing the nLocktime field that already exists as a part of the transaction format. The locktime field already serves a dual purpose: it either limits the spending of this transaction by a required blockheight, or by a timestamp. If the locktime is beneath 500 million, it's assumed to be a block height lock. Anything above this is a timestamp lock. Any number above 500 million but beneath the current timestamp, then, can safely be used as a series number for a set of eltoo channel transactions, as they will be immediately spendable. Eltoo transactions can be ordered by locktime. Update transactions are configured such that they can only be spent by a transaction that has a higher locktime than their own, thus preventing an earlier transaction from overriding a later, committed transaction.[1]
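
Here's a rough Python sketch of that ordering trick. The 500 million threshold is the real nLocktime cutoff between block heights and timestamps; the rest is illustrative bookkeeping only, since in eltoo the 'only a later state can spend me' rule is actually enforced by the script via CHECKLOCKTIMEVERIFY, not by application code.

LOCKTIME_THRESHOLD = 500_000_000   # below this, nLocktime is a block height; above, a timestamp

def update_locktime(state_number):
    # encode the update's sequence number as a timestamp-style locktime in the past,
    # so the transaction is immediately valid but still carries an ordering
    return LOCKTIME_THRESHOLD + state_number

def may_replace(old_state_locktime, new_state_locktime):
    # a later update can spend an earlier one, never the other way around
    return new_state_locktime > old_state_locktime

assert may_replace(update_locktime(7), update_locktime(8))
assert not may_replace(update_locktime(8), update_locktime(7))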

eltoo is incredibly elegant in its simplicity. Floating, ordered update transactions obviate the need for arsenals of penalty transactions. They also do away with the need for secrecy -- any update transaction is valid and publishable. This greatly reduces the memory overhead needed for each channel that a lightning server maintains, as old state update transactions can be safely discarded.

There's a bit more nuance to update and spending transactions that I've largely glossed over. You can read more in the eltoo paper itself. There's a lot of other really nice features that eltoo adds, like making multi-party channels feasible, and a better mechanism for dealing with fee spikes.

As it stands currently, eltoo won't be possible until the SIGHASH_NOINPUT flag has been added to the spec, but I'd be surprised if it isn't included within the next few months to a year.

For further reading on the original Lightning spec, see Poon and Dryja's paper, The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments.

[1] The key to understanding how this works is to know that the CHECKLOCKTIMEVERIFY doesn't compare with the actual time, but rather to the nLocktime specified in the transaction. See BIP65.

#bitcoin #lightning #eltoo #explainer
25 Oct 2018 c.e.
Reproducible builds with Bitcoin, Tor and turtles

Within the last few years, modern open source software, particularly ones that deal with vulnerable or important systems[1], have worked to make the built binaries that they publish for download verifiable. A binary is considered verifiable if anyone can download the source code, build it, and end up with a binary that exactly matches the publicly available ones.

Ending up with an exactly matching binary, however, is a non-trivial task. As such, software projects such as Bitcoin and Tor have undertaken the project of making their builds deterministic and reproducible.

There's two parts to ensuring deterministic, reproducible builds. The first is to eliminate any amount of non-determinism from your build itself. The second is to remove any amount of non-determinism from your build environment.

The first is typically managed by removing timestamps, fixing the order of outputs or inputs via a stable sorting algorithm, and stripping out variable version information. You can see a more thorough treatment of the various sources of non-determinism in builds on the Reproducible Builds working group's website, under the heading "Achieve Deterministic Builds".
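
As a small illustration of that first kind of fix, here's a hedged Python sketch of packaging build output deterministically: sort the file list and pin ownership and modification times to fixed values, so two runs on different machines produce byte-identical archives. The 'build-output' directory name is hypothetical.

import os
import tarfile

def normalize(tarinfo):
    # strip everything that varies between machines and runs
    tarinfo.mtime = 0
    tarinfo.uid = tarinfo.gid = 0
    tarinfo.uname = tarinfo.gname = ""
    return tarinfo

with tarfile.open("release.tar", "w") as tar:
    for name in sorted(os.listdir("build-output")):   # stable, sorted ordering
        tar.add(os.path.join("build-output", name), arcname=name, filter=normalize)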

The second, removing non-determinism from your build environment, typically entails creating a clean room container that the build will take place in.

As part of an investigation to better understand reproducible builds, I spent some time walking through both the Bitcoin and the Tor project's reproducible build process. What follows is a short overview of how the Bitcoin reproducible build process works and a comparison between the Tor project's build process and Bitcoin's. Finally I'll talk a bit about turtles, a work in progress project that takes the trustworthiness of reproducible builds a step further.

Building Bitcoin

Bitcoin, in keeping with the spirit of the project, relies on a public, multi-party verified binary. Any individual can download the source code for Bitcoin Core from GitHub, check out a tag, and then build the project. Gitian, the build verification tool that Bitcoin makes use of, outputs an assert file that lists all of the inputs, outputs, and packages used to build the source, along with the SHA256 hash of each of them. An independent verifier then signs this file with their PGP key and submits a pull request to the gitian.sigs repository. I did this a few days ago, for the linux and windows versions of the binaries; you can see the PR I submitted here. It's got two assert files, one for the linux binary and one for the unsigned Windows binary, plus two separate PGP signature files, which are just the assert files signed with my PGP key.

Bitcoin's reproducible build setup uses a script-wrapped version of the Gitian builder. There's a couple of steps involved in setting up your machine to do a reproducible build and this doc in the bitcoin-core notes does a good job of walking through the specifics; I'll go over them briefly.

First, you need to decide how and where you're running Gitian itself. If you're running a Debian/Ubuntu distro you can set up a secondary user on your computer to 'host' Gitian builds. You can also set up a virtual machine (via Docker or VirtualBox) to host Gitian builds within. I'm sort of lazy, so I went the route of setting up a secondary user on my machine to run Gitian in.

Once you have Gitian configured, you can start the Bitcoin build process. Mostly, this involves cloning the bitcoin gitian.sigs repository (where it'll create the output assert files for you) and then running gitian-build.py, a script included in bitcoin/bitcoin's contrib directory.

gitian-build.py acts as a wrapper around gitian-builder that organizes the build process into a few general tasks: setup, build, verify, and sign. Once you have Gitian set up, you can run the verified build process with the following command, where username is your github handle, and 0.XX.0 is the source tag that you'd like to build and verify.

./gitian-build.py --detach-sign --no-commit -b username 0.XX.0

The gitian-build.py Bitcoin script has a couple of extra options; for instance, using -B instead of -b will also build signed versions of the Windows and OSX binaries. To be honest, I'm not sure what the difference is between a signed and non-codesigned binary, but there are options for both!

You may notice that my PR for verifying the 0.17.0 Bitcoin binaries is missing the OSX binaries; in order to compile the OSX binaries (particularly on a non OSX device) you need to install extra packages.

There's a few more pieces of Bitcoin's Gitian setup that I should mention. Both the gitian builder and the gitian-build.py scripts are wrappers for ordering and running the actual build scripts. So where are the actual build scripts for bitcoin? They're defined as a set of YAML files, in bitcoin/contrib/gitian-descriptors. There's one for each of the build targets (linux, windows-nonsigned, windows-signed, mac-nonsigned, mac-signed). The gitian-build.py file is hardcoded to load these YAML files into gitian-builder, depending on what options you've passed it and your current system setup (ie do you have the OSX dependencies downloaded?).

Now that we've got a pretty good idea of the setup that Bitcoin's established for the build, let's talk a bit about how Gitian itself works. Gitian will spin up a container (defaults to KVM, but can also be configured to use LXC), download the listed dependencies into it (these are specified in the YAML file), and then run the build script (also outlined in the YAML file).

After the build has completed, Gitian's gverify script is run against the built binaries, which outputs an assert file. These are the files that you sign and upload to Bitcoin's gitian.sigs repo, signalling that you also have independently verified the Bitcoin binary!
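
Stripped of all the Gitian machinery, the verification at the end is just a hash comparison. Here's a minimal Python sketch of that final check; the file name and expected digest are placeholders, and the real assert files list a whole set of hashes rather than just one.

import hashlib

def sha256_of(path):
    # hash the file in chunks so large binaries don't need to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "<hash copied from the assert file>"                 # placeholder
actual = sha256_of("bitcoin-0.17.0-x86_64-linux-gnu.tar.gz")    # placeholder file name
print("verified!" if actual == expected else "mismatch -- the builds differ")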

Notice that if you've setup Gitian to run inside a VM on your machine, the build itself will take place inside yet another container, one spun up by Gitian itself. It's a bit of a build turducken.

There's a number of reasons for this. The container that Gitian spins up has its time set to a known value, such that all builds use the same time. It also uses the same container architecture, which ensures that file system irregularities are hopefully largely eradicated. Having users use the same architecture for the build machine removes a good deal of possible variability from the build process.

Comparing Bitcoin and Tor

To be honest, I didn't spend a lot of time digging into the Tor build process, but a few things about it stuck out. First, they use something called rbm, which is based on runc, another container based solution. They used to have a Gitian driven process, but moved away from it -- their original Technical Details blog post on reproducible builds mentions Gitian, but their links to the build setup instructions lead to a git commit about how it's been deprecated. I didn't do a deep dive into how rbm or runc works, but this comment on their docs leads me to believe that I'm not missing much.

We have written a pair of blog posts that describe in more detail why this is important, and the technical details behind how this previously got achieved when using the Gitian system, if you are curious. The new build system based on rbm is working similarly and is facing pretty much the same issues.

The new rbm based process isn't well documented, but appears to be invoked when you run make - in this way it's much easier than the Bitcoin Gitian build process, as you run the rbm process every time you build the project.

Finally, Tor doesn't have a set of signed, verified assert files; rather, you can individually build and check the shas yourself. It feels incredibly Bitcoinic to have a publicly available set of signatures that verify the build, whereas with Tor you have to do it independently, if you care, rather than relying on a published set of signed verifications.

Turtles

There's a problem with Gitian, however, in that you're largely depending on the binaries of the packages you download from Debian being uncompromised. If, for some reason, the gcc compiler binary that you've downloaded from the Debian repo is compromised, all of the software that you compile with it will also be compromised. Gitian trades off relative speed for some amount of trust in the Debian packages.

There's another project[2] that's currently in the works that would replace Gitian. Instead of downloading a VM to run a build in, you'd build an entire runtime from scratch, first by downloading the source for a compiler, and then compiling another version of gcc, etc. What were packaged binaries downloaded into a VM in Gitian are now compiled from source on your machine. The project's still a work in progress, but if successful, it would both decrease the required, systematic trust for build verification, as well as potentially exponentially increase the time required, at least on first run.

If you're curious, you can check out turtles here.

[1] Bitcoin (money) or the Tor project (anonymity) are two examples that currently have a public reproducible build process. Debian (computing platform) is currently in the beginnings of an enormous effort to create a reproducible build process for all of the packages that they publish.

[2] Thanks Carl for clueing me in to the work you've been doing on this!

#bitcoin #tor #reproducible-builds #turtles
30 Sep 2018 c.e.
On Fluency

This feels like a strange post to write. It's strange because nothing has changed -- my code still gets bugs, I still struggle with programming, with problems that require thinking through. I don't feel any amount of better at problem solving or actually writing code itself. But something is different. Within the last few weeks or months, my perspective on computing and programming, as a field and academic endeavor, has totally shifted.

Of A Mindset Past

I don't know how to convey the depth of certainty and absolute conviction that I had about my relationship with the 'field' of computer science, the years of passive curiosity and detachment. I have been working as a programmer for over six years, confident and vaguely content at the self-classification I had reached: one of "code plumber". I didn't love the type of work that I did with code, but I was managing to find opportunities to learn new things, and stretch myself in new ways.

It was nice to be so certain about a thing in life, to know that computer science was not a field for me. I've long talked of going back to school, as a retirement project even, but always knew that it wouldn't be in computer science. It was simply obvious that I'm not a 'computer science' type of person.

There's a history to this, a history of approaching computer science as a realm of very smart people, of a type of smart that I was not, nor had any hope to be. I took an algorithms class a few years back, when I was first starting to get into programming and it largely confirmed a lot of suspicions I had held about my capacity for 'CS'. The class was an online one, taught by Tim Roughgarden. I did ok, but I certainly didn't excel at it. I learned enough to sound notionally knowledgeable about big O notation, and to fully internalize a complex about 'algorithms' as a subject matter that I would never master. Since then, I've had a tendency to shy away from anything related to 'real CS'.

Mental Shift

Something has changed though, just in the last few weeks. I find myself able to follow the basic mathematics that are used to describe and evaluate algorithms: set notation makes sense to me. I'm working my way through an algorithms book with my sister -- exercises in it that even a few weeks ago might have been a struggle to really understand are now decipherable.

Further, I've found that I can read highly technical papers and piece together the systems that they're describing. I churned through Tanenbaum's Distributed Systems book in a week. The Bitcoin BIPS, which as early as April of this year were a struggle, are now intelligible. I can read a single book on C, and directly and frictionlessly apply that knowledge to a complex C codebase.

It is the most unsettling and yet riveting experience that I have ever had. The only thing that comes close to this is the moment that I realized I was fluent in Portuguese. Real, true fluency is an unforgettable experience. It's as if a veil is pulled away and you're suddenly swimming in a wide, expansive, boundless ocean. You open your mouth, and where once there was stumbling and a stunted desire for expression, suddenly there are only words. Full, coherent words.

The power I feel from this newfound technical fluency is intoxicating and terrifying. There are books that I want to read, so many code libraries I can just look at and understand. There's so many new places to go, so many new things to learn. I want to learn about cryptography and physics. I want to learn more about set theory notation. Most terrifying of all, spending all of my time learning and exercising my fluency feels like the most natural and right thing I could ever imagine. Computer science, despite my best intentions, has found me.

What the fuck has happened to my brain. Why did the switch flip? And how? Because something has happened, my way of seeing the world has irrevocably changed.

A Series of Cascading, Unrelated Events

If my experience in becoming fluent in Portuguese is any indication, finding your way to it is never a straight nor predictable path. It felt like a series of random difficulties and struggles that suddenly, without any warning, magically flipped into comprehension. The experience I've had with computer science literature feels incredibly similar. The path to learning a foreign language, however, is fairly well known: take classes, immerse yourself in the language, and if at all possible, put yourself into a situation where that language is the only one you can hear and communicate in, for months. My path to fluency in a technical field was far less straightforward and arguably much longer; here's a few things that I've done recently that I believe strongly contributed to the mental shift.

In the last year, I've read a lot of philosophical books, specifically Hannah Arendt's The Human Condition and The Origins of Totalitarianism. Arendt's work is incredibly difficult -- it's also deeply rewarding. So rewarding in fact, that I found myself incredibly motivated to fully understand it. I spent hours reading The Human Condition, and even more going through Origins. In order to understand a philosophical work, there is a particular form of world building that goes on. Philosophers, good ones at least, build a coherent and consistent world through a few definitions and primitives. Understanding a work, then, requires building the same logical framework in your mind, from the description that they've established on the page. This ability to construct real, localized and personal meaning from a written account is incredibly similar to the process I find myself going through as I read technical papers and textbooks.

I've realized that I can actively seek out the answers to longstanding holes in my knowledge. Let me give you an example. In July of this year, I found myself in a small hotel room in Redding, California with a couple of hours to burn. Before I sat down to read Ingrid Burrington's book on the Networks of New York, I tried to write down everything that I know about computer networks already. I got down to IP in the network stack and then blanked out. I read the book, which didn't get anywhere near the network stack. Instead of letting it go, I went and looked up the IP RFC and read it. It was a lot more readable than I was expecting.

The third experience came from a tweet. Mary Rose Cook published a small code reading experiment that I came across a few weeks ago. In it, she presents you with a small JavaScript function, which you're then asked to guess what the output will be, for a given input. She's set it up as an A/B test so there's no guarantee that you'll see them, but in exercises I did, the code questions were accompanied by writing prompts. At the beginning, you answered a prompt asking what you were hoping to accomplish by the end of these exercises -- I put down the truth, which was that I wanted to become much faster at reading code.

Reader, I did terribly. After every botched attempt, another prompt would pop up asking me to reflect on what I had done incorrectly. It finally dawned on me that the real problem was that I didn't understand the subroutines that her experiment was calling. That the real goal of understanding code wasn't speed. That you can't magic your way to speedy code reading. There are no shortcuts to reading code. The only way to really, truly understand how code, any piece of code, no matter how small, works, is by reading it. By actually sitting your butt in the chair and finding the goddamn code that is being called and executed. Until you do that, any hope for accuracy is as good as gone. And what use is speed without accuracy? Needless to say, after a few exercises, I completely and totally and forever abandoned any hope of being a 'fast code reader'.

My goal had, and has, changed; now I want to understand it.

On Fluency

Across these experiences, at a low level, my brain has recognized that it can understand things, that given the time and resources, I can figure out what is going on, in almost any domain!

More than anything, it's this newfound confidence in my ability to understand that's really changed the game. It's a confidence built from hard extracurricular reading, and curiosity, and relaxing the arbitrary time constraints that I've put on myself in the past. I give myself the space now to figure things out.

There is no struggle now; it's just pure fun, bounded only by my own curiosity and the number of hours in a day.

#computer-science #knowledge #fluency
5 Aug 2018 c.e.
Dear Mother

What is a mother, in the context of user experiences?

I recently found myself at a blockchain user group meetup, with a bunch of other engineers. Someone, at one point, made an offhand comment about how easy his mom would find doing a particular thing. He said it hesitantly, perhaps worried that invoking the image of a mother as an inexperienced user might be perceived as sexist, in that it plays to the stereotype of women not being particularly bright in terms of anything men know, really.

This isn't a new comparison, and it definitely wasn't the first time that I'd heard it, but it did get me thinking. What is it about moms that they never know how the latest and greatest technology works? Why, years after they've been first invoked in the role of the know-nothing foil, do they continue to play the role of techno-novice?

More importantly, what can moms teach us about how we see non-technical people?

The Role of Mom

Although the person who put me on this particular brainwave may not have meant it sexist-ly, I'm going to take a bit of a sexist lens and apply it back on him and all of the manly brethren that have and continue to invoke mother. My relationship with mom isn't a man's relationship with his mother, so I'm mostly conjecturing here. Please pardon any inaccuracies or over-simplification that I might indulge in.

Who is 'mom' to a devoted son? My guess is that it's a person in your life that supports your projects and interests, that is interested in hearing what you're up to, and is willing to sit through your explanations patiently, even if she's lost the thread of your invention. She's, in my imagining, a sympathetic and interested listener, one who lacks any context at all for the things that you're telling her.

The lack of context is important, as is the interest in learning more. But it's a bounded interest in learning more: if you talk too long or get too lost in the weeds, this fictional ur-Mom character that I've created will give you a "that's nice dear" and move on to the next topic of conversation.

Let's translate this to a broader understanding of users

A 'mom' is a person who's interested in hearing what you're up to, even if her way of experiencing what you're doing is naive or completely unattainable. She brings goodwill and patience, but only so much. Her general understanding of configuration settings and workflow is rudimentary at best.

Mom is also usually from an older generation, one that, to date, didn't grow up with apps or computers or smart phones. While there's a good number of the older generation that learned how to use email and text messages and YouTube (and, if the President is any indication, Twitter), they're not tech-unsavvy, they're tech naive.[1]

Is there a better Mom?

While having a default 'mom' character to fall back on is instructive, I do still find the proliferation of her as a fallback naive tech character a bit stereotypical. I also admit that finding a replacement go-to is difficult, as the particular blend of interest and naivete that the 'Mom' character represents isn't particularly common among human relationships.

[1] I mean naive here in the sense that 'writing a naive implementation' usually means one that is sub-optimal yet gets the job done.

#moms #user-experience
5 Jul 2018 c.e.
On the Nature of Bitcoin

I just finished reading David Graeber's book Debt: The First 5,000 Years. In it, Graeber shows that how we think about money and exchange is fundamentally flawed. To do so, he digs into his experience as an anthropologist and into the historical and archeological record of actual human societies, to give a more honest accounting, not of how money systems should work, but of how they did and do.

My motivation for reading this book was fairly pointed -- I wanted a historical perspective in which to place digital currencies generally, Bitcoin specifically. I wasn't disappointed. What I found really surprised me, and honestly, completely re-wrote the way I think about digital currency, friendship charms, and my communal economic relationships more broadly.

The Local Value of Currency

A large focal point of Graeber's debunking in Debt concerns where hard, physical currency comes from. How did humans actually come to regard coins as a store of value? Classical economists use Adam Smith as their jumping off point for how currency arose: because it's hard to imagine a trade economy without currency. But Graeber says that the common 'market' of trading that we all imagine didn't really exist, back in the beginnings of human trade. Instead, he proposes that humans merely kept local, personalized accounts of who owed who what. You were always a bit in debt to someone, and someone was always a bit indebted to you. That's how societies worked -- everyone owed everyone else.

At some point in ancient Mesopotamia, these debts came to be recorded on clay tablets. One person would owe another four bushels of grain, for example, so you'd write onto a clay tablet, twice maybe, that so-and-so owed four bushels of grain. The tablet would then be broken in half, and each party to the transaction would get half. When the time came for payment to be made (Graeber wasn't entirely specific about how these tablets got redeemed), the tablet would be destroyed. According to Graeber, at some point people started trading these promises to pay with other parties. If Bob owed me four bushels of grain, I could exchange the debt with you for a new toga. Then, when the debt came due, Bob would pay you, the holder of the other tablet half, four bushels in exchange for the contract that you're holding.

The first version of 'currency' -- currency in the sense of something that isn't an actual, obviously useful good -- was, Graeber proposes, these temporary, two-party contracts.

If these person-to-person promissory notes were the first version of currency, when did gold and silver coinage come into play? Graeber asserts that coinage is almost always and explicitly the work of a governing body. A government could pass out gold and silver to its citizens as coinage and then, as a way to give the coins some kind of value, make official gold or silver coins the only acceptable way to pay taxes. Or, phrased another way, the government gave coinage its value by demanding that all citizens acquire enough of it annually to pay tribute to the government. It's pretty perverse when you think about it: governments dug deep into their treasuries, melted down their treasures or spent years of human capital building mines, divided the metal into small pieces, stamped their image into it, and distributed it among the people, only to turn around and ask their citizens to hand it back at the end of the year -- at least some of it, anyway. It feels a bit out of scope to go into why governments would do this; let's just accept Graeber's explanation that they did it so the government could afford goods and services from its people, and then, eventually, needed to extend that same ability to its army. So it gave one of the only things that a government can get control of -- treasure -- as payment to soldiers, who were then able to buy what they needed using the coins the government gave them.

All of this may sound exceedingly hard to swallow without further proof, and I'm really not doing Graeber's arguments justice. But have you ever heard of anyone successfully paying taxes with anything other than the coin of the realm? It's not really possible. In fact, it's one of the biggest reasons that early employees at non-publicly traded startups get stuck with options they can't exercise. You literally can't trade the shares for money, so you have nothing to give the government as its share of the spoils you've won.

One thing that really stood out to me in Graeber's explanation of how even gold- and bronze-based currencies got their start was how the power of the government that issued the coin largely dictated the extent of that coin's value, independent of its materials. You can see this phenomenon today: copper has a price in the open market that is often completely different from the 'monetary' value of the copper minted into a penny.[1] The fact that the reach of a government's power was, and still largely is, the extent of the value of its currency says a lot about the actual nature of a coined instrument.

A currency is a locally understood store of value. It's accepted in certain territories and markets because it has a value to the people of that realm. Usually that realm is defined by the governing party whose laws the people elect, or are forced, to follow.

Hence, the government has the power to drive the value of its currency by requiring it from its populace as tribute at tax time. The more people owe to the government, payable only in the government's own currency, the more the value of that currency goes up. So the government holds the ultimate manipulatory power: it can raise or lower taxes, inherently changing the value of the currency that's used to pay them. Taxes and the value of a government's money are intimately linked.

What's a Bitcoin Worth?

If currencies derive their value only from their use as payment for governmental tribute, where does the value of a cryptocurrency like Bitcoin come from? You can't use Bitcoin to pay your taxes.[2]

Under the lens of currency as a token for state obligations, Bitcoin is not a currency. Thus as far as any national government is concerned, Bitcoin has no value.[3]

But is Bitcoin valueless? Even the coins of old empires had a monetary value based on their physical substrate -- gold and silver have plenty of applications in manufacturing and jewelry making, if nothing else.

Let's consider the 'substrate' that makes up Bitcoin.

In the classical sense of governmental fiat, Bitcoin is not a currency. It has no value in terms of being accepted by the government to pay a debt. But, given the market and exchanges that have developed around Bitcoin, it clearly has a value. Why? What about Bitcoin is valuable, in and of itself, independent of its ability to be exchanged for other goods? Doesn't that make it like a currency?

It's tempting to delve into aspects of contracts or old-style tokens that were promises to pay. Bitcoin shares a lot of features with these, but it isn't inherently rooted in a debt or a promise to pay. That's because Bitcoin, at its core, isn't a ledger of who will pay whom, but rather a permanent record of who owns what. So being a debt that one person owes another doesn't really apply here.

Rather, Bitcoin's value comes from the system that it's built upon. Bitcoin is a globally available, persistent, decentralized accounting ledger with a genuinely verifiable timestamping machine. This timestamping mechanism is an important feature and value proposition of Bitcoin as a value store -- it's what gives you the ability to order payments in time. I'd argue that it's the most important, valuable aspect of the computer system that makes Bitcoin possible.

Satoshi didn't invent the timekeeping machine that backs Bitcoin.[4] In fact, it was first proposed in Haber and Stornetta's 1991 paper in the Journal of Cryptology, "How to Time-Stamp a Digital Document". In the paper, Haber and Stornetta propose two different mechanisms for creating a global and perpetual timestamp verification machine. Satoshi used the first mechanism, that of including the hash of a previous document in the following document, creating a chain of time-verifiable documents. Bitcoin blocks are time-verifiable documents. This is, to a large extent, what makes them incredibly valuable: due to their timestamped nature, and the lack of central control over the machine that produces them, they are unspoofable. The value of Bitcoin, then, is in its digital timestamping service.
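To make the chaining idea a bit more concrete, here's a rough sketch in Python of that kind of hash chain. To be clear, this is just an illustration of the Haber/Stornetta linking idea, not Bitcoin's actual block format; the field names, the JSON serialization, and the helper functions are mine.

    import hashlib
    import json
    import time

    def entry_hash(entry):
        """Hash an entry deterministically (illustrative; not Bitcoin's real serialization)."""
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def append(chain, document):
        """Append a timestamped entry that commits to the hash of the previous entry."""
        prev = entry_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "timestamp": time.time(), "document": document})

    def verify(chain):
        """Recompute the links; tampering with an earlier entry breaks every later one."""
        for i in range(1, len(chain)):
            if chain[i]["prev_hash"] != entry_hash(chain[i - 1]):
                return False
        return True

    chain = []
    append(chain, "alice pays bob 4 bushels of grain")
    append(chain, "bob pays carol 1 toga")
    assert verify(chain)

    chain[0]["document"] = "alice pays bob 400 bushels of grain"  # try to rewrite history
    assert not verify(chain)  # the later link no longer matches

Because each entry commits to the one before it, you can't quietly reorder or back-date anything without breaking every hash that follows -- which is the property that makes the chain usable as a timestamping machine.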

A Short Digression on The Historicity of One-Way Functions

I stated earlier that Bitcoin isn't a debt system, but in a lot of ways the way that value is passed from one holder to the next closely resembles early currency systems of the Ancient Middle East and the European Middle Ages. In these systems, debts were often marked as notches on a rod or inscribed on a tablet, which was then broken. The debtor would carry one half, the owner of the debt the other.

I struggled for a while to understand how a broken rod or tablet was any good as a contract, but it's quite simple and ingenious. Curiously, it functions very similarly to a cryptographic one-way function. A clay tablet is easy to break into two parts. It is also easy to tell whether two parts of a broken tablet belong to each other, merely by seeing if the broken edges fit back together. However, it is very difficult to break a second clay tablet in such a way as to exactly mirror the first. This is why clay tablets were broken -- to create signatures that only the other half could fulfill.

Cryptographic one-way functions work in an incredibly similar manner, except that, instead of relying on the random arrangement of physical tablet particles in a break, they rely on the difficulty of mathematical problems such as factoring the product of two large primes or computing discrete logarithms. Cryptographers and mathematicians have largely succeeded in replicating the ease of tablet breaking and matching with the use of public and private keys.

The only downside to the numeric device is that you have to keep your private key a secret, whereas a tablet's contents can be public. It's a common trope of modern technology to convert a physical device into data -- in this case transforming the security from physical space to informational space. Put another way, the clay tablet version of document verification was based on what you have, a matching clay tablet; the new Bitcoin-mediated version of verification is based on what you know, a large number.[5]
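If you want to play with the 'what you know' analogy, here's a toy sketch using nothing but a hash function: publishing the hash is like putting one tablet half on the public record, and later producing the secret preimage is like presenting the matching half. This is only an analogy for how one-way functions behave in general; Bitcoin's actual ownership checks use elliptic curve signatures, not bare hash preimages.

    import hashlib
    import secrets

    # The 'tablet': a public commitment anyone can record.
    secret_half = secrets.token_bytes(32)                   # what you know (keep private)
    public_half = hashlib.sha256(secret_half).hexdigest()   # what everyone can see

    def halves_match(public_half, claimed_secret):
        """Easy to check that two halves fit together..."""
        return hashlib.sha256(claimed_secret).hexdigest() == public_half

    assert halves_match(public_half, secret_half)
    # ...but practically impossible to forge a second 'half' that fits:
    assert not halves_match(public_half, secrets.token_bytes(32))

Checking a match is cheap; manufacturing a second secret that produces the same public half is, for all practical purposes, impossible -- the same asymmetry as the broken tablet.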

Supply is Irrelevant

The supply of Bitcoin is irrelevant in terms of its value, because the value of the system isn't like that of gold or silver. Gold and silver's value is based, to some extent, on how scarce they are. While it's true that Bitcoin isn't infinite in supply, it gets its true value from existing as part of a global, always-available, verifiable timestamping machine.

The supply of Bitcoin is limited, but when newly mined blocks are no longer subsidized, I believe people will still be willing to pay the required fees to have value transferred, because being able to transfer obligations is a valuable enough service to keep the system running. The external value of the currency may go up, but Bitcoin as a system won't crash, because the digital ledger will still be a valuable service in and of itself.

The real trick to understanding this is to compare Bitcoin to the type of currency that it most closely resembles: wooden rods and clay, not gold and silver. Gold and silver, in some sense, are understood to derive their value from their scarcity, or from how much work it takes to get them. Admittedly, there is some work done in order to 'mine' more Bitcoin, and running the computer network that makes up the Bitcoin system has a non-negligible cost, but at the base level, computer bits and memory incur a cost on the order of clay or wood, not gold or silver. Further, gold and silver have a more primary value derived not from their scarcity, per se, but from the governmental tax requirement levied on every citizen.

So how scarce is Bitcoin? Internally, it's limited to 21 million Bitcoin, total. That may seem like a small number, but it works out to 262,500 satoshi apiece for 8 billion humans. One could argue that that's enough, at a raw level, for every human to transact with each other. If we, as a society, got organized, we could distribute to every human a non-trivial allotment of Bitcoin from birth, no questions asked.
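For anyone who wants to check the arithmetic, here's the back-of-the-envelope version, with the world population rounded to 8 billion:

    TOTAL_BTC = 21_000_000         # protocol-defined supply cap
    SAT_PER_BTC = 100_000_000      # satoshi per bitcoin
    PEOPLE = 8_000_000_000         # rough world population

    print(TOTAL_BTC * SAT_PER_BTC // PEOPLE)   # 262500 satoshi per person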

On a technological level, the bits and computer infrastructure that make up Bitcoin are cheap and widely available. You can see this assumption of a certain ubiquity of bits in Bitcoin's distributed model. As a system, it is designed to run on most medium-range, consumer-grade computers.

Traditional gold and silver coinage systems make for a bad comparison with digital currencies. What we think of as the metal coinage system is inherently inseparable from government influence and meddling in exchange. The value of the coin rests largely on the ability of the government to stay in power. It's tied to the state and its power.

Clay tablets and wooden rods, on the other hand, need no higher authority. They're a record of a debt owed between two private parties. They're made out of materials that everyone has access to; there's nothing special about the object in and of itself. Its value comes from who hodls it, and from the fact that there are only two people who can set that debt to right. No state power is needed to enforce the value of the contract.

Bitcoin, then, isn't a store of value, it's a store of past, paid debts.

Bitcoin and The State

I've wondered a lot about why China would be so anti-Bitcoin. Graeber's linkage of fiat (i.e. coined money) to state control and taxes explains a lot of the state's resistance: under his account of the ties between the state and coins, the antipathy makes much more sense.

Graeber shows that the state has a tendency to 'take over' or replicate independent stores of value in official, sanctioned ways. This is how paper money became a thing -- it started with private citizens issuing paper promissory notes, and eventually the state began to copy the method for accounting value.

The same thing is happening in cryptocurrency. Bitmain and Circle are supposedly in the process of putting together a cryptocurrency that would mirror the USD. Admittedly, it's not the US government making the coin, but so much of the private market is an extension of the state (prisons, healthcare, debt collection, money printing) that it wouldn't be unique for a private party to take on this role. (China's President Xi has even gone on record stating that he's in favor of cryptocurrency in general, just not Bitcoin in particular. I have no doubt that the government is currently working on a cryptocurrency that it controls.)

So is Bitcoin a Currency?

So is Bitcoin a currency? Viewed through the lens of debt tallies, the answer is an unequivocal yes.

Viewed through the lens of actual specie, that is, gold and silver currency issued by a central government authority, the answer is an unequivocal no.

Bitcoin has no utility for raising and paying an army; its value extends only as far as people are willing and ready to incur debts with each other. In the larger view, Bitcoin is valid as long as the underlying computer network that accepts and timestamps transaction blocks still exists.

Like a clay tablet, Bitcoin is only usable as long as you retain your 'half' of the broken rod, or access to your private key. Bitcoin is not actually a 'coin' at all, it's a digital store of broken tablets, with a bunch of private wallets holding the matching half.

[1] Does anyone actually know how diluted copper pennies are these days? When's the last time someone did a physical inspection of the amount of copper in a penny?

[2] In most places, anyway. Seminole County, Florida and Arizona are two exceptions, though I will mention that they trade the Bitcoin into USD immediately, so I'm not really sure that this counts.

[3] As of this writing, anyway; but see note 2 about how a few governments will accept Bitcoin and immediately convert it into state fiat.

[4] Tellingly, 3 of the 8 listed references in Satoshi's whitepaper are for papers on timestamping.

[5] In this way, the storage of Bitcoin private keys is an interesting, hybrid challenge, mostly because humans aren't very good at remembering things -- you still have to secure the place where you store the big secret number that is your private key.

References:
Satoshi's Whitepaper https://bitcoin.org/bitcoin.pdf
How to Time-Stamp a Digital Document (Haber & Stornetta, Journal of Cryptology, 1991) https://www.anf.es/pdf/Haber_Stornetta.pdf

#bitcoin #monetary-systems #cryptocurrency #debt #david-graeber