30 Sep 2018 c.e.
On Fluency

This feels like a strange post to write. It's strange because nothing has changed -- my code still gets bugs, I still struggle with programming, with problems that require thinking through. I don't feel any better at problem solving or at actually writing code. But something is different. Within the last few weeks or months, my perspective on computing and programming, as a field and an academic endeavor, has totally shifted.

Of A Mindset Past

I don't know how to convey the depth of certainty and absolute conviction that I had about my relationship with the 'field' of computer science, the years of passive curiosity and detachment. I have been working as a programmer for over six years, confident and vaguely content with the self-classification I had reached: that of "code plumber". I didn't love the type of work that I did with code, but I was managing to find opportunities to learn new things and stretch myself in new ways.

It was nice to be so certain about a thing in life, to know that computer science was not a field for me. I've long talked of going back to school, as a retirement project even, but always knew that it wouldn't be in computer science. It was simply obvious that I'm not a 'computer science' type of person.

There's a history to this, a history of approaching computer science as a realm of very smart people, of a type of smart that I was not, nor had any hope to be. I took an algorithms class a few years back, when I was first starting to get into programming, and it largely confirmed a lot of suspicions I had held about my capacity for 'CS'. The class was an online one, taught by Tim Roughgarden. I did ok, but I certainly didn't excel at it. I learned enough to sound notionally knowledgeable about Big O notation, and to fully internalize a complex about 'algorithms' as a subject matter that I would never master. Since then, I've had a tendency to shy away from anything related to 'real CS'.

Mental Shift

Something has changed though, just in the last few weeks. I find myself able to follow the basic mathematics that are used to describe and evaluate algorithms: set notation makes sense to me. I'm working my way through an algorithms book with my sister -- exercises in it that even a few weeks ago might have been a struggle to really understand are now decipherable.

Further, I've found that I can read highly technical papers and piece together the systems that they're describing. I churned through Tanenbaum's Distributed Systems book in a week. The Bitcoin BIPs, which as recently as April of this year were a struggle, are now intelligible. I can read a single book on C, and directly and frictionlessly apply that knowledge to a complex C codebase.

It is the most unsettling and yet riveting experience that I have ever had. The only thing that comes close to this is the moment that I realized I was fluent in Portuguese. Real, true fluency is an unforgettable experience. It's as if a veil is pulled away and you're suddenly swimming in a wide, expansive, boundless ocean. You open your mouth, and where once there was stumbling and a stunted desire for expression, suddenly there are only words. Full, coherent words.

The power I feel from this newfound technical fluency is intoxicating and terrifying. There are books that I want to read, so many code libraries I can just look at and understand. There's so many new places to go, so many new things to learn. I want to learn about cryptography and physics. I want to learn more about set theory notation. Most terrifying of all, spending all of my time learning and exercising my fluency feels like the most natural and right thing I could ever imagine. Computer science, despite my best intentions, has found me.

What the fuck has happened to my brain. Why did the switch flip? And how? Because something has happened, my way of seeing the world has irrevocably changed.

A Series of Cascading, Unrelated Events

If my experience in becoming fluent in Portuguese is any indication, finding your way to fluency is never a straight nor predictable path. It felt like a series of random difficulties and struggles that suddenly, without any warning, magically flipped into comprehension. The experience I've had with computer science literature feels incredibly similar. The path to learning a foreign language, however, is fairly well known: take classes, immerse yourself in the language, and, if at all possible, put yourself into a situation where that language is the only one you can hear and communicate in for months. How one gets to fluency in a technical field is far less straightforward, and my path was arguably much longer; here are a few things that I've done recently that I believe strongly contributed to the mental shift.

In the last year, I've read a lot of philosophical books, specifically Hannah Arendt's The Human Condition and The Origins of Totalitarianism. Arendt's work is incredibly difficult -- it's also deeply rewarding. So rewarding in fact, that I found myself incredibly motivated to fully understand it. I spent hours reading The Human Condition, and even more going through Origins. In order to understand a philosophical work, there is a particular form of world building that goes on. Philosophers, good ones at least, build a coherent and consistent world through a few definitions and primitives. Understanding a work, then, requires building the same logical framework in your mind, from the description that they've established on the page. This ability to construct real, localized and personal meaning from a written account is incredibly similar to the process I find myself going through as I read technical papers and textbooks.

I've realized that I can actively seek out the answers to longstanding holes in my knowledge. Let me give you an example. In July of this year, I found myself in a small hotel room in Redding, California with a couple of hours to burn. Before I sat down to read Ingrid Burrington's book Networks of New York, I tried to write down everything I already knew about computer networks. I got down to IP in the network stack and then blanked out. I read the book, which didn't get anywhere near the network stack. Instead of letting it go, I went and looked up the IP RFC and read it. It was a lot more readable than I was expecting.

The third experience came from a tweet. Mary Rose Cook published a small code reading experiment that I came across a few weeks ago. In it, she presents you with a small JavaScript function and asks you to guess what the output will be for a given input. She's set it up as an A/B test, so there's no guarantee that you'll see them, but in the exercises I did, the code questions were accompanied by writing prompts. At the beginning, you answer a prompt asking what you're hoping to accomplish by the end of the exercises -- I put down the truth, which was that I wanted to become much faster at reading code.

Reader, I did terribly. After every botched attempt, another prompt would pop up asking me to reflect on what I had done incorrectly. It finally dawned on me that the real problem was that I didn't understand the subroutines that her experiment was calling. That the real goal was understanding the code, not speed. That you can't magic your way to speedy code reading. There are no shortcuts to reading code. The only way to really, truly understand how code, any piece of code, no matter how small, works, is by reading it. By actually sitting your butt in the chair and finding the goddamn code that is being called and executed. Until you do that, any hope for accuracy is as good as gone. And what use is speed without accuracy? Needless to say, after a few exercises, I completely and totally and forever abandoned any hope of being a 'fast code reader'.

My goal had, and has, changed; now I want to understand it.

On Fluency

Across these experiences, at a low level, my brain has recognized that it can understand things, that given the time and resources, I can figure out what is going on, in almost any domain!

More than anything, it's this newfound confidence in my ability to understand that's really changed the game. It's a confidence built from hard extracurricular reading, and curiosity, and relaxing the arbitrary time constraints that I've put on myself in the past. I give myself the space now to figure things out.

There is no struggle now; it's just pure fun, bounded only by my own curiosity and the number of hours in a day.

#computer-science #knowledge #fluency
5 Aug 2018 c.e.
Dear Mother

What is a mother, in the context of user experiences?

I recently found myself at a blockchain user group meetup, with a bunch of other engineers. At one point, someone made an offhand comment about how easy his mom would find doing a particular thing. He said it hesitantly, perhaps worried that invoking the image of a mother as an inexperienced user might be perceived as sexist, in that it plays to the stereotype of women not being particularly bright in terms of anything men know, really.

This isn't a new comparison, and it definitely wasn't the first time that I'd heard it, but it did get me thinking. What is it about moms that they never know how the latest and greatest technology works? Why, years after they were first invoked in the role of the know-nothing foil, do they continue to play the role of techno-novice?

More importantly, what can moms teach us about how we see non-technical people?

The Role of Mom

Although the person who put me on this particular brainwave may not have meant it sexist-ly, I'm going to take a bit of a sexist lens and apply it back on him and all of the manly brethren that have invoked, and continue to invoke, mother. My relationship with my mom isn't a man's relationship with his mother, so I'm mostly conjecturing here. Please pardon any inaccuracies or over-simplifications that I might indulge in.

Who is 'mom' to a devoted son? My guess is that it's a person in your life that supports your projects and interests, that is interested in hearing what you're up to, and is willing to sit through your explanations patiently, even if she's lost the thread of your invention. She's, in my imagining, a sympathetic and interested listener, one who lacks any context at all for the things that you're telling her.

The lack of context is important, as is the interest in learning more. But it's a bounded interest: if you talk too long or get too lost in the weeds, this fictional ur-Mom character that I've created will give you a "that's nice, dear" and move on to the next topic of conversation.

Let's translate this to a broader understanding of users

A 'mom' is a person who's interested in hearing what you're up to, even if her way of experiencing what you're doing is naive or completely unattainable. She brings goodwill and patience, but only so much. Her general understanding of configuration settings and workflow is rudimentary at best.

Mom is also usually from an older generation, one that didn't grow up with apps or computers or smartphones. There's a good number of that older generation that has learned how to use email and text messages and YouTube and, if the President is any indication, Twitter. They're not tech-unsavvy, they're tech-naive.[1]

Is there a better Mom?

While having a default 'mom' character to fall back on is instructive, I do still find the proliferation of her as a fallback naive tech character a bit stereotypical. I also admit that finding a replacement go-to is difficult, as the particular blend of interest and naivete that the 'Mom' character represents isn't particularly common among human relationships.

[1] I mean naive here in the sense that a 'naive implementation' is usually one that is sub-optimal yet gets the job done.

#moms #user-experience
5 Jul 2018 c.e.
On the Nature of Bitcoin

I just finished reading David Graeber's book Debt: The First 5,000 Years. In it, Graeber argues that how we think about money and exchange is fundamentally flawed. To make his case, he digs into his experience as an anthropologist and the historical and archaeological record of actual human societies, to give a more honest accounting, not of how money systems should work, but of how they did and do.

My motivation for reading this book was fairly pointed -- I wanted a historical perspective in which to place digital currencies generally, Bitcoin specifically. I wasn't disappointed. What I found really surprised me, and honestly, completely re-wrote the way I think about digital currency, friendship charms, and my communal economic relationships more broadly.

The Local Value of Currency

A large focal point of Graeber's debunking in Debt concerns where hard, physical currency comes from. How did humans actually come to regard coins as a store of value? Classical economists use Adam Smith as their jumping-off point for how currency arose, because it's hard to imagine a trade economy without currency. But Graeber says that the common 'market' of trading that we all imagine didn't really exist, back in the beginnings of human trade. Instead, he proposes that humans merely kept local, personalized accounts of who owed who what. You were always a bit in debt to someone, and someone was always a bit indebted to you. That's how societies worked -- everyone owed everyone else.

At some point in ancient Mesopotamia, these debts came to be recorded on clay tablets. One person would owe another four bushels of grain, for example. So you'd write onto a clay tablet, twice maybe, that so-and-so owed four bushels of grain. The tablet would then be broken in half, and each party to the transaction would get half. When the time came for payment to be made (Graeber wasn't entirely specific about how these tablets got redeemed), the tablet would be destroyed. According to Graeber, at some point people started trading these promises to pay with other parties. If Bob owed me four bushels of grain, I could exchange the tablet with you for a new toga. Then, when the debt came due, Bob would pay you, the holder of the other tablet half, four bushels in exchange for the contract that you were holding.

The first version of 'currency' -- as in something that is not an actual, obviously useful good -- Graeber proposes, was these temporary, two-party contracts.

If these person-to-person promissory notes were the first version of currency, when did gold and silver coinage come into play? Graeber asserts that coinage is almost always and explicitly the work of a governing body. A government would pass out gold and silver to its citizens as coinage and then, as a way to give the coins some kind of value, make official gold or silver coins the only way to pay taxes. Or, phrased another way, the government gave coinage its value by demanding that all citizens acquire enough of it annually to pay tribute to the government. It's pretty perverted when you think about it: governments dug deep into their treasuries, melted down their treasures or spent years of human capital building mines, so that they could divide the metal up into small pieces, stamp their image into it, and distribute it among the people, only to turn around and ask their citizens to hand it back at the end of the year, at least some of it anyway. It feels a bit out of scope to go into why governments would do this; let's just accept Graeber's explanation that they did it so that the government could afford goods and services from its people, and then, eventually, extend that same ability to its army. So the government gave one of the only things it could get control of -- treasure -- as payment to soldiers, who were then able to buy what they needed using the coins the government gave them.

All of this may sound exceedingly hard to swallow without further proof, and I'm really not doing Graeber's arguments justice. But have you ever heard of anyone successfully paying taxes with anything other than the coin of the realm? It's not physically possible. In fact, it's one of the biggest reasons that early employees at non-publicly traded startups get stuck with options they can't exercise. You literally can't trade the shares for money, so you have nothing to give the government as its share of the tribute on the spoils you've won.

One thing that really stood out to me in Graeber's explanation of how even gold- and bronze-based currencies got their start was how the power of the government that issued the coin largely dictated the extent of that coin's value, independent of its materials. You can see this phenomenon today. Copper has a price in the open market that is often completely different from the 'monetary' value of the copper minted into a penny.[1] The fact that the reach of a government's power was, and still largely is, the extent of the value of its currency says a lot about the actual nature of a coined instrument.

A currency is a locally understood store of value. It's accepted in certain territories and markets because it has a value to the people of that realm. Usually that realm is defined by the governing party whose laws the people elect, or are forced, to follow.

Hence, the government has the power to drive the value of its currency by requiring it from its populace as tribute at tax time. More people owing a greater debt to the government, payable only in the government's own currency, makes the value of the currency go up. So the government has the ultimate manipulatory power, in that it can raise or lower taxes, inherently changing the value of the currency that's used to pay them. Taxes and the value of a government's money are intimately linked.

What's a Bitcoin Worth?

If currencies are only able to derive their value from their use as payment for governmental tribute, where does the value of a cryptocurrency like Bitcoin come from? You can't use Bitcoin to pay your taxes.[2]

Under the lens of currency as a token for state obligations, Bitcoin is not a currency. Thus as far as any national government is concerned, Bitcoin has no value.[3]

But is Bitcoin valueless? Even the coins of old empires had a baseline value in their physical substrate -- gold and silver have plenty of applications in manufacturing and jewelry making, if nothing else.

Let's consider the 'substrate' that makes up Bitcoin.

In the classical sense of governmental fiat, Bitcoin is not a currency. It has no value in terms of being accepted by the government to pay a debt. But, given the market and exchanges that have developed around Bitcoin, it clearly has a value. Why? What about Bitcoin is valuable, in and of itself, independent of its ability to be exchanged for other goods? Doesn't that make it like a currency?

It's tempting to delve into aspects of contracts or old style tokens that were promises to pay. Bitcoin shares a lot of common features with these, but inherently isn't rooted in a debt or a promise to pay. That's because Bitcoin, at its core, isn't a ledger of who will pay who, but rather a permanent record of who owns what. So being a debt that one person owes another doesn't really apply here.

Rather, Bitcoin's value comes from the system that it's built upon. Bitcoin is a globally available, persistent, decentralized accounting ledger with a genuinely verifiable timestamping machine. This timestamping mechanism is an important feature and value proposition of Bitcoin as a value store -- it's what gives you the ability to order payments in time. I'd argue that it's the most important, valuable aspect of the computer system that makes Bitcoin possible.

Satoshi didn't invent the timekeeping machine that backs Bitcoin[4]. In fact, it was first proposed in Haber and Stornetta's 1991 paper in the Journal of Cryptology, "How to Time-Stamp a Digital Document". In the paper, Haber and Stornetta propose two different mechanisms for creating a global and perpetual timestamp verification machine. Satoshi used the first mechanism, of including the hash of a previous document in the following document, creating a chain of time-verifiable documents. Bitcoin blocks are time-verifiable documents. This is, to a large extent, what makes them incredibly valuable. Due to their timestamped nature, and the lack of central control over this machine, they are unspoofable. The value of Bitcoin, then, is in its digital timestamping service.
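To make the linked-timestamping idea concrete, here's a minimal sketch in Go -- a toy of my own, not Bitcoin's actual data structures. Each record commits to the hash of the record before it, so you can't quietly rewrite or reorder history without breaking every link that comes after.

package main

import (
    "crypto/sha256"
    "fmt"
)

// record is a toy stand-in for a block or document: some content plus the
// hash of whatever came before it.
type record struct {
    content  string
    prevHash [32]byte
}

// hash commits to both the content and the previous hash, chaining the
// records together in time.
func (r record) hash() [32]byte {
    return sha256.Sum256(append([]byte(r.content), r.prevHash[:]...))
}

func main() {
    var prev [32]byte // genesis: there is no previous record
    chain := []record{}
    for _, doc := range []string{"doc one", "doc two", "doc three"} {
        r := record{content: doc, prevHash: prev}
        chain = append(chain, r)
        prev = r.hash()
    }
    // Tampering with "doc one" changes its hash, which no longer matches
    // the prevHash stored in the record that follows it.
    chain[0].content = "doc one, revised"
    fmt.Println(chain[0].hash() == chain[1].prevHash) // false
}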

A Short Digression on The Historicity of One-Way Functions

I stated earlier that Bitcoin isn't a debt system, but in a lot of ways the way that value is passed from one holder to the next closely resembles early currency systems of the ancient Middle East and the European Middle Ages. In these systems, debts were often marked as notches on a rod or tablet, which was then broken. The debtor would carry one half, the owner of the debt the other.

I struggled for a while to understand how a broken rod or tablet was good as a contract, but it's quite simple and ingenious. Curiously, it functions very similarly to a cryptographic one way function. A clay tablet is easy to break into two parts. It is also easy to tell if two parts of a broken tablet belong to each other, merely by seeing if the broken edges fit back together. However, it is very difficult to break a second clay tablet in such a way as to absolutely mirror the first. This is why clay tablets were broken -- to create signatures that only the other half could fulfill.

Cryptographic one-way functions work in an incredibly similar manner, except that, instead of relying on the random arrangement of physical tablet particles in a break, they rely on hard mathematical problems, like the difficulty of factoring the product of two large primes. Cryptographers and mathematicians have largely succeeded in copying the ease of tablet breaking and matching with the use of public and private keys.

The only downside to the numeric device is that you have to keep your private key a secret, whereas a tablet's contents can be public. It's a common trope of modern technology to convert a physical device into data -- in this case transforming the security from physical space to informational. Put another way, the clay tablet version of document verification was based on what you have (a matching clay tablet), while the new, Bitcoin-mediated version of verification is based on what you know (a large number).[5]
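If you want to see the 'what you know' version in action, here's a small Go sketch using RSA signatures. This is my own toy example, not anything Bitcoin-specific (Bitcoin actually uses elliptic curve keys), but the tablet analogy holds: signing with the private key is like breaking the tablet, and anyone with the public key can check that the halves fit.

package main

import (
    "crypto"
    "crypto/rand"
    "crypto/rsa"
    "crypto/sha256"
    "fmt"
)

func main() {
    // The private key is the "what you know" secret; the public key can be
    // shared as widely as the face of a clay tablet.
    priv, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }

    // The "contract": a record of a debt.
    record := []byte("bob owes alice four bushels of grain")
    digest := sha256.Sum256(record)

    // Only the holder of the private key could have produced this signature.
    sig, err := rsa.SignPKCS1v15(rand.Reader, priv, crypto.SHA256, digest[:])
    if err != nil {
        panic(err)
    }

    // Anyone holding the public key can check that the halves "fit".
    err = rsa.VerifyPKCS1v15(&priv.PublicKey, crypto.SHA256, digest[:], sig)
    fmt.Println("signature matches:", err == nil)
}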

Supply is Irrelevant

The supply of Bitcoin is irrelevant in terms of its value, because the value of the system isn't like gold or silver. Gold and silver's value is based, to some extent, on how scarce they are. While it's true that Bitcoin isn't infinite in supply, it gets its true value from existing as part of a global, always-available, verifiable timestamping machine.

The supply of Bitcoin is limited, but when newly mined blocks are no longer subsidized, I believe that people will still be willing to pay the required fees for values to be transferred, because being able to transfer obligations is a valuable enough service to continue operating the system. The external value of the currency may go up, but Bitcoin as a system won't crash because the digital ledger will still be a valuable service in and of itself.

The real trick to understanding this is to compare Bitcoin to the type of currency it most closely resembles: wooden rods and clay tablets, not gold and silver. Gold and silver, in some sense, are understood to derive their value from their scarcity, or from how much work it takes to get them. Admittedly, there is some work done in order to 'mine' more Bitcoin, and running the computer network that makes up the Bitcoin system has a non-negligible cost, but at the base level, computer bits and memory incur a cost on the order of clay or wood, not gold or silver. Further, gold and silver have a more primary value derived not from their scarcity, per se, but instead from the governmental tax requirement levied on every citizen.

So how scarce is Bitcoin? Internally, it's limited to 21 million Bitcoin, total, each divisible into 100 million satoshi. That may seem like a small number, but it works out to 262,500 satoshi apiece for 8 billion humans. One could argue that that's enough, at a raw level, for every human to transact with each other. If we, as a society, got organized, we could distribute to every human a non-trivial allotment of Bitcoin from birth, no questions asked.
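The arithmetic, for the curious (assuming 100 million satoshi per bitcoin and roughly 8 billion people):

package main

import "fmt"

func main() {
    const satoshiPerBTC = 100_000_000
    const totalBTC = 21_000_000
    const humans = 8_000_000_000

    totalSat := int64(totalBTC) * satoshiPerBTC // 2.1 quadrillion satoshi
    fmt.Println(totalSat / humans)              // 262500 satoshi per person
}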

On a technological level, the bits and computer infrastructure that make up Bitcoin are cheap and widely available. You can see this assumption of a certain ubiquity of bits in Bitcoin's distributed model. As a system, it is designed to run on most medium-range, consumer-grade computers.

Traditional gold and silver coinage systems make for a bad comparison with digital currencies. What we think of as the metal coinage system is inherently inseparable from government influence and meddling in exchange. The value of the coin rests largely on the ability of the government to stay in power. It's tied to the state and its power.

Clay tablets and wooden rods, on the other hand, need no higher authority. They're a record of a debt owed between two private parties. They're made out of materials that everyone has access to, there's nothing special about the object in and of itself. Its value comes from who hodls it, and the fact that there are only two people who can set that debt to right. No state power is needed to enforce the value of the contract.

Bitcoin, then, isn't a store of value, it's a store of past, paid debts.

Bitcoin and The State

I've wondered a lot about why China would be so anti-Bitcoin. Graeber's linkage of fiat (i.e. state-issued coinage) to state control and taxes explains a lot of that resistance; under his account of the ties between the state and coins, the antipathy makes more sense.

Graeber shows that the state has a tendency to 'take over' or replicate independent stores of value in official, sanctioned ways. This is how paper money became a thing -- paper currency started with private citizens issuing promissory notes, and eventually the state began to copy this method of accounting for value.

The same thing is happening in cryptocurrency. Bitmain and Circle are supposedly in the process of putting together a cryptocurrency that would mirror the USD. Admittedly, it's not the US government making the coin, but so much of the private market is an extension of the state (prisons, healthcare, debt collection, money printing) that it wouldn't be unique for a private party to take on this role. (China's President Xi has even gone on record stating that he's in favor of cryptocurrency in general, just not Bitcoin in particular. I have no doubt that the government is currently working on a cryptocurrency that it controls.)

So is Bitcoin a Currency?

So is Bitcoin a currency? Viewed through the lens of debt tallies, the answer is an unequivocal yes.

From the lens of actual specie -- that is, gold- and silver-based currency issued by a central government authority -- the answer is an unequivocal no.

Bitcoin has no utility for raising and paying an army; its value extends as far as people are willing and ready to incur debts between each other. In the larger view, Bitcoin is valid as long as the underlying computer network that accepts and timestamps transaction blocks still exists.

Like a clay tablet, Bitcoin is only usable as long as you retain your 'half' of the broken rod, or access to your private key. Bitcoin is not actually a 'coin' at all, it's a digital store of broken tablets, with a bunch of private wallets holding the matching half.

[1] Does anyone actually know how diluted copper pennies are these days? When's the last time someone did a physical inspection of the amount of copper in a penny?

[2] In most places. Seminole County, Florida, and Arizona are two exceptions, though I will mention that they trade the Bitcoin into USD immediately, so I'm not really sure that this counts.

[3] As of this writing. But see note 2 about how a few governments will accept Bitcoin and immediately convert it into state fiat.

[4] Tellingly, 3 of the 8 listed references in Satoshi's whitepaper are for papers on timestamping machines.

[5] In this way, the storage of Bitcoin private keys is an interesting, hybrid challenge, mostly because humans aren't very good at remembering things -- you have to secure the place where you store the big secret number that is your private key.

References:
Satoshi's Whitepaper https://bitcoin.org/bitcoin.pdf
How to Time-Stamp a Digital Document https://www.anf.es/pdf/Haber_Stornetta.pdf

#bitcoin #monetary-systems #cryptocurrency #debt #david-graeber
1 Jul 2018 c.e.
What's Wrong with Nothing?

I had this idea to do a post that was a takedown of the incessant null bashing that every Java community seems to fall into, much to my own personal annoyance.

I find it annoying because I enjoy thinking through the case of nothingness. In some way, I see null complaining as a signal of laziness, a sign of a dev who wanted things to just work and was annoyed when they didn't. A world without nulls, like all utopias, is a nice thought but entirely impractical.[1]

I've come around though. You won't hear me complaining about nulls quite yet, but, accepting that reality involves nulls and empty cases, a programming language can do a better job than Java does at forcing you to handle that uncertainty. Go does a good job with this, I think, in two ways. The first is allowing a function to return more than one value at a time; the second is establishing a convention that one of the returned values is an error object -- which, ironically, can be nil. So maybe Go hasn't gotten away from null entirely, but it has rearranged the emptiness so that it leans heavily in the direction of surfacing the void and forcing you to deal with it.
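Here's a tiny sketch of what that convention looks like in practice (lookupUser is a made-up function, purely for illustration): the caller gets both a value and an error back, and the empty case stares you in the face at the call site.

package main

import (
    "errors"
    "fmt"
)

// lookupUser returns both a value and an error; the caller is expected to
// check the error before touching the value.
func lookupUser(id int) (string, error) {
    users := map[int]string{1: "ada", 2: "grace"}
    name, ok := users[id]
    if !ok {
        return "", errors.New("no user with that id")
    }
    return name, nil
}

func main() {
    name, err := lookupUser(42)
    if err != nil {
        // The "empty" case is surfaced right here, at the call site.
        fmt.Println("lookup failed:", err)
        return
    }
    fmt.Println("found:", name)
}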

From the viewpoint of forcing you, as the programmer, to acknowledge and handle the case where an item may be null -- for whatever reason, but typically some failure case -- Java's recent introduction of Optionals is a good step. Optionals allow you to more concretely signal the potential for a failure case, but, in a lot of ways, they still feel about as useful as bolted-on types in Python. The language still gives you the flexibility to get around using an Optional the way it should be used. The compiler won't throw an error if you check whether the returned Optional itself is null. It shouldn't be, if you're following convention, but let's be real, there's nothing stopping you from getting your Optional paradigms sunk into a wicked morass of nulls.

The other language feature that I've been hearing a lot about is the Groovy/Kotlin way of handling null with a question mark, the safe-call operator. This is nice from a syntactic viewpoint, as you can make calls without worrying about whether the underlying item is there or not. To be clear, I haven't used this much, but I do wonder how this kind of avoidance really gives you the opportunity to handle the null case. Sure, it's nice that you're not getting a null pointer exception because you tried to call a method on nothing, but you haven't really dealt with the fact that there's nothing there in the first place. Maybe my opinion will change after I've used the language for a bit.

So, while I won't be complaining about nulls anytime soon, I am looking forward to learning more about how Kotlin handles the issue. Unintended nothing in programming isn't going away anytime soon, but there is definitely some room for improvement in the paradigm that Java settled on a few decades ago.

[1] I also tend to find null complaints from Java programmers couched in some kind of 'syntax' argument, as if checking for null was a pain because it required an unholy 'if' to clutter their otherwise perfect roll of business logic. Optionals seem to be exclusively dedicated to reducing the number of times that you have to write an if, for reasons that I don't entirely understand. Something about the functional programming brouhaha of the last 10 years seems to have convinced procedural programmers that if blocks are a blight on the beauty of programming. As someone who's spent time writing really ugly code to make apps pretty, I'm not really sure that eliminating the if block makes your app any better. The if is still there, you're just hiding it somewhere. Most syntax 'features' seem disingenuous this way.

#null #nothing #programming #syntax
29 Apr 2018 c.e.
On Outages, or Taking Time Off

I recently joined a large software engineering organization, working on a server-side team for their fast-growing consumer product. There are a couple of services that make up the backend; my immediate team works on a very important, but not entirely critical-path, set of side-feature services.

We had our first major outage this week. I'm lucky to work with some very competent engineers who did most of the heavy lifting, but it still took about 72 hours for us to get back to normal.

To help with getting us back up, we ended up enlisting some help from the networking team. The original catalyst for the issue had to do with some long running connections to an outside service getting shut down by our load balancer, so we needed their help to switch to a different web proxy, which would let us connect directly to our external services without going through the schizophrenic load balancer. The networking team has been doing a lot of work lately with helping our 'front end' server team break their monolithic service into sharded pieces.

One of the engineers made the comment that doing the sharding changes while trying to keep the service up was like 'trying to change the tires on a car going 60 MPH down the highway'.

Let's talk about this for a minute. Our service struggles with our traffic load. On our high-volume days, it's not uncommon for us to 'load shed' 10-30% of traffic so that our servers don't go down completely. This means that 10-30% of the people trying to use our service at high-traffic times will be unable to.

Once we've switched to a more robust, sharded architecture, we'll hopefully be much better able to handle the traffic coming in. But switching over to this sharded architecture is really hard to do while the servers are still running, because you have to worry about data updates and insertions happening while you're making these huge infrastructure changes.

So.... why do we do it? If it's merely a matter of switching over to a new service, why don't we shut the service down for a few minutes, do the switch and then come back up again? While we're expanding into international arenas, our userbase is still 98% based in the USA. There are slow periods. Surely a 100%, planned and communicated downtime done at a time when the majority of our users are sleeping is better than 10-30% of our customers being unable to get things done during our peak hours.

People sleep. Subways shut down. Gas station stores close. Factories have downtime for retooling and repair. What is it about software that its makers have decided we need to perpetrate this illusion of always-on availability? Imagine if GMail went offline every Sunday. Like, that's it, no GMail for anyone on Sunday. I'd personally be pretty screwed because I store so much of my life and to-dos and personal correspondence in GMail, but, on the other hand, maybe it's not such a bad thing to not be able to get to it on a Sunday. I'd have to plan ahead. I'd have to figure out something else to look at on Sunday evenings.

Imagine Twitter turning off 'after hours'. You can tweet and read other people's posts from 6am to 10pm every day, but after that you're on your own. Would I miss things? Sure. But I'd also argue that I really, personally, don't think my life would be the worse for not being able to tweet asinine comments at 1am.

Internet services have pervaded our lives because they make themselves available, at all hours, in all forms.

So why don't we plan our outages?

I think there's something to be said about the fact that we don't 'plan' our outages. For my team, our outages happen more or less every big day. Could you call them planned? No, but they're hella predictable. There's always that chance, slight (though growing larger as we continue to make improvements to our load capacity), that nothing will go wrong, that we won't have to shed load or turn off important backend pipelines just so that requests can get served.

Here's the thing though: unplanned outages are 'blameless'. Customers can complain to us about being down, about being unpredictable, but they can also just point to the weather gods and go "damn, I guess it's raining today". If, on the other hand, you plan an outage, that gives customers the opportunity to excoriate you. You're weak, you've admitted it publicly. You're unable to keep up with traffic, and that's a signal to others that all is not well in your ecosystem. So by publishing downtime, you're, in a way, giving out information to competitors and investors that you'd probably do better keeping to yourself.

So we just keep turning off the service for a random 10-30% of users every big day, and trying to do this Herculean task of changing the tires on a moving automobile.

Coda

I started this post off talking about the outage on my team, and veered into a long discursion about planned downtime. In case you're wondering what happened to my particular team, we managed to get the service back up, days after the initial web connection problems caused us to more or less shut down. We haven't done a post mortem yet, but one interesting thing about our particular brand of outage is truly how systemic and 'blameless' it was. There wasn't a single point of failure, rather a systemic series of problems that we had to figure out a solution to. It took us a while, because deploying code is slow, and because running through millions of records to get back on track is even slower. Unlike the problems that plague our front end server team, this wasn't one that we would have seen coming without some amount of planned gamedays. And gamedays are always a nice-to-have until you're in the middle of a multi-day outage.

I feel really lucky that my co-workers are really experienced, and have written enough of the service to have a fairly deep understanding of the underlying structures of the application.

Finally, I'm sure there's a lot more nuance to downtime and outages than I've captured here, like how it takes hours to run an upgrade and sometimes you don't have hours to take a service down. But also, I find it fascinating that software systems, which in some ways are more complicated than metro systems, try really hard not to take time off, whereas the train that runs through my neighborhood is on vacation for repairs every other goddamn weekend.

#server-eng #outages #downtime #how-many-nines
21 Feb 2018 c.e.
Moon Clock, An Update

Wherein we announce the launch of moon.supply

I've been working slowly, but steadily on getting the Moon Clock project ready for Kickstarter primetime. It still feels a long way off, but we're getting there.

Since I last wrote in early January, I've built two more moon clocks and ordered the parts for another few, found an informal circuitry advisor (hi Yuanyu), realized I know a lot more about hardware hacking than I thought, ordered (and am extremely excited about receiving) my first benchtop power supply[0], bought the wrong parts[1] on Digikey, found a friend with a 3D printer and some CAD exp (thanks Eric), talked my partner (<3) into dusting off his electronics 101 skills, found a designer for the Kickstarter (hi Paul), and made some LEDs blink.

Oh, and I've ported the moon clock code over from some Golang/Python hybrid to pure Go and put an API where you can get the current moon image for any lat/lon[2].

Y'all, I'm making a thing!

Here's some notes and impressions on the work I've been doing.

Wherein you realize you're almost but not quite in over your head

I could build a bunch of Moon Clocks using the same hardware I used for the prototypes, but there are a lot of problems with the current setup. First off, the parts for a Moon Clock, just off the shelf, are really expensive. The full version of a Raspberry Pi costs $30, plus the Adafruit HAT piece I'm using to wire the LED to the Pi is another $20. That's $50 right there just for the brains of it.

Secondly, the boards don't really fit in the frames I'm using. I might be able to find something a bit deeper, but designing my own board will let me get something that fits nicely into the existing frame. Further, the off-the-shelf boards require a decent amount of soldering and setup. Having a customized board with all the parts already soldered on will make assembly a lot faster.

On the face of it, designing a PCB from scratch is pretty daunting, but I'll let you in on a little secret: all that a PCB does is connect the different parts together. Which pins connect to what is mostly specified by the datasheets (or by the open source library you're using to drive the display). You literally just connect the pluses to the pluses and the grounds to ground. Ok, maybe it's not that simple for every circuit, but the Moon Clock wiring really isn't that complicated! Winning!

The most electronics I've done is getting some Raspberry Pis to work, and hacking together a pretty complicated light switch a few years back.[3]

Suffice it to say, I know just enough about how circuits work to be extremely dangerous.

First steps towards a PCB

The first step toward making your own circuit is to figure out what you need. I'm at this stage right now, more or less.

Figuring out what you need is a lot harder than it sounds.

For starters, I knew I needed a 'microcontroller' of some sort, the word 'microcontroller' being about the extent of what I knew about them. Luckily, The Prepared had sent out this really wonderful guide to sub-$1 microcontrollers. I passed the article to a friend of mine, who in return sent me back the page for the STM32.

There's a lot of options for microcontrollers, mostly because there's a lot of different things that you can use a microcontroller for. Solar panel rigs, car interiors, low-power sensors, networked devices, outdoor LED panel displays, etc. Different applications need different things in a chip, hence a plethora of options.

I ended up buying a couple of devboards to try out; here's the criteria I used to help me figure out what to get:

  • Clock speed. Adafruit claims you need 16MHz+ to run the LED panel without too much flickering. The slowest chip I got runs at 48MHz (the F0 series), the fastest is up to 72MHz (F3). Raspberry Pis, for comparison, run at about 1GHz.
  • RAM. This is how much you can hold in memory at once. For the LED display, we theoretically need to be able to hold a 32 x 32 x 8 x 3 byte image, which is about 24K. That's not counting the program that's running. I accidentally got one dev board that only has 20k of RAM; the beefiest board I've got has 128k (?).
  • Flash Memory. This is where the program we write will get stored. I'm not too worried about how big this needs to be but bigger is probably better.

There were a few things that I have come to realize might be important, but that I don't yet have any real way of evaluating boards for:

  • Floating point hardware. The Moon Clock calculations do a lot of floating point math. Supposedly some chips have addons that make this a lot faster. Luckily, the clock face doesn't update that often, so we'll probably be fine.
  • Energy efficiency. Good thing the clock plugs into an outlet.
  • Number and type of pinouts. There are still a few circuitry parts to figure out[4], but I'm pretty lucky that the LED panel I'm driving doesn't have any complicated connector requirements. The STM32 dev boards I ordered ended up having Arduino-compatible pin configurations, which made finding documentation on them pretty straightforward.
  • Dev environments. The nice thing about the STM boards is how straightforward the dev environment is. There's a graphical image of the board! With pinouts that you can configure with a GUI! It generates C code for you! The STM family is known for having a good dev environment though, so I definitely lucked out.

So far, we've wired up one dev board and gotten it to flash an LED. Which feels like a huge success! There's still a lot of work to go in terms of wiring up the LED matrix, but it feels within reach!

After the LED driving bits are sorted out, I'll probably need to port the Moon calculating code over to C. That, or figure out if Go will run on a tiny chip. I'm stoked.

Putting up an API

On the software front, I thought it'd be fun to put up an API to let other people use the Moon engine I wrote to get Moon pictures. It's super simple because I was too lazy to add more functionality, but if you give it your latitude and longitude, it'll give you back a base64 encoded 320x320 image of the moon right now at that location.

You can try it for yourself with curl, or by hitting this endpoint in a browser:

curl "https://api.moon.supply/img?lat=37.7272&lon=-77.2918"

I got really lucky because AWS just launched Golang support in Lambda back in January. There are a lot of hacky projects that will let you run Go with a Node.js or Python wrapper, but luckily I didn't have to deal with any of that. The moon engine is a really great example of a good use case for Lambdas -- the calculation engine is completely contained and you really only need it when you call it!

I couldn't find an example of how to send back the 'raw' PNG bytes using the AWS API Gateway event object, so I encoded the image to base64 to send back. If you know how to make it return an image, I'd be very curious to hear how to do it!
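For what it's worth, here's roughly what the handler looks like with the aws-lambda-go library. This is a hypothetical sketch, not the actual moon.supply code -- renderMoon stands in for the real drawing engine, and the comment about IsBase64Encoded is my best understanding rather than something I've verified end to end.

package main

import (
    "context"
    "encoding/base64"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

// renderMoon is a stand-in for the real moon engine: it would return the
// raw PNG bytes for the moon at the given lat/lon.
func renderMoon(lat, lon string) ([]byte, error) {
    // ... astronomy and drawing code elided ...
    return []byte{}, nil
}

func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    png, err := renderMoon(req.QueryStringParameters["lat"], req.QueryStringParameters["lon"])
    if err != nil {
        return events.APIGatewayProxyResponse{StatusCode: 500, Body: err.Error()}, nil
    }
    // Return the image as a base64 string in the body. My understanding is
    // that setting IsBase64Encoded to true (plus configuring binary media
    // types on the API Gateway itself) is what gets raw bytes back out, but
    // I haven't confirmed that.
    return events.APIGatewayProxyResponse{
        StatusCode: 200,
        Headers:    map[string]string{"Content-Type": "text/plain"},
        Body:       base64.StdEncoding.EncodeToString(png),
    }, nil
}

func main() {
    lambda.Start(handler)
}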

Getting the Moon engine Lambda ready meant re-writing the drawing code from the Python PIL library to Go's image library. It was a bit easier to do, but required pulling out some trigonometry to get the circle distances correct.

To draw a circle in PIL, the API requires you to provide a bounding box for the circle and the start and end angles to fill in. Here's an example that draws a semi-circle.

from PIL import Image, ImageDraw

img = Image.new('RGB', (320, 320), 'black')
draw = ImageDraw.Draw(img)
# Fill from -180 to 0 degrees: the top half of the bounding box.
draw.pieslice((start, start, diameter + start, diameter + start), -180, 0, 'white')

The Go image library is a lot lower level. The example that I based my draw-er off of instead requires that you return the color value for a given pixel coordinate. Basically, writing a function that returns either black or white for a given pixel. It ended up being easier to write than the PIL code, once I figured out how the trig worked.

Here's an example 'draw' function that will get you a white circle; it returns 0 (black) if the pixel point is outside the circle, and 255 (white) if it's inside.

// Circle is centered at (X, Y) with radius R, in pixel coordinates.
type Circle struct {
    X, Y, R float64
}

// Brightness returns 255 (white) if the pixel at (x, y) falls inside the
// circle and 0 (black) otherwise.
func (c *Circle) Brightness(x, y float64) uint8 {
    var dx, dy float64 = c.X - x, c.Y - y
    d := math.Sqrt(dx*dx + dy*dy) / c.R
    if d > 1 {
        return 0
    }
    return 255
}

Unfortunately, as soon as I put the website (moon.supply) up, I realized how nice it would be to have the API return an SVG-formatted moon instead of image pixels, so that I could use it on the site without any rasterizing. I'll add it to the 'to-do' list.

Wrapping Up & Kickstarter Dates

So, things are slowly marching toward a Kickstarter! There's still a lot of work to do, but I'm pretty happy with where we're at right now. I'll keep you posted, but for now look for a launch sometime in mid- to late March! We'll have a few early bird specials, like original Moon Clock prototypes and a bit of a discount for our first handful of backers!

In the meantime, give the Moon API a try and let me know what you think! Or just visit the moon.supply site to see what the moon looks like for where you are now.


[0] A benchtop power supply lets you try out a bunch of different voltages and currents. If you're working with electronic components and aren't sure what the power requirements are yet, a power supply will let you easily fiddle around with different settings. I got this one from Amazon.

[1] https://twitter.com/niftynei/status/961469328297148416

[2] I think the images for the moon at the poles are broken. Sorry.

[3] I built a wireless circuit and Android app to control a power outlet a few years back (which I then wired up to a string of lights). I wrote it up in story form; I wish I had done more of a technical write up -- I learned a lot about wireless routers (specifically XBees) and pinouts with it. God I had no idea what I was doing.

[4] It'd be nice to have a dial to set the latitude and longitude with.

#moon-clock #api #aws #lambda
2 Jan 2018 c.e.
Introducing the Moon Clock, The World's First Wall-Mounted Lunar Timepiece

The Moon Clock Project

MoonClock

I spent the last half of 2017 working on a project that I'm calling a Moon Clock, a physical lunar timepiece that displays what the moon looks like for your current location. Apparently the moon is a trendy thing these days.[0]

I got the idea when decorating a new bedroom; the walls needed something that went with my celestial themed room and I already had a 'normal' clock. Why not make a lunar clock?

Making a Moon Clock

The moon clock is a simple, backlit screen print of the moon, illuminated to show what the moon looks like at your location. It's mounted in a frame and when hung looks like a backlit moon portrait.

I went through a few ideas of using mechanical moving screens to get the moon shadow, but ultimately settled on using an LED display that lights up a printed moon screen.

The prototype I've built uses an LED matrix, a Raspberry Pi and the corresponding Adafruit HAT (for driving the LED). You can see how to set up an LED project for yourself via this Adafruit tutorial. Note that the underlying GitHub project that Adafruit uses as a driver is super out of date; the updated version of the original project now includes instructions for how to use it with the Adafruit HAT. I recommend using the updated driver version.

The moon clock is simply this Adafruit LED project mounted in a shadowbox, with a screen mounted at the front.

Lunar Tracking

Getting the LED set up was fairly trivial; however the astronomical algorithms required to calculate the moon phase given a latitude, longitude and time turned out to be non-trivial. The book Astronomical Algorithms by astronomer Jean Meeus is generally considered to be the gold standard in computational astronomy, but as a layman I found the astronomical jargon to be largely unintelligible.

I spent a good amount of time figuring out exactly what the information was called, and more time yet figuring out how to use the collection of algorithms presented in Meeus' book to compute the values I was looking for.

The problem lay mostly in the fact that the exact information I needed (percent illumination and position angle of the bright limb, or in other words, the rotation of the moon) is derived from the combination of about 15 or so different computations. Figuring out which equations I needed proved difficult without a solid understanding of astronomical geometry. For this, I found W.M. Smart's Textbook on Spherical Astronomy to be a literal project saver.

Briefly, the phase and angle of rotation of the moon is derived from the position and distance of the sun and moon, in relation to your current position on the Earth. The percent of the moon that is illuminated is a direct result of the angle between these three bodies.[1]

There's a few other things involved with calculating a heavenly body's exact current position: the difference between the "mean" sun time and actual sun time, taking into account the elliptical shape of planet Earth, and adjusting the coordinates from geocentric to your location on the Earth's surface. I'm doing a real disservice by not fully explaining how it works, but it feels outside the scope of this blog post. I'll have to do a follow up post on the basics of spherical astronomy.

Luckily, Meeus' book provides algorithms for all of these things. Meeus doesn't give good test data for the calculation I was doing, so I used CalSky's excellent Moon calculator to verify that my implementation was returning accurate values. You can use it yourself to see what the moon looks like at your location, on your computer screen.

Display

Adafruit's LED samples included an example using the Python Imaging Library, or PIL. PIL allows you to compose image arrays, which the LED library then renders to the screen. PIL's API lets you compose your own image; with a few more equations you can get an image of the moon with just two overlapping circle arcs.

I ran into two problems getting the moon rendered correctly.

First, Meeus' algorithms return the percentage of the moon currently illuminated. In order to draw this, I'd need to translate that percentage into a circle arc that could be drawn with PIL. Drawing a moon at 50% illuminated is quite straightforward: you just need a semi-circle. Any illumination above or below this requires knowing the arc and radius of the intersecting circle that draws the shadow across the moon's face.

Using the percent illuminated returned by our astronomy calculations and the radius of the moon (with a 32x32 square, your radius is 16), we can find the area of the illuminated portion with some math[2]. This ended up being less than straightforward to solve for directly, so we approximate within a few decimal places. Since this is a non-trivial computation and since the values are static, I ended up generating a lookup table for these values.
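For the curious, here's a rough sketch of the kind of numeric approximation involved, in Go (not the clock's actual code, just the circular-segment relationship from footnote [2]): for a fixed chord, the area of a circular segment grows monotonically with its central angle, so you can bisect on the angle until the segment area matches the one you want, and recover the radius of the intersecting circle from there.

package main

import (
    "fmt"
    "math"
)

// segmentArea returns the area of a circular segment with chord length c and
// central angle theta: R = c / (2*sin(theta/2)), A = R*R*(theta - sin(theta))/2.
func segmentArea(c, theta float64) float64 {
    r := c / (2 * math.Sin(theta/2))
    return r * r * (theta - math.Sin(theta)) / 2
}

// solveSegment bisects on the central angle to find the segment with chord
// length c and area a, returning the angle and the implied circle radius.
func solveSegment(c, a float64) (theta, r float64) {
    lo, hi := 1e-9, 2*math.Pi-1e-9
    for i := 0; i < 100; i++ {
        mid := (lo + hi) / 2
        if segmentArea(c, mid) < a {
            lo = mid
        } else {
            hi = mid
        }
    }
    theta = (lo + hi) / 2
    r = c / (2 * math.Sin(theta/2))
    return theta, r
}

func main() {
    // A segment covering a quarter of a 32px-wide moon's area, chord = 32.
    moonR := 16.0
    theta, r := solveSegment(2*moonR, 0.25*math.Pi*moonR*moonR)
    fmt.Printf("theta = %.4f rad, R = %.2f px\n", theta, r)
}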

It turns out that rendering arcs at 32x32 pixels is incredibly grainy, or aliased. The resolution was so bad that the moon animation wasn't smooth -- instead of waning steadily the moon clock would appear to jumpily get bigger and smaller. However, reference images I was generating at higher resolutions transitioned smoothly. Since the LED matrix supports a wide gamut of colors, a friend suggested resizing the correct, larger images down to the LED matrix size using a bilinear filter, which smooths out the curves using color interpolation. SciPy provides this functionality for images; the resulting output is what you see on the Moon Clock today.
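SciPy handled that resize in the Python version; just to illustrate the idea, here's what the same bilinear downscale looks like in Go with the golang.org/x/image/draw package (a sketch of the technique, not the code the clock actually runs):

package main

import (
    "image"
    "image/color"

    "golang.org/x/image/draw"
)

// downsample scales a moon image rendered at a comfortable resolution (say
// 320x320) down to the 32x32 LED matrix using bilinear interpolation, which
// anti-aliases the arc edges via color interpolation.
func downsample(src image.Image) *image.RGBA {
    dst := image.NewRGBA(image.Rect(0, 0, 32, 32))
    draw.BiLinear.Scale(dst, dst.Bounds(), src, src.Bounds(), draw.Over, nil)
    return dst
}

func main() {
    // A stand-in for the full-resolution render: a plain white square.
    src := image.NewRGBA(image.Rect(0, 0, 320, 320))
    draw.Draw(src, src.Bounds(), &image.Uniform{color.White}, image.Point{}, draw.Src)

    small := downsample(src)
    _ = small // feed this to the LED matrix driver
}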

That about sums up the 'science' behind the Moon Clock, the last bit involved mounting the whole contraption into an Ikea Ribba frame.

LED matrix backing

Nota Bene

There are a few things the Moon Clock doesn't account for:

  • Elevation. Meeus' algorithms ask you to consider the elevation of an observer when calculating the parallax between your position on the Earth's crust and the center of the Earth (where almost all the planet location equations assume you are). In my tests, the difference between sea level and the top of Mount Everest (8,850m) had a negligible impact on the moon's appearance, at least in terms of percent illuminated and rotation; instead it's been pegged to a reference elevation of a few meters above sea level.
  • Whether the Moon is above or below the horizon. The Moon Clock shows what the moon looks like from your position on Earth, more or less assuming a transparent Earth. When the moon is below the horizon, the illumination shown will be accurate; the rotation however is an estimate.
  • Moon librations. Briefly, 'librations' refers to the oscillations the moon experiences as it orbits the Earth. This oscillation slightly changes what part of the Moon's face is illuminated and visible. Since the Moon Clock's moon image is screen printed, it doesn't accurately reflect the markings you'll see on the actual moon. You can read more on moon librations on Wikipedia.

Upcoming work:

  • I'd love to get the Moon Clock to be battery powered, but I'm not sure this is possible with the current LED matrix set up I have.
  • I still need to update the matrix driver to the newer mainline version
  • I'd love to figure out how to run the Moon Clock on a simpler PCB; the Raspberry Pi feels overfeatured for what I'm using it for.

So what's having a Moon Clock like?

In all honesty, I'm still discovering what it's like to have this background consciousness of the moon. I'm constantly being surprised at how the Moon has (or hasn't!) changed since I last looked. The phases seem to come and go a lot faster than I anticipated. And it's always changing! I'm traveling right now, and I feel a bit disconnected, not knowing what the Moon looks like today. It's weird not knowing.

What's next

I'm planning to Kickstart the Moon Clock some time in 2018. I'll let you know when that happens. Until then, I'm keeping the source code under wraps; I anticipate making it public will be part of the project launch.

Debts

This project has many debts. I owe a big thank you to Karthik Raveendran for his endless support and deep graphics knowledge; Bryan Newbold and Sarah Gonzalez for lending their expertise and time to this project, especially at the early stages. For their true dedication to 'non-invasive living technology', the Moon Clock project is philosophically indebted to Jake Levine, Zoe Salditch and the rest of the Electric Objects team.

Moon Glamour Shot


[0] Charlie Deets just launched a moon app!

[1] Every computer's ability to provide you with UTC time came in handy for astronomical calculations. Before computers, figuring out the real time of your location involved taking the wall clock time, translating that to UTC time, and then recalculating the "actual" sun time for your specific longitude. The reason for this is that wall clock time isn't the actual time at your location -- it's set to a reference longitude which may be off by a few minutes. For our purposes, using your 'timezone time' rather than your actual time probably wouldn't make a noticeable difference, since the resolution of the moon available on a 32x32 LED matrix is fairly low, but for accuracy's sake, using the 'actual' time is preferable.

[2] See Circular Segment. In our case, we know the area and need to solve for the radius R and angle theta.

#moon-clock #projects #lunar-timepiece #astronomy
26 Nov 2017 c.e.
Finding O

I've written before about how I struggle with algorithm problems. I'm finally, eight months after diagnosing the problem, sitting down to figure out what it is about algorithms that I find difficult.

I'm doing two things differently. The first is that I'm using a textbook, specifically Skiena's Algorithm Design Manual. The second is that I'm doing them without time pressure. By comparison, the last time that I sat down to Learn Algorithms, around 5 years ago, it was during my time at the Recurse Center (née Hacker School). I took an online class from Stanford taught by Tim Roughgarden; assignments were due weekly and the learning material was presented via video lectures.

I'm still going through the problems with a friend; for the Roughgarden course I found a study buddy among my batchmates, and now I'm pairing with my sister.

I think the biggest thing that's changed is my perspective on programming. The first time around I was in awe of algorithms. They were this thing that separated the Real Programmers from the rest of us that merely toyed around at programming.

This time, I'm less intimidated by the concept, and far more awed by the power of an O(lg n) algorithm.

I've realized a few things in the few short weeks since I started going through Skiena's very well-written book. The first is that, as always, the presentation style matters. Skiena's book is far and away the best work on algorithms for experienced programmers that I've yet encountered. The discussion is accessible, the progression is sensible. All of the homework sections for the chapters are relevant outgrowths of the foregoing discussion. The exercises are hard enough that completing them feels like an accomplishment, but they're not so difficult as to be discouraging. It's definitely not the best book for everyone, but it certainly feels as though I came across it at the right time.

Secondly, Big O thinking is incredibly practical. Writing a naive algorithm and then tweaking it until it runs in n-something time is both satisfying and useful. Writing inefficient programs is impractical, from a machine and energy standpoint as well as in terms of human time. With this new lens of practicality, understanding how to evaluate routines seems like common sense for programming. Why not learn how to make better use of your machine?
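To make that concrete with a throwaway example (nothing to do with Skiena's exercises in particular): a naive membership check walks the whole slice, while sorting once and binary searching gets each lookup down to O(lg n).

package main

import (
    "fmt"
    "sort"
)

// containsNaive scans the whole slice: O(n) per lookup.
func containsNaive(xs []int, target int) bool {
    for _, x := range xs {
        if x == target {
            return true
        }
    }
    return false
}

// containsSorted binary-searches a sorted slice: O(lg n) per lookup.
func containsSorted(sorted []int, target int) bool {
    i := sort.SearchInts(sorted, target)
    return i < len(sorted) && sorted[i] == target
}

func main() {
    xs := []int{41, 7, 19, 3, 28}
    sorted := append([]int(nil), xs...)
    sort.Ints(sorted)

    fmt.Println(containsNaive(xs, 19))      // true, after walking the slice
    fmt.Println(containsSorted(sorted, 19)) // true, in ~lg(n) comparisons
}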

Another big realization that Skiena's book helped me with, greatly, is understanding what my role is in the realm of algorithms. It sounds ridiculous, but, at some level, I think that I had this notion that being 'good at algorithms' meant that I'd be expected to produce the next shortest-path algorithm, a Neigut answer to Dijkstra, if you will. Skiena does a great job of couching algorithmic thinking as a framework for problem solving: given a problem, which tools in your toolset are the best ones to try to solve it with?

#algorithms #learning #next