Financial Cryptography

Where the crypto rubber meets the Road of Finance...

Identity is the New Money -- new book from Dave Birch

Thu, 03/27/2014 - 03:53
As many know, I'm in the business of building a new Identity framework wrapped around a cryptocurrency issuance infrastructure for social networks (in geek terms). Because I'm in this space directly, Dave Birch's new book entitled Identity is the New Money can be copied without hesitation! The only disagreement we might both agree on is why it has taken so long for people to understand the role of identity and money? Read on:

Identity is the New Money
£7.99 (including free P&P within the UK)
This book will be published in late April 2014. Please click “Add to cart” above to pre-order a copy.

David G.W. Birch is an internationally recognised thought leader in digital money and digital identity. In 2013 he was named one of WIRED magazine’s global top 15 favourite sources of news from the world of business and finance and was ranked the number 1 influencer in European emerging payments by Total Payments magazine. He is a Director of Consult Hyperion, the technical and strategic consultancy that specialises in electronic transactions.

This book – which will be published in late April 2014 – argues that identity is changing profoundly and that money is changing equally profoundly. Because of technological change the two trends are converging, so that all we need for transacting will be our identities, captured in the unique record of our online social contacts. Social networks and mobile phones are the key technologies. They will enable the building of an identity infrastructure that can enhance both privacy and security – there is no trade-off. The long-term consequences of these changes are impossible to predict, partly because how they take shape will depend on how companies (probably not banks) take advantage of business opportunities to deliver transaction services. But one prediction made here is that cash will soon be redundant – and a good thing too. In its place we will see a proliferation of new digital currencies.
"Dave Birch gives one of the best accounts available today on how we’ll navigate the challenges of the emerging payments landscape, and how traditional data points on identity don’t really make sense in a digital world. An outstanding piece of work which may well define our journey moving forward." — Brett King, Founder and CEO of Moven.com

"Dave Birch’s thoughts on digital identity were seminal to the UK’s Identity Assurance Scheme. Anyone entering the field of digital identity should take this book with them." — David Rennie, Identity Assurance Programme, Government Digital Service, Cabinet Office...

Update on password management -- how to choose good ones

Sat, 03/15/2014 - 05:25
Spotted in the Crypto-Gram is something called "the Schneier Method." Quoting Schneier:

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like "This little piggy went to market" might become "tlpWENT2m". That nine-character password won't be in anyone's dictionary. Of course, don't use this one, because I've written about it. Choose your own sentence -- something personal. Here are some examples:

WIw7,mstmsritt... ⇒ When I was seven, my sister threw my stuffed rabbit in the toilet.
Wow...doestcst ⇒ Wow, does that couch smell terrible.
Ltime@go-inag~faaa! ⇒ Long time ago in a galaxy not far away at all.
uTVM,TPw55:utvm,tpwstillsecure ⇒ Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personally memorable tricks to modify that sentence into a password to create a lengthy password.

This is something which I've also recently taken to using more and more, but I still *write passwords down*. This isn't a complete solution, as we still have various threats such as losing the paper, forgetting the phrase, or being Miranda'd as we cross the border. The task here is to evolve to a system where we are reducing our risks, not increasing them. On the whole we need to improve our password creation ability quite dramatically if password crunching is a threat to us personally, and it seems to be the case as more and more sites fall to the NSA-preferred syndrome of systemic security ineptness....
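As a toy illustration of the sentence-to-password trick, here is a sketch; the substitution table is my own hypothetical example of "personal tricks", not Schneier's recipe:

```python
def sentence_to_password(sentence, subs=None):
    """Take the first letter of each word, except for words in a
    personal substitution table, which map to memorable fragments."""
    # hypothetical personal tricks -- yours should be your own
    subs = subs or {"went": "WENT", "to": "2"}
    parts = []
    for word in sentence.rstrip(".!?").split():
        word = word.lower()
        parts.append(subs.get(word, word[0]))
    return "".join(parts)

# "This little piggy went to market" -> "tlpWENT2m"
```

Of course, the security comes from the sentence being personal and the tricks being yours; anything published, including this sketch, should not be used verbatim.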

How Bitcoin just made a bid to join the mainstream -- the choice of SSL PKI may be strategic rather than tactical

Mon, 03/10/2014 - 03:55
How fast does an alternative payment system take to join the mainstream? With PayPal it was less than a year; when they discovered that the Palm Pilot users preferred the website, the strategy switched pretty quickly. With GoldMoney it was pretty much instant; e-gold never achieved it. With Bitcoin's new announcement, we can mark their intent at around four years or so. A belated welcome is perhaps due, if one thinks the mainstream is actually the place to be. Many do, although I have my reservations on this point, and it is somewhat of a surprise to read of Bitcoin's choice of merchant authentication mechanism:

Everyone seems to agree - the public key infrastructure, that network of certificate authorities that stands between you and encrypting your website, sucks. It’s too expensive. CA’s don’t do enough for the fees they charge. It’s too big. There isn’t enough competition. It’s compromised by governments. The technology is old and crusty. We should all use PGP instead. The litany of complaints about the PKI is endless. In recent weeks, the Bitcoin payment protocol (BIP 70) has started to roll out. One of the features present in version 1 is signing of payment requests, and the mechanism chosen was the SSL PKI.

Mike Hearn then goes on to describe why they have chosen the SSL PKI. The description reads like a mix between an advertisement, an attack on the alleged alternates (such as they are) and an apology. Suffice to say, he gets most of the argumentation as approximately right & wrong as 99% of the experts in the field do.

Several things stand out. I read from the article that there was little attempt to explore what might be called the "own alternative." From this I wonder if what is happening is that a conservative inner group is actually trying to push Bitcoin faster into the mainstream? Choosing to push merchants to SSL PKI authentication would certainly be one way to do it.
However, this is a dangerous strategy, and what I didn't see addressed was the vector-of-control issue. This was a surprise, so I'll bring it out.

A danger with the stated approach is that it opens up a clear attack on every merchant. Right now, merchants deal under the radar, or can do so, and caveat emptor widely rules in Bitcoinlandia. Once merchants are certified to trade by the CAs, however, there is a vector of identification, and permission. There is evidence. Requirements for incorporation. There are trade records and trade purposes. And, there is a CA which has ... what?

Terms & conditions. Unfortunately, T&C in the CA industry are little known, widely ignored, and not at all understood. Don't believe me? Ask anyone in the industry for a serious discussion about the legal contracts behind PKI and you will hear more stony silence than if you'd just proven to the UN that global warming was another Malthusian plot to prepare the world for the invasion of Martians. Still don't believe me? Check what CABForum's documents say about them. Stony silence, in words. But they are real, they exist, and they are forceful. They are very intended, as even when CAs don't understand them themselves, they mostly end up copying them.

One thing you will find in them is that most CAs will decline to do business with any person or party that does something illegal. Skipping the whys and wherefores, this means that any agency can complain to any CA about a merchant on any basis ("hasn't got a license in my state to do some random thing") and the CA is now in a tricky position. Tricky enough to decide where its profits come from. Now, we hope that most merchants are honest and legal, and as mentioned above, maybe the strategy is to move in that direction in a more forceful way.
The problem is that in the war against Bitcoin, as yet undeclared and still being conducted under diplomatic cover, any claim of illegality will take on a sort of state-credibility, and as we know, when the authorities say that a merchant is acting against the law, the party is typically seen to be guilty until proven innocent &/or bankrupt.

Factor in that it is pretty easy for an agency to take the line that Bitcoin is illegal per se. Factor in that all commercial CAs are now controlled via CABForum and are all aligned into one homogeneous equivalency (forget talk of competition, pah-lease...). Factor in that one sore thumb isn't worth defending, and sets a precedent. We should now see that all CAs will slowly but surely feel the need to mitigate the threat to their business that is Bitcoin.

It won't be that way to begin with. One thing that Bitcoiners will be advised to do is to get a CA in a safe and remote country, one with spine. That will last for a while. But the forces will build up. The risk is that one day, the meme will spread: "we're not welcoming that business any more."

In military strategy, they say that the battle is won by the general who imposes his plan over the opponent, and I fear that choosing the SSL PKI may just be the opponent's move of choice, not Bitcoin's move of choice, no matter how attractive it may appear.

But what's the alternative, Mike Hearn asks? His fundamental claim seems to stand: there isn't a clear alternative. This is true. If you ignore Bitcoin's purpose in life, if you ignore your own capabilities and you ignore your community, then ... I agree! If you ignore CAcert, too, I agree. There is no alternative.

But what would happen if you didn't ignore these things? Bitcoin's community is ideally placed to duplicate the system. We know this because it's been done in the past, and the textbook is written.
Indeed, long-term readers will know that I am to some extent just copying the textbook in my current business, and I can tell you it certainly isn't as hard as getting Bitcoin up and rolling.

Capabilities? Well, actually, when it comes to cryptographic protocols and reliable transactions and so forth, Bitcoin would certainly be in the game. I'm not sure why they would be so shy of this, as they are almost certainly better placed in this game than all the other CAs except perhaps the very biggest, and even that's debatable because it's been a long time since the biggest actually had the staff and know-how to do any game-changing. Bitcoin has got the backing of Google, who almost certainly have more knowledge about this stuff than all the CAs combined, and most of the vendors as well (OK, so Microsoft might give them a run for their money if they could get out of the stables).

They've got the mission, the community, the capabilities and the textbook. Why not, then? This is why I think that the Bitcoin people have made a strategic decision to join the mainstream. If that's the case, then good luck, but boy-oh-boy! are they playing high-stakes poker here.

Old Chinese curse: be careful what you wish for....

Eat this, Bitcoin -- Ricardo now has cloud!

Thu, 03/06/2014 - 05:28
Ricardo is now cloud-enabled. Which, I hasten to add, is not the same thing as cloud-based, if your head is that lofty. Not the same thing, at all, no sir, feet firmly placed on planet earth! Here's the story. Apologies in advance for this self-indulgent rant, but if you are not a financial cryptographer, the following will appear to be just a lot of mumbo jumbo and your time is probably better spent elsewhere... With that warning, let's get our head up in the clouds for a while.

Ricardo is a client-server construction, much like a web arrangement, and like Bitcoin in that the client is in charge. The client is of course vulnerable to loss/theft, so a backup of some form is required. Much analysis revealed that backup had to be complete, it had to be off-client, and also system-provided. That work has now taken shape and is delivering backups in bench conditions.

The client can back up its entire database into a server's database using the same point-to-point security protocol and the same mechanics as the rest of the model. The client also now has a complete encrypted object database, using ChaCha20 as the stream cipher and Poly1305 as the object-level authentication layer. This gets arranged into a single secured stream which is then uploaded dynamically to the server. The server, for its part, offers a service that allows a stream to be built up over time.

Consider how a client works: Do a task? Make a payment? Generate a transaction! Remembering always that it's only a transaction when it is indeed transacted, this means that the transaction has to be recorded into the database. Our little-database-that-could now streams that transaction onto the end of its log, which is now stream-encrypted, and a separate thread follows the appends and uploads additions to the server.

(Just for those who are trying to see how this works in a SQL context, it doesn't. It's not a SQL database; it follows the transaction-log-is-the-database paradigm, and in that sense, it is already stream oriented.)
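The scheme just described can be sketched as follows. This is illustrative only: the real client uses ChaCha20 and Poly1305, but here a SHA-256 keystream and HMAC stand in so the sketch is standard-library-only, and the "separate thread" is simulated by calling the uploader explicitly:

```python
import hashlib, hmac, struct

class EncryptedAppendLog:
    """The transaction-log-is-the-database paradigm: every transaction
    is appended to a local log, sealed under keys only the client
    knows, and the sealed bytes are later uploaded verbatim."""
    def __init__(self, key: bytes):
        self.key = key
        self.stream = bytearray()   # sealed local log
        self.counter = 0            # per-record nonce

    def _keystream(self, nonce: int, n: int) -> bytes:
        # stand-in stream cipher: hash(key, nonce, block) as keystream
        out = bytearray()
        block = 0
        while len(out) < n:
            out += hashlib.sha256(
                self.key + struct.pack(">QQ", nonce, block)).digest()
            block += 1
        return bytes(out[:n])

    def append(self, record: bytes) -> None:
        # encrypt, authenticate, then append: length | ciphertext | tag
        ct = bytes(a ^ b for a, b in
                   zip(record, self._keystream(self.counter, len(record))))
        tag = hmac.new(self.key, ct, hashlib.sha256).digest()
        self.stream += struct.pack(">I", len(ct)) + ct + tag
        self.counter += 1

def upload_tail(local: bytes, server: bytearray) -> None:
    """The follower 'thread': ship only the bytes appended since the
    last upload; the server merely appends them, it cannot read them."""
    server += local[len(server):]
```

Because the log is append-only, the uploader never rewrites anything on the server; it only ships the tail, which is what makes the Unix-append file metaphor on the server side sufficient.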
In order to prove this client-to-server and beginning to end, there is a hash confirmation over the local stream and over the server's file. When they match, we're golden. It is not a perfect backup because the backup trails by some amount of seconds; it is not therefore /transactional/. People following the latency debate over Bitcoin will find that amusing, but I think this is possibly a step too far in our current development; a backup that is latent to a minute or so is probably OK for now, and I'm not sure if we want to try transactional replication on phone users.

This is a big deal for many reasons. One is that it was a quite massive project, and it brought our tiny startup to a complete standstill on the technical front. I've done nothing but hack for about 3 months now, which makes it a more difficult project than, say, rewriting the entire crypto suite.

Second is the reasoning behind it. Our client-side asset management software is now going to be used in a quite contrary fashion to our earlier design anticipations. It is going to manage the entire asset base of what is in effect a financial institution (FI), or thousands of them. Yet, it's going to live on a bog-standard Android phone, probably in the handbag of the Treasurer as she roves around the city from home to work and other places.

Can you see where this is going? Loss, theft, software failure, etc. We live in one of the most crime-ridden cities on the planet, and therefore we have to consider that the FI's entire book of business can be stolen at any time. And we need to get the Treasurer up and going with a new phone in short order, because her customers demand it. Add in some discussions about complexity, and transactions, and social networking in the app, etc etc, and we can also see pretty easily that just saving the private keys will not cut the mustard. We need the entire state of the phone to be saved, and recovered, on demand.

But wait, you say! Of course the solution is cloud, why ever not?
No, because cloud is insecure. Totally. Any FI that stores its customer transactions in the cloud is in a state of sin, and indeed in some countries it is illegal to even consider it. Further, even if the cloud is locally run by the institution, internally, this exposes the FI and the poor long-suffering customer to fantastic opportunities for insider fraud. What I failed to mention earlier is that my user base considers corruption to be a daily event, and is exposed to frauds continually, including from their FIs. Which is why Ricardo fills the gap.

When it comes to insider fraud, cloud is the same as fog. Add in corruption and it's now smog. So, cloud is totally out; or, cloud just means you're being robbed blind like you always were, so there is no new offering here.

Following the sense of Digicash from two decades earlier, and perhaps Bitcoin these days, we set the requirement: the server or center should not be able to forge transactions. This is a long-standing requirement (insert digression here into end-to-end evidence and authentication designs leading to triple entry and the Ricardian Contract, and/or recent cases backing FIs doing the wrong thing).

To bring these two contradictions together, however, was tricky. To resolve it, I needed to use a now time-honoured technique theorised by the capabilities school, and popularised by, amongst others, Pelle's original document service called wideword.net and Zooko's Tahoe-LAFS: the data that is uploaded over UDP is encrypted to keys known only to the clients.

And that is what happens. As my client software database spits out data in an append-only stream (that's how all safe databases work, right??) it stream-encrypts this and then sends the stream up to the server. So the server simply has to offer something similar to the Unix file metaphor: create, read, write, delete *and append*. Add in a hash feature to confirm, and we're set.
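A minimal sketch of that server-side file metaphor plus the hash confirmation, assuming an interface of my own invention (these names are not Ricardo's actual API):

```python
import hashlib

class StreamStore:
    """Unix-file-like service: create/read/write/delete *and append*,
    plus a digest operation so the client can confirm end to end.
    The server never holds keys; it only stores sealed bytes."""
    def __init__(self):
        self.files = {}

    def create(self, name):
        self.files[name] = bytearray()

    def append(self, name, data):
        self.files[name] += data

    def read(self, name):
        return bytes(self.files[name])

    def delete(self, name):
        del self.files[name]

    def digest(self, name):
        # server-side hash over its copy of the stream
        return hashlib.sha256(self.files[name]).hexdigest()

def confirmed(local_stream: bytes, store: StreamStore, name: str) -> bool:
    # client-side hash over the local stream; when they match, we're golden
    return hashlib.sha256(local_stream).hexdigest() == store.digest(name)
```

Note that the digest proves only that the server holds the same sealed bytes as the client; it says nothing about the plaintext, which the server never sees.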
(It's similar enough to REST/CRUD that it's worth a mention, but different enough to warrant a disclaimer.)

A third reason this is a big deal is because the rules of the game have changed. In the 1990s we were assuming a technically savvy audience, ones who could manage public keys and backups. The PGP generation, if you like. Now, we're assuming none of that. The thing has to work, and it has to keep working, regardless of user foibles. This is the Apple-Facebook generation.

This benchmark also shines adverse light on Bitcoin. That community struggles to deal with theft, lurching from hack to bug to bankruptcy. As a result of their obsession over The Number One Criminal (aka the government) and with avoiding control at the center, they are blinded to the costly reality of criminals 2 through 100. If Bitcoin hypothetically were to establish the user-friendly goal that they can keep going in the face of normal 2010s user demands and failure modes, it'd be game over. They basically have to handwave stuff away as 'user responsibility', but that doesn't work any more. The rules of the game have changed, we're not in the 1990s anymore, and a comprehensive solution is required.

Finally, once you can do things like cloud, it opens up the possibilities for whole new features and endeavours. That of course is what makes cloud so exciting for big corporates -- to be able to deliver great service and features to customers. I've already got a list of enhancements we can now start to put in, and the only limitation I have now is capital to pay the hackers. We really are at the cusp of a new generation of payment systems; crypto-plumbing is fun again!...

How MtGox Failed the Five Parties Governance Test

Wed, 02/26/2014 - 13:56
This was a draft of an article now published in Bitcoin Magazine. The latter is somewhat larger, updated and has some additional imagery.

MtGox, the Bitcoin exchange, is in the news again, this time for collapsing. One leaked report maintains that MtGox may only have 2,000 Bitcoins in reserve against 744,408 BTC in liabilities - which indicates a reserve of less than 1%. MtGox originally claimed that their troubles stemmed from a long-term exploit of the evil malleability bug, exploited by means of repeated, algorithmic double spending. However a loss of 99.7% of their reserves cannot be attributed to some mere market timing bug. It is clear that the failure of MtGox is a failure of governance.

Trust Shall Not Live by Tech Alone

One of the temptations for applied cryptographers is to think that we can solve all problems with clever mathematics and inspired code. Thus there has been much discussion over the past two decades about using cryptography to build trust models that work for untrusted parties over the Internet. This hope in cryptography is misplaced, and often dangerously so.

In the first generation of the Internet, SSL was promoted to solve the trust and security problem. However, it failed to do that. Although it secured the line of communications, it left the end-points open to attack, and failed to solve the problem of knowing who the person at an end-point really is. As history shows, and MtGox confirms, the end-point security question is by far the dominating one, and thus we saw the rise of phishing attacks, “man in the browser” attacks, and server breaches throughout the 2000s. Yet, SSL remains synonymous with Internet e-commerce security, and its very domination is a blindness that attackers benefit from.

Bitcoin can be broadly described as an attempt to solve the problem of governance of a centralised issuer of currency through technology.
By using a common protocol to manage a public blockchain, we can make sure everyone follows the rules and make it technically impossible to issue more Bitcoins than the protocol has decreed shall ever exist. However, like SSL, Bitcoin’s solution to the issuance problem has left the weaker parts of the system open to continued attack. In order to provide useful Bitcoin services, businesses must hold the users’ Bitcoins and/or their cash in trust. These businesses, such as exchanges, brokerages, online wallets, retail, etc, are at risk from insider theft, external hacking and loss through poor accounting. Bitcoin’s brilliant design for issuance governance may have obscured a complete lack of protection for end-point governance.

How can a user trust a person to protect his or her value? This is not a new problem for finance. It is called the “agency problem,” in reference to the fact that an agent acts for the user as a trusted intermediary. Institutions in the finance space have been dealing with the issue of trusted intermediaries for millennia. This field is broadly called “governance” and has many well known methods for achieving accountability and reliability for fiduciary institutions. Drawing from “Financial Cryptography in Seven Layers,” Governance includes the following techniques:

- Escrow of value with trusted third parties. For example, funds underlying a dollar currency would be placed in a bank account.
- Separation of powers: routine management from value creation, authentication from accounting, systems from marketing.
- Dispute resolution procedures such as mediation, arbitration, ombudsman, judiciary, and force.
- Use of third parties for some part of the protocol, such as creation of value within a closed system.
- Auditing techniques that permit external monitoring of performance and assets.
- Reports generation to keep information flowing to interested parties. For example, user-driven display of the reserved funds against which a currency is backed.
As technologists, we strive to make the protocols that we build as secure and self-sustaining as possible; our art is expressed in pushing problem resolution into the lower layers. This is an ideal, however, to which we can only aspire; there will always be some value somewhere that must be protected by non-protocol means. Our task is made easier if we recognise the existence of this gap in the technological armoury, and seek to fill it with the tools of Governance. The design of a system is often ultimately expressed in a compromise between Governance and the lower layers: what we can do in the lower layers, we do; and what we cannot is cleaned up in Governance.

The question then is how to bring those practices into a digital accounting and payment system. To address this weakness of customer escrowed funds, back in the late 1990s we developed a governance technique for digital currency that we called the “Five Parties Governance Model.” (This model was built into the digital currency platform that we designed for exchange, called “Ricardo”.) The five parties model shares the responsibility and roles for protection of value amongst five distinct parties involved in the transactions. Although originally designed to protect an entire digital issuance - a problem that Bitcoin addressed with its public blockchain and its absence of an asset redemption contract - this technique can be broadly applied to many problems, such as that which has brought MtGox down.

The Five Parties Model (5PM)

In terms of a cryptocurrency issuance with a single issuer (the Ricardo model), the Five Parties Model looks like this (Figure 1).

Figure 1. Simple Five Parties Model

Issuer. The Issuer is the institution guaranteeing the contract with the User. This is the person or entity ultimately responsible for the assets and whether the governance succeeds or fails.
In the present case, MtGox is the contractual party that is guaranteeing to deliver an exchange of value, and in the meantime keep those values secure. In Ricardo, the Issuer is the party who defines and offers the contract for a particular issuance, which contract creates the rules that govern the five parties. As can be seen from the following screen capture taken from the Internet Archive, MtGox did in fact have a contract with the users to fully reserve their internal Bitcoin and currency accounts:

Figure 2. Mt. Gox Terms & Conditions

However, as an Issuer, MtGox appears to have failed to implement internal controls to put the other four parties into place.

Trustee. In a digital value scenario, there is always a Trustee role that controls creation or release of long-term funds. For MtGox, this Trustee might be the person who signs off on outgoing wires and outgoing Bitcoin payments, or it might be the person who creates or deletes the derivative monetary units (BTC, LTC, EUR, USD, etc) inside the exchange’s books. For a cryptocurrency that contracts to an underlying asset, the Trustee’s account, sometimes known as the Mint account, is the only one that has the ability to create or destroy digital units of value, as that underlying asset pool increases or decreases. For a cryptocurrency without a contractual underlying, the protocol itself can stand in the person’s stead by employing an algorithm such as Bitcoin’s mining rewards program.

Manager. The Manager is the person or entity, usually an employee of the Issuer, who asks the Trustee to perform the big controlled operations: create or destroy digital assets, or deposit or withdraw physical ones, in order to reflect the overall pattern of trading activities. The Manager typically works on a daily trading basis. As funds come in and go out, some of these requests match each other. For a perfect balance, nothing needs to be done, but normally there is an overall flow in one direction or another.
As trading balances build up or draw down, the Manager asks the Trustee to authorise the conversion of daily trading assets against the long-term reserves. In the MtGox context, when BTC is flowing out and cash is flowing in, the Manager would ask the Trustee to release the BTC from the cold wallets, and would deliver cash into the long-term sweep accounts held at bank under the Trustee’s control. The Trustee would control that action by looking at the single transfer into the sweep account to confirm the transaction is backed by assets.

In the context of an issuance of digital gold, the Manager might receive an inflow of a 1kg physical bar. The Manager must bail the physical gold into the vault, and present the receipt to the Trustee. With that receipt in hand, and any other checks desired, the Trustee can now release 1kg of freshly-minted digital gold to the Manager’s account.

The Manager is in this way guarded by the Trustee, but it works the other way as well. In a well-governed system, the Trustee can only direct value to be sent to the Manager. In this way, the Trustee cannot steal the value under trust without conspiring with the Manager; a well-run business will keep these two parties at a distance and bound to govern each other by various techniques such as professional conduct codes. For example, Ricardo has an ability to lock the Mint’s account together with the Manager’s account in this fashion. Bitcoin lacks account-control features, but there is no reason that MtGox could not have implemented account-control for their internal Bitcoin accounts.

Operator / Escrow / Vault. For a cryptocurrency, the Operator is the part of the business ensuring that the servers and the software are running and properly doing their job. By outsourcing this to a third party, we add another degree of separation of powers to the governance model.
In the case of Ricardo and similar contractually-controlled issuances, there is generally a single server cluster that maintains the accounts. The sysadmin for this server controls the accounts and ensures that no phantom accounts or transactions are let in; software designs assist by including techniques such as triple entry accounting, which guarantees that only the original users can create signed instructions to transfer value, with their private keys.

For the physical side of a digital issuance such as gold, a vault fills the Operator role. In the case of GoldMoney.com, the vault operator is ViaMat. They don’t do anything with the client’s gold unless they receive a signed instruction from the Trustee. They just keep thieves from physically stealing it.

Bitcoin is very different in this respect in that it creates the public blockchain as the accounting mechanism. In this case, the Operator role is not outsourced to one party; rather, it is spread across the miners, the software and the development team, presenting a very strong governance equation over operator malfeasance.

For a business such as MtGox, the operators or escrows are two-fold. On the one part is the bank providing accounts, and especially the primary account holding long-term cash reserves. On the other part, as an exchange provider, is the set of cold wallets holding long-term BTC.

The Fifth Party - The Public as Auditor. The final and most important element of the Five Parties Model is the role of the Public as auditor. Typically, the role of audit is to examine the books to validate that the other parties are indeed doing their job. As is covered elsewhere (Audit), paid auditors have a long-term conflict of interest, which has been at the root of several notable disasters in the last decade - the failure of Enron, the wholesale bankruptcy of banking in the 2007 financial crisis, the collapse of AIG - for none of which did the auditors ring the bell.
Auditors, as well as being conflicted, are also expensive, which leads to the search for alternates. Once we have mined the cryptographic techniques available to us, we are still left with a set of things we cannot control so easily. What then?

Introducing you, the user, or the Public. You do not have a conflict of interest, in that it is your value at risk, and you have a strong interest in seeing that the other four parties are doing their job properly. Which then raises the question of how you, the public, can audit anything, when audit almost by definition means seeing that which cannot be seen? The answer is to make that which was previously unseen, seen. Some examples of digital currencies that have supported audit by you the Public include:

- e-gold.com published a real time balance sheet of their digital issuance.
- GoldMoney.com publishes their physical gold as held by their vault operators, and auditors publish the monthly report.
- Bitcoin publishes the blockchain.
- Ricardo publishes the balances of the Trustee and Manager accounts.

Two-Sided Variation on the Five Parties Model

The Five Parties Model is just and exactly that - a model. Which means there are variations and limitations, and a business must modify it to suit. For example, many businesses in the space have not one but two bases of value to control: an underlying asset and a digital issuance. Bitcoin exchanges fall into this category, for example. When an Issuer is backing the digital currency with a reserve asset, both of these assets need to be protected. To do this, we utilise two instances of the Five Parties Model in a mirrored pair. In each, the Issuer and the Public act as parties on both sides, whereas the Trustee, the Operator and the Manager may be duplicated (or not). Figure 3 shows an arrangement where a single Manager works with mirrored Operators and Trustees.

Figure 3.
Two-Sided or Mirrored Variation of the 5 Parties Model

An exchange such as MtGox would have had an even more complicated regime. For every one of their assets - BTC, altcoins, USD, EUR, JPY, etc. - they would have needed to delegate Operators, Trustees and Managers. We as users expect they did that, which then leaves us with a question -- what went wrong?

MtGox Failed Because Nobody Was Watching Them

We can now measure MtGox against the governance picture drawn above. Although originally developed for an issuance, the model applies wherever there is an important asset to protect. As a business, the role of Issuer is relatively easy to identify: the company MtGox itself. Their terms and conditions constituted a clear contract between themselves and the users, under which MtGox would hold the users' Bitcoin assets in reserve. Likewise, the Operator for cash is clear: the banks holding the long-term value are presumably identifiable via incoming and outgoing wires. MtGox had transactions going in and out for some time, so Managers are in evidence. The Operator for the long-term BTC cold wallets is the Bitcoin network itself.

What about Trustees? Although MtGox repeatedly placed blame on their in-house operations team for various hacks and bugs, it is rather more likely that they fell short on the appointment and management of Trustees. Somehow, the Management created for themselves 744,408 BTC on their internal books against an underlying reserve of only 2,000 actual Bitcoins, which should have been an obvious disaster to all. If so, this suggests that no Trustees were appointed at all, and Managers were essentially uncontrolled. Finally, the Public as auditor is not in evidence: MtGox on their website did not show the balances of any of their major asset classes, nor provide any easy way to ensure that the other parties were doing their job.
Ideally, MtGox would have displayed a balance sheet with references to cold wallets on one side, and their internal Bitcoin/altcoin balances on the other side. The former is checkable via the blockchain; the latter could be made available by the Operator, and periodically audited to ensure the code providing the balance query was accurate. With this information, you the Public - as individuals, or as media or other observers - can verify that things are as they should be, and if not, sound the alarm! That's what Twitter is for; that's what sites such as DGCMagazine.com, CoinDesk.com and BitcoinMagazine are for.

Without such a governance model in place, failure might be expected and indeed may be inevitable. As MtGox did not have a sufficient governance model, we might be disconcerted to learn that more than $300 million worth of Bitcoin managed to disappear, but we should also recognise that the blame ultimately rests on our own failure to insist on good governance. What other players in the Bitcoin world will fall for the same lack of care? You, the fifth party, the auditing Public, would be well advised to review all of your Bitcoin partners to see what forms of governance they use, and to choose wisely. It is your value at risk, and demanding quality governance such as is outlined above is your right....
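The balance-sheet check described above is mechanically trivial for the Public to run. A minimal sketch, with hypothetical wallet addresses and figures (a real check would query each published cold-wallet address against the blockchain, and take the liability total from the exchange's published books):

```python
# Hypothetical figures; addresses and balances are made up for illustration.
cold_wallets = {
    "1ColdWalletA": 1500.0,   # BTC balance, as read off the blockchain
    "1ColdWalletB": 500.0,
}
internal_books = {
    "alice": 900.0,           # BTC owed to each customer, per the exchange
    "bob": 600.0,
}

reserves = sum(cold_wallets.values())
liabilities = sum(internal_books.values())

# The Public's audit reduces to one comparison: does the reserve cover
# the books? An MtGox-style 744,408 vs 2,000 gap fails it instantly.
print(f"reserves={reserves} liabilities={liabilities} "
      f"solvent={reserves >= liabilities}")
```

The design point is that both sides of the comparison are independently observable, so no paid auditor need stand between the exchange and its users.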

Why Dispute Resolution is hard -- but not so elusive as to escape solutions

Tue, 02/18/2014 - 02:41
Steven J. Murdoch and Ross Anderson have released a paper entitled "Security Protocols and Evidence: Where Many Payment Systems Fail," to be presented in a few weeks at the Financial Cryptography conference in Barbados. It is very welcome to see people pointed in the direction of dispute resolution, because it is indeed a make-or-break area for payment systems. The paper itself is a light read, with some discussion of failures, and some suggestions of what to do about it. Where it gets interesting is that the paper tries to espouse some Principles, a technique I often use to get my thoughts in order. Let's look at them:

Principle 1: Retention and disclosure. Protocols designed for evidence should allow all protocol data and the keys needed to authenticate them to be publicly disclosed, together with full documentation and a chain of custody.

Principle 2: Test and debug evidential functionality. When a protocol is designed for use in evidence, the designers should also specify, test and debug the procedures to be followed by police officers, defence lawyers and expert witnesses.

Principle 3: Open description of TCB. Systems designed to produce evidence must have an open specification, including a concept of operations, a threat model, a security policy, a reference implementation and protection profiles for the evaluation of other implementations.

Principle 4: Failure-evidentness. Transaction systems designed to produce evidence must be failure-evident. Thus they must not be designed so that any defeat of the system entails the defeat of the evidence mechanism.

Principle 5: Governance of forensic procedures. The forensic procedures for investigating disputed payments must be repeatable and be reviewed regularly by independent experts appointed by the regulator. They must have access to all security breach notifications and vulnerability disclosures.
I have done these things in the past, in varying degrees and fashions, so they are pointing in the right direction, but I feel /as principles/, they fall short. Let's work through them.

With P1, public disclosure immediately raises an issue. This is similar to the Bitcoin mentality that the blockchain should be public, something which has become so tantalising that regulators are even thinking about mandating it. But we live in a world of crooks. Does this mean that a new attack is now about to become popular -- using the courts to force the publication of one's victim's secrets? The reason for financial privacy is to stop scumbags knowing where the loot is, and that is a good reason. As we enter a world ever more transparent to crooks, because of such innovations as Internet data tracking, economic intelligence harvesting, drugs-ML, AML, sharing of seized value by government agencies, monolithic banks incentivised to cross-sell and compete, etc, the need for financial privacy goes up, not down.

If you look at M&A's paper, the frustration they faced in the courts was that the banks argued they couldn't disclose the secrets. Yet, courts readily deal with this already. Lawyers know how to keep secrets; it's their job. So we're really facing a different problem, which is that the banks snowed the judge with bluff and bluster, and the judge didn't blink. As Stephen Mason writes in "Debit Cards, ATMs and the negligence of the bank and customer," in Butterworths Journal of International Banking and Financial Law, March 2012:

"The only reason the weaknesses have been revealed in some instances, as discussed in this article, is because the banks were required to cooperate with the investigating authorities and explain and provide evidence of such weaknesses before the criminal courts. In civil actions, the banks have no incentive to reveal such weaknesses. The banks will deny that their systems suffer from any weaknesses, placing the blame squarely on the customer."
The real problem here is that banks do not want to provide the evidence; for them, suppression of the evidence is part of their business process, a feature not a bug. Hence, Principle 1 above is not sufficient, and it could be written more simply:

P1. Payment protocols should be designed for evidence.

which rules out the banks' claims. But even that doesn't quite work. Now, I'm unsure how to make this point in words, so I'll simply slam it out:

P1. Payment protocols should be designed to support dispute resolution.

Which is a more subtle, yet comprehensive principle. To a casual outside observer it might appear the same, because people typically see dispute resolution as the presentation of evidence, and our inner techie sees our role as the creation of that evidence. But dispute resolution is far more important than that. How do you file a dispute? Who is the judge? Where are you, and what is your law? Who holds the burden of proof? What is the boundary between testimony and digital evidence? In the forum you have chosen, what are the rules of procedure? How do they affect your case? These are seriously messy questions.

Take the recent British case of Shojibur Rahman v Barclays Bank PLC, as reported (judgement, appeal) in Digital Evidence and Electronic Signature Law Review, 10 (2013). In this case, a fraudster apparently tricked the victim into handing over a card and also the PIN. This encouraged Barclays to claim no liability for the frauds that followed. Notwithstanding this claim, the bank was still required to show that it authenticated the transactions. In both of the two major transactions conducted by the fraudster, the bank failed to show that it had authenticated them correctly.
In the first, Barclays presented no evidence one way or another, and the card was not in use for that transaction, so the bank simply failed to meet its burden of proof, as well as its own standards of authentication, as it was undisputed that the fraudster initiated the transaction. In the second, secret questions were asked by the bank as the transaction was suitably huge, /and wrong answers were accepted/. Yet, at first instance and on appeal, the judges held that because the victim had failed in his obligation to keep the card secure, defendant Barclays was relieved of its duty to authenticate the transactions.

This is an outstanding blunder of justice -- if the victim makes even one mistake, then the banks can rest easy. Knowing that the banks can refuse to provide evidence, knowing that the systems are so complex that mistakes are inevitable, knowing that the fraudsters conduct sophisticated and elegant social attacks, and knowing that the banks prepared the systems in the first place, this leaves the banks in a pretty position. They are obviously encouraged to hold back from supporting their customer as much as possible.

What is really happening here is a species of deception, and/or fraud, sometimes known as liability shifting or dumping. The banks are actually making a play to control and corral the dispute resolution into the worst place possible for you, and the best place for them -- their local courts. Meanwhile, they are telling you, the innocent victim, that they've got it all under control, and your rights are protected. In terms of P1 above, they are actually designing their system so that dispute resolution is tilted in their favour, not yours. They should not.

Then, let's take Principle 2, testing the evidence functionality. The problem with this is that, in software production, testing is always the little lost runt of the litter.
Everyone says they will look after her, and promise to do their best, but when it matters, she's just the little squealing nuisance underfoot. Testing always gets left behind, locked in the back room with the aunt that nobody wants to speak to. But we can take a more systemic view. What we financial cryptographers do for this situation is to flip it around. Instead of repeating the marketing blather of promises of more testing, we make the test part of the protocol. In other words, the only useful test is one that is done automatically as part of the normal routine.

P2. Evidence is part of the protocol.

You can see this with backups. Most backup problems occur because the backups were never actually verified at the time they were created. So a good backup system opens up its product and compares it back to what was saved. That is, the test is part of the cycle.

But we can go further. When we start presenting this evidence to the fraternity of dispute resolution, we immediately run into another problem highlighted by the above words: "the designers should also specify, test and debug the procedures to be followed by police officers, defence lawyers and expert witnesses." M&A were aware of cases such as the one discussed above, and seek to make the evidence stronger. But the flaw in their proposal is that the process so promoted is *expensive*, and it therefore falls into the trap of raising the costs of dispute resolution. Which makes it commensurately less effective and less available, which breaches P1.

And to segue, Principle 3 above also fails the same economic test. If you do provide all that good open TCB stuff, you now need to pull in expert witnesses to attest to the model. And one thing we've learnt over the years is that TCBs are fertile ground for two opposing expert witnesses to disagree entirely, both be right, and both be exceedingly expensive.
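The backup example can be made concrete. A minimal sketch of a backup routine with the test built into the cycle; the file names are illustrative:

```python
import hashlib
import os
import tempfile

def sha256_file(path: str) -> str:
    """Digest a file in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(src: str, dst: str) -> bool:
    # The backup step: copy the data out...
    with open(src, "rb") as s, open(dst, "wb") as d:
        d.write(s.read())
    # ...and the test, run as part of the same routine: open up the
    # product and compare it back to what was saved.
    return sha256_file(src) == sha256_file(dst)

# Illustrative run against temporary files.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "ledger.dat")
dst = os.path.join(workdir, "ledger.bak")
with open(src, "wb") as f:
    f.write(b"account records")
print(backup_and_verify(src, dst))  # True: the copy matches the original
```

A backup that cannot pass this check is rejected the moment it is made, not years later in front of a judge.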
As before, this approach increases the cost, and therefore reduces the availability, of dispute resolution, and thus breaches P1. And, it should be noted that a developing popular theme is that standards and TCBs and audits and other big-costing complicated solutions are used as much to clobber the user as they are to achieve some protection. The TCB is always prepared in advance by the bank, so no prizes for guessing where that goes; the institution-designed TCB is antithetical to the principle of protecting the user, so it can have no place among the principles.

Now, combining these points, it should be clear that we want to get the costs down. I can now introduce a third principle:

P3: The evidence is self-evident.

That is, the evidence must be self-proving, and it must be easily self-proving to the judge, who is no technical wizard. This standard is met if the judge can look at it and know instantly what it is, and, likewise, so can a jury. This also covers Principle 5. For an example of P3, look at the Ricardian Contract, which has met this test before judges.

Principle 4 is likewise problematic. It assumes so much! Being able to evidence a fraud, but not stop it, is a two-edged sword. Indeed, it assumes so much understanding of how the system is attacked that we can also say: if we know that much about the fraud, we should be able to eliminate it anyway. Why bother to be evidence-protected when we can stop it? So I would prefer something like:

P4: The system is designed to reduce the attack surface areas, and where an attack cannot be eliminated, it should be addressed with a strong evidence trail.

In other words, let's put the horse before the cart. Finally, another point I would like to bring out, which might now be evident from the foregoing, is this:

P5: The system should be designed to reduce the costs to both parties, including the costs and benefits of dispute resolution.
It's a principle because that is precisely what the banks are not doing; without taking this attitude, they will also then go on to breach P1. As correctly pointed out in the paper, banks fight these cases for their own profit motive, not for their customers' cost motives. Regulation is not the answer, as raising the regulatory barriers plays into their hands and allows them to raise prices, but we are well out of scope here, so I'll drift no more into competition. As an example of how this has been done, see this comparison between the systems designed by CAcert and by Second Life. And Steve Bellovin's "Why the US Doesn't have Chip-and-PIN Credit Cards Yet" might be seen as a case study of P5.

In conclusion, it is very encouraging that the good work that has been done in dispute resolution for payment systems now has a chance of being recognised. But it might be too early for the principles as outlined, and as can be seen above, my efforts scratched out over a day are somewhat different. What is going to be interesting is to see how the Bitcoin space evolves to deal with the question, as it has already mounted some notable experiments in dispute resolution, such as Silk Road. Good timing for the paper then, and I look forward to reports of lively debate at FC in Barbados, where it is presumably to be presented....

If you only read one thing this weekend, read about the Vampire Squid

Sat, 02/15/2014 - 03:56
If you read only one thing this weekend, read this. This is why the 2007 crisis was not resolved. This is why we now socialize their losses, but leave them their profits. This is why it is impossible to fix, and the only game in town is predicting which economy is toast, this weekend, and which investment bank is making monopoly profits while being technically bankrupt. It is likely impossible to roll back the USA's lifting of the Glass-Steagall barrier, which is in other places known as sound banking. How one deals with a world in which banking is morphing into industrial combines with infinite and free capital is beyond my small brain; we need something like bitcoin, but much stronger. Hack on, your code may save society as we know it....

Bitcoin Verification Latency -- MtGox hit by market timing attack, squeezed between the water of impatience and the rock of transactional atomicity

Mon, 02/10/2014 - 04:36
Fresh on the heels of our release of "Bitcoin Verification Latency -- The Achilles Heel for Time Sensitive Transactions," it seems that Mt.Gox has been hit by exactly that: a market timing attack based on latency. In their own words:

Non-technical Explanation: A bug in the bitcoin software makes it possible for someone to use the Bitcoin network to alter transaction details to make it seem like a sending of bitcoins to a bitcoin wallet did not occur when in fact it did occur. Since the transaction appears as if it has not proceeded correctly, the bitcoins may be resent. MtGox is working with the Bitcoin core development team and others to mitigate this issue.

Technical Explanation: Bitcoin transactions are subject to a design issue that has been largely ignored, while known to at least a part of the Bitcoin core developers and mentioned on the BitcoinTalk forums. This defect, known as "transaction malleability" makes it possible for a third party to alter the hash of any freshly issued transaction without invalidating the signature, hence resulting in a similar transaction under a different hash. Of course only one of the two transactions can be validated. However, if the party who altered the transaction is fast enough, for example with a direct connection to different mining pools, or has even a small amount of mining power, it can easily cause the transaction hash alteration to be committed to the blockchain.

The bitcoin api "sendtoaddress" broadly used to send bitcoins to a given bitcoin address will return a transaction hash as a way to track the transaction's insertion in the blockchain. Most wallet and exchange services will keep a record of this said hash in order to be able to respond to users should they inquire about their transaction.
It is likely that these services will assume the transaction was not sent if it doesn't appear in the blockchain with the original hash and have currently no means to recognize the alternative transactions as theirs in an efficient way. This means that an individual could request bitcoins from an exchange or wallet service, alter the resulting transaction's hash before inclusion in the blockchain, then contact the issuing service while claiming the transaction did not proceed. If the alteration fails, the user can simply send the bitcoins back and try again until successful.

Which all means what? Well, it seems that while waiting on a transaction to pop out of the blockchain, one can rely on a token to track it. And so can one's counterparty. Except, this token was not exactly constructed on a security basis, and the initiator of the transaction can break it, leading to two naive views of the transaction. Which leads to some game-playing.

Let's be very clear here. There are three components to this break: latency, impatience, and a bad token. Latency is the underlying physical problem, also known as the coordination problem or the two-generals problem. At a deeper level, as latency on a network is a physical certainty limited by the speed of light, there is always an open window of opportunity for trouble when two parties are trying to agree on anything. In fast payment systems, that window isn't a problem for humans (as opposed to algos), as good payment systems clear in less than a second, sometimes known as real time. But not so in Bitcoin, where the latency ranges from 5 minutes up to 120 depending on your assumptions, which leaves an unacceptable gap between the completion of the transaction and the users' expectations. Hence the second component: impatience. The 'solution' to the settlement-impatience problem, then, is the hash token that substitutes as a final (triple entry) evidentiary receipt until the blockchain settles.
This hash or token used in Bitcoin is broken, in that it is not cryptographically reliable as a token identifying the eventual settled payment. Obviously, the immediate solution is to fix the hash, which is what Mt.Gox is asking the Bitcoin dev team to do. But this assumes that the solution is in fact a solution. It is not. It's a hack, and a dangerous one.

Let's go back to the definition of payments, again assuming the latency of coordination. A payment is initiated by the controller of an account. That payment is like a cheque (or check) that is sent out. It is then intermediated by the system, which produces the transaction. But as we all know with cheques, a controller can produce multiple cheques. So a cheque is more like a promise that can be broken. And as we all know with people, relying on the cheque alone isn't reliable enough by and of itself, so the system must resolve the abuses.

That fundamental understanding in place, here's what Bitcoin Foundation's Gavin Andresen said about Mt.Gox:

The issues that Mt. Gox has been experiencing are due to an unfortunate interaction between Mt. Gox's implementation of their highly customized wallet software, their customer support procedures, and their unpreparedness for transaction malleability, a technical detail that allows changes to the way transactions are identified. Transaction malleability has been known about since 2011. In simplest of terms, it is a small window where transaction ID's can be "renamed" before being confirmed in the blockchain. This is something that cannot be corrected overnight. Therefore, any company dealing with Bitcoin transactions and have coded their own wallet software should responsibly prepare for this possibility and include in their software a way to validate transaction ID's. Otherwise, it can result in Bitcoin loss and headache for everyone involved.

Ah. Oops. So it is a known problem. So one could make a case that Mt.Gox should have dealt with it, as a known bug.
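The mechanics of the broken token can be sketched in miniature. This is not Bitcoin's actual serialization format; the point is only that the identifier is a double SHA-256 over the whole transaction, signature encoding included, so a third party who re-encodes the signature (without invalidating it) produces a different identifier for the same payment:

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    # As in (pre-SegWit) Bitcoin, the identifier is a double SHA-256
    # over the entire serialized transaction, signature included.
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

# Toy transaction: the economic content plus a signature over it.
body = b"pay 10 BTC from A to B"
sig_original  = b"<DER-encoding-1 of the signature>"
sig_reencoded = b"<DER-encoding-2 of the same signature>"  # still verifies

id_original  = txid(body + sig_original)
id_reencoded = txid(body + sig_reencoded)

# Same payment, same (notionally valid) signature, two different
# "transaction IDs" -- a service watching only the first ID will
# conclude its payment never settled.
print(id_original != id_reencoded)  # True
```

This is exactly the gap the Mt.Gox attacker sat in: the token commits to the mutable encoding, not to the payment itself.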
But note the language above... Transaction malleability? That is a contradiction in terms. A transaction isn't malleable; the very definition of a transaction is that it is atomic: it is, or it isn't. ACID, for those who recall the CS classes: Atomic, Consistent, Isolated, Durable. Very simply put, that which is put into the beginning of the blockchain calculation cycle /is not a transaction/, whereas that which comes out is, assuming some handwavy number of 10-minute cycles, such as 6. Therefore, the identifier of which they speak cannot be a transaction identifier, by definition. It must be an identifier to... something else!

What's happening here then is more likely a case of cognitive dissonance, leading to a regrettable and unintended deception. Read Mt.Gox's description above, again, and the reliance on the word becomes clearer. Users have come to demand transactions because we techies taught them that transactions are reliable, by definition; Bitcoin provides the word but not the act. So the first part of the fix is to change the words back to ones with reliable meanings. You can't simply undefine a term that has been known for 40 years and expect the user community to follow.

To be clear, I'm not suggesting what the terms should be. In my work, I simply call what goes in a 'Payment', and what comes out a 'Receipt'. The latter Receipt is equated to the transaction, and in my lesson on triple entry, I often end with a flourish: The Receipt is the Transaction. Which has more poetry if you've experienced transactional pain before, and you've read the whole thing. We all have our dreams :)

That still leaves us with the impatience problem. Note that this will also affect any other crypto-currency using the same transaction scheme as Bitcoin.

Conclusion

To put things in perspective, it's important to remember that Bitcoin is a very new technology and still very much in its early stages.
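The Payment/Receipt distinction can also be sketched. This is a toy flow, not the actual Ricardo protocol: HMAC stands in for real digital signatures and the names are illustrative. The user signs a Payment (a request, breakable like a cheque); the issuer settles it and countersigns, and only that countersigned Receipt is the transaction:

```python
import hashlib
import hmac
import json

def sign(key: bytes, data: bytes) -> str:
    # Stand-in for a real digital signature.
    return hmac.new(key, data, hashlib.sha256).hexdigest()

user_key = b"user-signing-key"        # illustrative key material
issuer_key = b"issuer-signing-key"

# The Payment: a signed request. Like a cheque, it is only a promise.
payment = json.dumps({"from": "alice", "to": "bob", "amount": 5},
                     sort_keys=True).encode()
payment_sig = sign(user_key, payment)

# The issuer settles, then countersigns the payment plus the user's
# signature. The Receipt is the Transaction: it is final evidence in
# itself, and there is no interim token for an attacker to game.
receipt = payment + b"|" + payment_sig.encode()
receipt_sig = sign(issuer_key, receipt)

# Anyone holding (receipt, receipt_sig) holds the full evidence;
# anyone without it has no transaction to point at.
```

The design choice is that nothing issued before settlement carries evidentiary weight, which removes the malleable-token problem by construction.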
What MtGox and the Bitcoin community have experienced in the past year has been an incredible and exciting challenge, and there is still much to do to further improve. When we did our early work in this, we recognised that the market timing attack comes from the implicit misunderstanding of how latency interferes with transactions, and how impatience interferes with both of them. So in our protocols, there is no 'token' that is available to track a pending transaction. This was a deliberate, early design decision, and indeed the servers still just dump and ignore anything they don't understand in order to force the clients away from leaning on unreliable crutches. It's also the flip side of the triple-entry receipt -- its existence is the full evidence, hence, the receipt is the transaction. Once you have the receipt, you're golden, if not, you're in the mud. But Bitcoin had a rather extraordinary problem -- the distribution of its consensus on the transaction amongst any large group of nodes that wanted to play. Which inherently made transactional mechanics and latency issues blow out. This is a high price to pay, and only history is going to tell us whether the price is too high or affordable....

Digital Evidence journal is now open source!

Sat, 02/08/2014 - 23:47
Stephen Mason, the world's foremost expert on the topic, writes (edited for style):

The entire Digital Evidence and Electronic Signature Law Review is now available as open source for free here: Current Issue | Archives. All of the articles are also available via university library electronic subscription services which require accounts: EBSCO Host, HeinOnline, v|lex (has abstracts). If you know of anybody that might have the knowledge to consider submitting an article to the journal, please feel free to let them know of the journal.

This is significant news for the professional financial cryptographer! For those who are interested in what all this means, this is the real stuff. Let me explain.

Back in the 1980s and 1990s, there was a little thing called the electronic signature, and its RSA cousin, the digital signature. Businesses, politicians, spooks and suppliers dreamed that they could inspire a world-wide culture of digitally signing your everything with a hand wave, with the added joy of non-repudiation. They failed, and we thank our lucky stars for it. People do not want to sign away their life every time some little plastic card gets too close to a scammer, and thankfully humanity had the good sense to reject the massively complicated infrastructure that was built to enslave them. However, a suitably huge legacy of that folly was the legislation around the world to regulate the use of electronic signatures -- something that Stephen Mason has catalogued here.

In contrast to the nuisance level of electronic signatures, in parallel, a separate development transpired which is far more significant. This was the increasing use of digital techniques to create trails of activity, which led to the rise of digital evidence, and its eventual domination in legal affairs.
Digital discovery is now the main act, and the implications have been huge, if little understood outside legal circles, perhaps because of the persistent myth in technology circles that without digital signatures, evidence was worth less. Every financial cryptographer needs to understand the implications of digital evidence, because without this wisdom, your designs are likely crap. They will fail when faced with real-world trials, in both senses of the word. I can't write the short primer on digital evidence for you -- I'm not the world's expert, Stephen is! -- but I can /now/ point you to where to read. That's just one huge issue, hitherto locked away behind a hugely dominating paywall. Browse away at all 10 issues!...

US State Department rolled, as NSA slides further off-mission. Shoulda used a BlackPhone :D

Sat, 02/08/2014 - 12:36
In what is either belly laugh-level hilarity, or a serious wakeup call for the American taxpayer, Reuters reports on the recent "Fuck the EU" leaks of phone calls. (h/t to zerohedge.) It turns out the recordings may have been (gasp) lifted off the airwaves: Some U.S. officials blamed Moscow for leaking the call, noting that the recording, posted anonymously, was first highlighted in a tweet from a Russian official. In Washington, U.S. officials said Nuland and Pyatt apparently used unencrypted cellphones, which are easy to monitor. The officials said smart phones issued to State Department officials had data encryption *but not voice encryption*. Wtf? Where the hell are you, oh, NSA's security division aka Central Security Service? The Information Assurance mission confronts the formidable challenge of preventing foreign adversaries from gaining access to sensitive or classified national security information. How is it that officials of the State Department have zero, zip, nada, nuttin security while blathering on about international negotiations involving an entire strategic country, a major pipeline, and the number one PR circle-jerk for the nation-states? I had thought that all these things were in the killing zone for the NSA. Ukraine, energy, the Olympic Games, check check check! But apparently not. The evidence on mission drift is somewhat damning, and becoming deafening. They have dropped the baby in many ways. They recently downgraded their irrational fear of terrorism, by prioritising the insider threat as a 'national security threat'. Without apparently understanding the bleeding obvious, that insiders such as Snowden and Manning are a threat to them, not to the people who pay their salaries: “[Snowden and the insider threat] certainly puts us at risk of missing something that we are trying to see, which could lead to [an attack],” said Matthew Olsen, the director of the National Counterterrorism Center. Spoken without any cynicism or humility! 
If they got back to work, and crafted their mission to deliver return on investment to the taxpayer, instead of stealing from other countries' taxpayers, they wouldn't have time to worry about schoolboy plots like terrorism and rogue sysadmins. Message to the American taxpayer: demand your money back. Buy a blackphone instead....

The financial rot just keeps getting worse -- FX is FuXed, the Old Lady's in on the FiX, and the fight against the devil volatility goes on?

Fri, 02/07/2014 - 12:22
FT comes out with this tantalising flash of the gauntlet, at 4pm Friday: The BoE representatives have on several occasions asked whether a particular currency fix can be manipulated, one member of the committee has told the Financial Times previously. Bloomberg, being American and less subtle, loads up both barrels and lets fire, also at 4pm Friday: Bank of England officials told currency traders it wasn’t improper to share impending customer orders with counterparts at other firms, a practice at the heart of a widening probe into alleged market manipulation, according to a person who has seen notes turned over to regulators. A senior trader gave his notes from a private April 2012 meeting of currency dealers and two central bank staff members to the Financial Conduct Authority about six weeks ago because of mounting media coverage of the investigation, said the person, who asked not to be named while probes are under way. Traders representing some of the world’s biggest banks told officials at the meeting that they shared information about aggregate orders before currency benchmarks were set, three people with knowledge of the discussion said. The officials said there wasn’t a policy on such communications and that banks should make their own rules, according to the people. ... During a 15-minute conversation on currency benchmarks, traders said they used chat rooms to match buyers and sellers ahead of the fix to avoid trading at one of the most volatile periods of the day, the people said. That required them to share aggregate positions. They instigated the discussion because they were concerned that similar practices were under scrutiny at the time in the Libor investigations, the people said. The Bank of England officials said they viewed the practices as positive to reduce market volatility and wouldn’t take the matter to the standing committee, according to the people with knowledge of the meeting. 
That body included a representative from the Financial Services Authority, the FCA’s predecessor, according to central bank records.

(My humble emphasis.) As a flat-out claim of a go-ahead for insider trading, it doesn't get any more damning. Expect heads to roll. Names were named:

Dealers at the April 2012 meeting with Martin Mallett, the Bank of England’s chief currency dealer, and James O’Connor, who works in its foreign-exchange division, were told not to record the discussion or take notes, one of the people said. One trader wrote down what was said soon after leaving because of concerns spawned by investigations of attempted manipulation of the London interbank offered rate, or Libor, the person said.

And boom! I'm not sure what the term is for wilfully avoiding a trail of evidence, but it's close enough to establishing intent as makes no difference. Messrs Mallett and O'Connor are unavailable for comment (4pm, Friday) because they're trying to drag their lawyers out of some Threadneedle Street pub, one hopes.

It's enough to turn the crisis-weary public to Bitcoin. How on earth can regulators thumb their noses at the blockchain when $5.8 billion of fines have been slapped on the Libor scandal, just ONE of the corruptions in the banking world?...

FC++ -- Bitcoin Verification Latency -- The Achilles Heel for Time Sensitive Transactions

Mon, 02/03/2014 - 05:03
New paper for circulation by Ken Griffith and myself: Bitcoin Verification Latency -- The Achilles Heel for Time Sensitive Transactions

Abstract. Bitcoin has a high latency for verifying transactions, by design. Averaging around 8 minutes, such high latency does not resonate with the needs of financial traders for speed, and it opens the door for time-based arbitrage weaknesses such as market timing attacks. Although perhaps tractable in some markets such as peer to peer payments, the Achilles heel of latency makes Bitcoin unsuitable for direct trading of financial assets, and ventures seeking to exploit the market for financial assets will need to overcome this burden.

As with the Gresham's paper, developments moved fast on this question, and there are now more ventures looking at the contracts and trading question. For clarification, I am the secondary author; Ken is lead....
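The latency claim can be illustrated with a toy model (mine, not the paper's): if block production is treated as a Poisson process with Bitcoin's 10-minute target interval, then by memorylessness a transaction broadcast at any random moment still waits a full mean interval, on average, before its first confirmation. The paper's ~8-minute figure is its own average; this sketch only shows why the wait is long by design.

```python
import random

# Illustrative simulation, not the paper's methodology: model block arrivals
# as a Poisson process, so the wait from broadcast to the next block is
# exponentially distributed -- and, by memorylessness, averages a full
# block interval no matter when the transaction is broadcast.
random.seed(42)

MEAN_BLOCK_INTERVAL = 10.0  # minutes, Bitcoin's design target

def wait_for_confirmation() -> float:
    """Time from broadcast to next block, assuming Poisson block arrivals."""
    return random.expovariate(1.0 / MEAN_BLOCK_INTERVAL)

waits = [wait_for_confirmation() for _ in range(100_000)]
avg = sum(waits) / len(waits)
print(f"average first-confirmation latency: {avg:.1f} minutes")
```

The simulated average comes out near the full 10-minute interval, which is the heart of the market-timing problem for time-sensitive trades.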

Hard Truths about the Hard Business of finding Hard Random Numbers

Thu, 01/30/2014 - 09:34
Editorial note: this rant was originally posted here but has now moved to a permanent home where it will be updated with new thoughts.

As many have noticed, there is now a permathread (Paul's term) on how to do random numbers. It's always been warm. Now the arguments are on solid simmer, raging on half a dozen cryptogroups, all thanks to the NSA and their infamous breach of NIST, American industry, mom's apple pie and the privacy of all things from Sunday school to Angry Birds.

Why is the topic of random numbers so bubbling, effervescent, unsatisfying? In short, because generators of same (RNGs) are *hard*. They are in practical experience trickier than most of the other modules we deal with: ciphers, HMACs, public key, protocols, etc.

Yet, we have come a long way. We now have a working theory. When Ada put together her RNG this last summer, it wasn't that hard. Out of our experience, herein is a collection of things we figured out; with the normal caveat that, even as RNs require stirring, the recipe for 'knowing' is also evolving.

Use what your platform provides. Random numbers are hard, which is the first thing you have to remember, and always come back to. Random numbers are so hard that you have to care a lot before you get involved. A hell of a lot. Which leads us to the following rules of thumb for RNG production:

1. Use what your platform provides.
2. Unless you really really care a lot, in which case, you have to write your own RNG.
3. There isn't a lot of middle ground.

So much so that for almost all purposes, and almost all users, Rule #1 is this: Use what your platform provides. When deciding to breach Rule #1, you need a compelling argument that your RNG delivers better results than the platform's. Without that compelling argument, your results are likely to be more random than the platform's system in every sense except the quality of the numbers.

Software is our domain. Software is unreliable.
It can be made reliable under bench conditions, but out in the field, any software of more than one component (always) has opportunities for failure. In practice, we're usually talking dozens or hundreds, so failure of another component is a solid possibility; a real threat.

What about hardware RNGs? Eventually they have to go through some software, to be of any use. Although there are some narrow environments where there might be a pure hardware delivery, this is so exotic, and so alien to the reader here, that there is no point in considering it. Hardware serves software. Get used to it.

As a practical reliability approach, we typically model every component as failing, and try to organise our design to carry on.

Security is also our domain, which is to say we have real live attackers. Many of the sciences rest on a statistical model, which they can do in the absence of any attackers. According to Bernoulli's law of large numbers, models of data will even out over time and quantity. In essence, we can then use statistics to derive strong predictions. If random numbers followed the law of large numbers, then measuring 1000 of them would tell us with near certainty that the machine was good for another 1000.

In security, we live in a byzantine world, which means we have real live attackers who will turn our assumptions upside down, out of spite. When an attacker is trying to aggressively futz with your business, he will also futz with any assumptions and with any tests or protections you have that are based on those assumptions. Once attackers start getting their claws and bits in there, the assumption behind Bernoulli's law falls apart. In essence this rules out lazy reliance on statistics.

No Test. There is no objective test of random numbers, because it is impossible to test for unpredictability. Which in practical terms means that you cannot easily write a test for it, nor can any test you write do the job you want it to do.
This is the key unfortunate truth that separates RNs out from ciphers, etc. (the latter are amenable to test vectors, and with vectors in hand, they become tractable).

Entropy. Everyone talks about entropy, so we must too, else your future RNG will exhibit the wrong sort of unpredictability. Sadly, entropy is not precisely the answer, enough that talking about it is likely to miss the point. If we could collect it reliably, RNs would be easy. We can't, so it isn't.

Entropy is manifest physical energy, causing events which cannot be predicted using any known physical processes, by the laws of science. Here, we're typically talking about quantum energy, such as the unknown state of electrons, which can collapse either way into some measurable state, but which can only be known by measurement, and not predicted earlier. It's worth noting that quantum energy abounds inside chips and computers, but chips are designed to reduce the noise, not increase it, so turning chip entropy into RNs is not as easy as talking about it.

There are objective statements we can make about entropy. The objective way to approach the collection of entropy is to carefully analyse the properties of the system and apply science to estimate the amount of (e.g.) quantum uncertainty one can derive from it. This is possible and instructive, and for a nice (deep) example of this, see John Denker's Turbid.

At the level of implementation, objective statements about entropy fail for two reasons. Let's look at those, as understanding these limitations on objectivity is key to understanding why entropy does not serve us so willingly.

First, entropy can be objectively analysed only as long as we do not have an attacker. An attacker can deliver a faulty device, can change the device, and can change the way the software deals with the device at the device driver level. And much more...

Second, the approach is complete only if we have control of our environment. Of course, it is very easy to say Buy the XYZ RNG and plug it in.
But many environments do not have that capability, often enough we don't know our environment, and the environment can break or be changed. Examples: rack servers lacking sound cards; phones; VMs; routers/firewalls; early startup on embedded hardware.

In conclusion, entropy is too high a target to reach. We can reach it briefly, in controlled environments, but not enough to make it work for us. Not enough, given our limitations.

CSRNs. The practical standard to reach therefore is what we call Cryptographically Secure Random Numbers (CSRNs): numbers that are not predictable /to an attacker/. In contrast to entropy, we might be able to predict our CSRNs, but our enemies cannot. This is a strictly broader and easier definition than entropy, which is needed because collecting entropy is too hard, as above.

Note our one big assumption here: that we can determine who is our attacker and keep him out, and determine who is friendly and let them in. This is a big flaw! But it happens to be a very basic and ever-present one in security, so while it exists, it is one we can readily work with.

Design. Much experimentation and research seems to have settled on the following design pattern, which we call a Trident Design Pattern:

Entropy collector ----\
                       \     _____           _________
                        \   /     \         /         \
Entropy collector ------->( mixer )------->( expansion )-----> RNs
                        /   \_____/         \_________/
                       /
Entropy collector ----/

In short, many collectors of entropy feed their small contributions in to a Mixer, which uses the melded result to seed an Expander. The high level caller (application) uses this Expander to request her random numbers.

Collectors. After all the above bad news, what is left in the software toolkit is: redundancy. A redundant approach tells us to draw our RNs from different places. The component that collects RNs from one place is called a Collector. Therefore we want many Collectors. Each of the many places should be uncorrelated with each other.
If one of these were to fail, it would be unlikely that others also would fail, as they are uncorrelated. Typical studies of fault-tolerant systems often suggest the number 3 as the target. Some common collector ideas are:

- the platform's own RNG, as a Collector into your RNG
- any CPU RNG such as Intel's RDRAND
- measuring the difference between two uncorrelated clocks
- timings and other measurands from events (e.g., mouse clicks and locations)
- available sensors (movement on phones)
- differences seen in incoming new business packets
- a roughly protected external source such as a business feed

By the analysis that got us past Rule #1, there are no great Collectors by definition, as otherwise we'd already be using them, and this problem would go away. An attacker is assumed to be able to take a poke at one or two of these sources, but not all. If the attacker can futz with all our sources, this implies that he has more or less unlimited control over our entire machine. In which case, it's his machine, and not ours. We have bigger problems than RNs.

We tend to want more numbers than fault-tolerant reliability suggests because we want to make it harder for the attacker. E.g., 6 would be a good target. Remember, we want maximum uncorrelation. Adding correlated collectors doesn't improve the numbers.

Because we have redundancy, on a large scale, we are not that fussed about the quality of each Collector. Better to add another collector than improve the quality of one of them by 10%. This is an important benefit of redundancy: we don't have to be paranoid about the quality of this code.

Mixer. Because we want the best and simplest result delivered to the caller, we have to take the output of all those above Collectors, mix them together, and deliver downstream. The Mixer is the trickiest part of it all. Here, you make or break. Here, you need to be paranoid. Careful. Seek more review.
The Mixer has to provide some seed numbers of say 128-512 bits to the Expander (see below for rationale). It has to provide this on demand, quickly, without waiting around.

There appear to be two favourite designs here: Push or Pull. In Push, the Collectors send their data directly into the Mixer, forcing it to mix it in as it's pushed in. In contrast, a Pull design will have the Mixer asking the Collectors to provide what they have right now. This in short suggests that in a Push design the Mixer has to have a cache, while in Pull mode, the Collectors might be well served in having caches within themselves.

Push or Mixer-Cache designs are probably more popular. See Yarrow and Fortuna as perhaps the best documented efforts. We wrote our recent Trident effort (AdazPRING) using Pull. The benefits include: a simplified API, as it is direct pull all the way through; no cache or thread in the Mixer; and, as the Collectors better understand their own flow, they better understand the need for caching and threading.

Expander. Out of the Mixer comes some nice RNs, but not a lot. That's because good collectors are typically not firehoses but rather dribbles, and the Mixer can't improve on that: deterministic mixing cannot create entropy. The caller often wants a lot of RNs and doesn't want to wait around.

To solve the mismatch between the Mixer output and the caller's needs, we create an expansion function or Expander. This function is pretty simple: (a) it takes a small seed and (b) turns that into a hugely long stream. It could be called the Firehose...

Recalling our truth above of (c) CSRNs being the goal, not entropy, we now have a really easy solution to this problem: use a cryptographic stream cipher. This black box takes a small seed (a-check!) and provides a near-infinite series of bytes (b-check!) that are cryptographically secure (c-check!).
We don't care about the plaintext, but by the security claims behind the cipher, the stream is cryptographically unpredictable without access to the seed. Super easy: any decent, modern, highly secure stream cipher is probably good for this application. Our current favourite is ChaCha20, but any of the NESSIE set would be fine.

In summary, the Expander is simply this: when the application asks for a PRNG, we ask the Mixer for a seed, initialise a stream cipher with the seed, and return it back to the user. The caller sucks on the output of the stream cipher until she's had her fill!

Subtleties. When a system first starts up, there is often a shortage of easy entropy to collect. This can lead to catastrophic results if your app decides that it needs to generate high-value keys as soon as it starts up. This is a real problem -- scans of keys on the net have found significant numbers that are the same, which is generally traced to the restart problem. To solve this, either change the app (hard) ... or store some entropy for next time. How you do this is beyond scope.

Then, assuming the above, the problem is that your attacker can do a halt, read off your RNG's state in some fashion, and then use it for nefarious purposes. This is especially a problem with VMs. We therefore set the goal that the current state of the RNG cannot be rolled forward nor backwards to predict prior or future uses. To deal with this, a good RNG will typically:

- stir fresh entropy into its cache(s) even if not required by the callers. This can be done (e.g.) by feeding one's own Expander's output in, or by setting a timer to poll the Collectors.
- use hash whiteners between elements. Typically, a SHA digest or similar will be used to protect the state of a caching element as it passes its input to the next stage.
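The Trident pattern above can be sketched minimally in code. This is my own illustration, not AdazPRING: the collectors are toy stand-ins, the Mixer uses a Pull design with a SHA-256 whitener, and a hash in counter mode stands in for a real stream cipher such as ChaCha20, purely to keep the sketch dependency-free. A real implementation needs exactly the careful review the text insists on.

```python
import hashlib
import os
import time
from typing import Callable, List

# Collectors: each pulls a small, hopefully-uncorrelated sample on demand.
# These are illustrative stand-ins; real collectors need per-platform care.
def platform_collector() -> bytes:
    return os.urandom(32)            # Rule #1: the platform RNG as one input

def clock_collector() -> bytes:
    return time.perf_counter_ns().to_bytes(8, "big")

def pid_collector() -> bytes:
    return os.getpid().to_bytes(4, "big")

def mix(collectors: List[Callable[[], bytes]]) -> bytes:
    """Pull design: the Mixer asks each Collector for what it has right
    now, and hashes the contributions into a fixed-size seed (the hash
    acting as a whitener)."""
    h = hashlib.sha256()
    for collect in collectors:
        h.update(collect())
    return h.digest()                # 256-bit seed for the Expander

class Expander:
    """Turns a small seed into a long stream of CSRNs. SHA-256 in counter
    mode stands in here for a stream cipher like ChaCha20."""
    def __init__(self, seed: bytes):
        self._seed = seed
        self._counter = 0
        self._buf = b""

    def read(self, n: int) -> bytes:
        while len(self._buf) < n:
            block = hashlib.sha256(
                self._seed + self._counter.to_bytes(8, "big")).digest()
            self._buf += block
            self._counter += 1
        out, self._buf = self._buf[:n], self._buf[n:]
        return out

# When the application asks for a PRNG: mix the collectors into a seed,
# initialise an expander, and let the caller suck on it.
rng = Expander(mix([platform_collector, clock_collector, pid_collector]))
print(rng.read(16).hex())
```

Note the design choice the text argues for: the platform's RNG is itself one of the Collectors, so this construction cannot easily do worse than Rule #1.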
As a technical design argument, the only objective way to show that your design is at least as good as or better than the platform-provided RNG is the following: very careful review and testing of the software and design, and especially the Mixer; and including the platform's RNG as a Collector.

Business Justifications. As you can see, doing RNGs is hard! Rule #1 -- use what the platform provides. You shouldn't be doing this. About the only rationales for doing your own RNG are the following:

1. Your application has something to do with money or journalism or anti-government protest or is a CVP. By money, we mean Bitcoin or other forms of hard digital cash, not online banking. The most common CVP or centralised vulnerability party (aka TTP or trusted third party) is the Certification Authority.
2. Your operating platform is likely to be attacked by a persistent and aggressive attacker. This might be true if the platform is one of the following: any big American or government controlled software, Microsoft Windows, Java (code, not applets), any mobile phone OS, COTS routers/firewalls, virtual machines (VMs).
3. You write your own application software, your own libraries *and* your own crypto!
4. You can show objectively that you can do a better job.

Note that it is still a hard test: you want ALL of those to be true before you start mucking around in this chaotic area.

That all said, good luck! Comments to the normal place, please, and Ed's note: this will improve in time....

Who invented the shared repository idea: Bitcoin, Boyle, and history

Wed, 01/22/2014 - 11:34
I had previously claimed that Todd Boyle had invented the idea of a shared transaction repository (or STR):

"BitCoin achieves the issuer part by creating a distributed and published database over clients that conspire to record the transactions reliably. The idea of publishing the repository to make it honest was initially explored in Todd Boyle's netledger design."

It was a point of some discord between us, and almost brought us to academic blows, but with the advent of Bitcoin and its published, out-there, in-your-face ledger, now aka the blockchain, Todd's ideas have been cast in a new light. This is a historical curiosity, and as I was challenged on this question by Luuk, a student of this history, I finally got around to researching it.

Now, sadly, Todd has left the net scene for other things. But the wayback machine preserves his writings (GLT-GLR, STR and death to CDEA), and I found the following snippet concerning the GLT or General-Ledger-for-Transactions, his idea of a webserver that handled transactions for the world:

Triple entry accounting is this: You form a transaction in your [General-Ledger-for-Transactions]. Every GLT transaction requires naming an external party. ... [which names a] real customer or supplier ID which is publicly agreed, just as domain names or email addresses are part of public namespaces.

When you POST, the entry is stored in your internal [GLT] just like in the past. But it is also submitted to the triple-entry table in whatever [Shared-Transaction-Repository] system you choose. Perhaps your own STR server, such as the STR module of your GL. Or perhaps it is a big STR server out at Exodus or your ISP or a BSP. The same information you stored in your GLT entry suffices to complete the shared entry in the STR, and your private Stub. ...

3. the GLT is something that is almost becoming a community asset.
You just cannot get the kind of integrated economy we need, without some real consensus among practitioners, to move certain parts of the transaction to a shared place. I am not saying public; I am saying shared. The amount, date and description of a deal are inherently shared between two parties and should be stored visible to those two parties alone, i.e. either protected by private system permissions or encrypted visible to those two parties alone.

For me, these paragraphs dating back to 2003 stake a tiny claim. I certainly don't claim the idea because I remain horrified at the privacy implications of a published general ledger, as expressed by Bitcoin, but that's something that the market has decided it's not so fussed about.

What is interesting is that, rarely amongst contemporary writings, Marc Andreessen came out and said:

... Bitcoin at its most fundamental level is a breakthrough in computer science – one that builds on 20 years of research into cryptographic currency, and 40 years of research in cryptography, by thousands of researchers around the world.

Having been someone who started working in cryptographic currency in 1995, I'm very aware of the way this history unfolded. Satoshi Nakamoto stands on the shoulders of giants; his design is the very clever assembling of components that were tried beforehand, and found wanting for various reasons. The notion of a public and/or shared ledger is one of those components employed in Bitcoin, and for that, I think Todd deserves a small byline in history.

Todd Boyle! We who died in the entrepreneurial pursuit of digital currency, we salute you!...
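Boyle's GLT/STR flow can be sketched in a few lines (the names and classes are my own illustration, not Boyle's code): a POST stores the entry in the poster's internal ledger as before, and also submits it to a shared repository where it is visible to the two named parties alone.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entry:
    payer: str
    payee: str
    amount: int
    description: str

@dataclass
class SharedRepository:
    """The STR: entries are shared, not public -- each entry is visible
    only to the two parties named on it."""
    _entries: List[Entry] = field(default_factory=list)

    def submit(self, entry: Entry) -> None:
        self._entries.append(entry)

    def visible_to(self, party: str) -> List[Entry]:
        return [e for e in self._entries if party in (e.payer, e.payee)]

@dataclass
class GLT:
    """A party's internal General-Ledger-for-Transactions."""
    owner: str
    str_server: SharedRepository
    ledger: List[Entry] = field(default_factory=list)

    def post(self, entry: Entry) -> None:
        self.ledger.append(entry)      # stored internally, just like before...
        self.str_server.submit(entry)  # ...and submitted to the shared STR

str_server = SharedRepository()
alice = GLT("alice", str_server)
alice.post(Entry("alice", "bob", 100, "consulting"))

# Both named parties see the same shared entry; a third party does not.
print(len(str_server.visible_to("bob")))    # 1
print(len(str_server.visible_to("carol")))  # 0
```

Shared, not public: swap the `visible_to` permission check for a published table and you have, in essence, the blockchain's trade-off.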

Digital Currencies get their mojo back: the Ripple protocol

Mon, 01/20/2014 - 00:31
I was pointed to Ripple and found it was actually a protocol (I thought it was a business; that's the trap with slick marketing). Worth a quick look. To my surprise, it was actually quite neat.

However, tricks and traps abound, so this is a list of criticisms. I am hereby going to trash certain features of the protocol, but I'm trying to do it in the spirit of: please! Fix these things before it is too late. Been there, got the back-ache from the t-shirt made of chain mail. The cross you are building for yourself will be yours forever!

Ripple's low level protocol layout is about what Gary Howland's SOX1 tried to look like, with more bells and whistles. Ripple is a protocol that tries to do the best of today's ideas that are around (with a nod towards Bitcoin), and this is one of its failings: it tries to stuff *everything* into it. Big mistake. Let's look at this with some choice elements....

The Shamir-Grigg-Gutmann challenge -- DJB's counterexamples

Sun, 01/19/2014 - 13:19
Last month, I wrote to explain that these challenges by Dan Bernstein:

2011 Grigg–Gutmann: In the past 15 years “no one ever lost money to an attack on a properly designed cryptosystem (meaning one that didn’t use homebrew crypto or toy keys) in the Internet or commercial worlds”.

2002 Shamir: “Cryptography is usually bypassed. I am not aware of any major world-class security system employing cryptography in which the hackers penetrated the system by actually going through the cryptanalysis.”

could be simply reduced to: "Show us the money!"

Perhaps uniquely, Dan Bernstein took umbrage and went looking for the money. He found two potentials. Out of order, let's look at potential "in the money" option #2: WEP....