
An Exploration of Identity

This piece by Ian Grigg was originally published in March this year to the R3 membership and we are excited to share this publicly.  His earlier piece, Identity is an Edge Protocol, was published on our blog earlier this month. 

Three Motivators for Identity

It seems as if there are three general motivators for the commercial notion of identity: compliance, security and customer service.  Knowing that a person is who they claim to be is a compliance issue.  Knowing that a person authorised the use of his own funds is a security issue.  Knowing that a person was able to move her money easily, without obstructions, is a customer service issue.  Sometimes these three motivators are compatible and sometimes they are in conflict.

FITS – the Financial Identity Trilemma Syndrome

If you’re suffering from FITS, it might be because of compliance.  As discussed at length in the R3 Architecture Working Group whitepaper on identity, McKinsey noted that costs of compliance are growing at 20% year on year.  Meanwhile, the compliance issue has boxed in and dumbed down the security, and reduced the quality of the customer service.  This is not just unpleasant for customers, it’s a danger sign – if customers desert because of bad service, or banks shed customers to reduce the risk of fines, then banks shrink.  In the current environment of delicate balance sheets, rising compliance costs and a recessionary economy, reduction in bank customer base is a life-threatening issue.

If we can redress the balance - put customer service back up on the list - without compromising the other two, then not only might it be worth the customers’ while to invest in a new system, it might actually save a few banks.

How might we do that?  From the perspective of our current understanding of identity, it’s somewhat clear that today’s systems are as much the problem as they are the solution.  It then behoves us to re-examine the process from scratch.

Let’s start from a clean sheet of paper and ask – how do we make decisions?

Context means everything

We humans can make quick assessments, unconscious ones, from context.

If, for example, all of our communications with a group of people are in a closed network of friends or known coworkers, we can make assumptions about what we can share with that group, which we would not make for a public forum. Or, outside work, if we found ourselves in our local bar, we might assume that all the people in the bar are likely as honest and reliable as the average person in our city – in one city we might leave our phone and keys on the table while visiting the conveniences, but not in another.

 In the above examples, when deciding whether to trust me to not steal your phone, you don’t need to know my identity at all. What you need to know is, where am I?  Which set of norms do I subscribe to?

Taking this across to the context of banks and transactions and so forth, it may well suffice to know that I’m a trader with a famous power-broker Broker A.  That might be all you need.  OK, it’s not quite all, you’d also want to know which desk I’m on, what my limits are, and what I’m authorised to trade.  But what you don’t need to know is my name.

 When we limit ourselves to corporate banking – context – then we’re interested in the two identities – the trader and the company, but what we’re really interested in is not the identities per se, but the relationships and interplay between the identities.  Let’s think about this as an edge protocol (see Figure 3 above from the earlier piece on Edge Protocols) and apply it to trading: everyone says Broker A is a great broker, Broker A says that I am a trader there, and that should be enough.  

If we do business with Broker A at all, we’re basically riding on that corporation’s trust level, and shouldn’t need to worry that much about which of the many people in the company we’re talking to - they all should be good, else why deal with that company?

 If it was just about edges, then we could just collect them up, analyse them and we’d be liquid.  Connect an edge Hoover to a relationship AI and we’d be done.  Utility, outsourcing, profits, here we come!

 But there are a few impediments to this process.  Let’s look at three, being risk, reliability and liability.

The Facts of Others

Ideally, we’d like to take one bank’s decision over a person and copy that, just like data, across to other banks.  But risk analysis precludes that -- one bank’s risk is not the same as another bank’s risk.  We therefore need to stop short of outsourcing decisions, and in today’s world, we are limited to outsourcing the gathering of facts – relationships or edges.

 The Hunt for Facts

 Let’s put the spotlight on the fact.  A typical fact would be that Bob signs over this statement:

 “The passport held by Alice has her name and a good photo of her” /Bob/

 This is evidence, to some value, that Alice is Alice, and if relying party Carol needs that evidence, she might be happy to be able to rely on Bob’s statement.
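To make the shape of such a fact concrete, here is a toy sketch in Python. All names and keys are invented for illustration, and an HMAC stands in for the digital signature a real system would use with asymmetric keys:

```python
import hmac
import hashlib
from dataclasses import dataclass

# A toy "fact": a claim, the attester who made it, and a signature over both.
# HMAC is used here only as a stand-in for a real digital signature.

@dataclass(frozen=True)
class Fact:
    claim: str        # what is being asserted
    attester: str     # who is asserting it
    signature: bytes

def sign_fact(claim: str, attester: str, key: bytes) -> Fact:
    message = f"{attester}:{claim}".encode()
    return Fact(claim, attester, hmac.new(key, message, hashlib.sha256).digest())

def verify_fact(fact: Fact, key: bytes) -> bool:
    message = f"{fact.attester}:{fact.claim}".encode()
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(fact.signature, expected)

bobs_key = b"bob-secret-key"
fact = sign_fact("The passport held by Alice has her name and a good photo of her",
                 "Bob", bobs_key)
assert verify_fact(fact, bobs_key)           # Carol can check Bob really said this
assert not verify_fact(fact, b"wrong-key")   # a tampered or forged fact fails
```

The signature lets Carol verify that Bob made the statement; it says nothing, yet, about whether the statement is true.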

Or she might not.  This could go wrong in many ways, but let’s say we’ve filtered out the non-useful facts and what we are left with is the golden nuggets of trade.  We’re still left with:

 The Nouns…

What’s a passport, anyway?

What’s in a name?  A rose by any other…

What’s a good photo?  Fierce, pretty or bland?

Who is Bob?

Why does he care?

What incentive does he have to tell the truth?

What incentive does he have to do a good job?

Is the fact itself reliable?

Was it reliable then, is it reliable now, will it be reliable tomorrow?

 Chief amongst our concerns is that the edges as collected might be unreliable.  I.e., what happens if my name is not Alice?   Or if I work next door to Broker A and just snuck in to use their terminals one night?  Or, any one of a thousand scenarios - the conclusion here is that while the fact might still be a fact, in that Bob said those words, it might not be an accurate or reliable representation of the real world.  If such happens, then the golden analysis above turns to fool’s gold.

 The Source of Our Unreliability

 The reasons that any given fact might be unreliable are legion – I once wrote down 99 of them.  We could dutifully promise to take more care, but this hasn’t worked well in the past.  We need better than marketing and empty campaign promises to make this work.  Luckily we’ve got some more clarity on why a fact isn’t reliable and how to fix it.  Four techniques will create a foundation for the facts:

1/ Skin in the game – every agent needs to be not only positively incentivised to work on this relationship building, but also actively corrected when things go wrong.  There needs to be what the engineers call a negative feedback loop – one with correcting capability.

 2/ Quality control – if the above correction is dramatic, we need a way to show that the agent has done a good job.  Statements are in words and they can be both wide of the mark and misinterpreted.  To address this, we can set up-front minimum quality standards that are clear and applicable to both the makers of facts and to the users of the facts, or “relying parties,” and operate them diligently.

 3/ Redundancy in sources – to get a single complete accurate fact is very expensive, but to get many small facts converging on the same approximate large truth is cheap.

 4/ Authorities in sources – some facts are ‘owned’ by certain parties, and if we can get them on board in a secure context then we can improve the quality, at least in their facts.

 These should be familiar, and will create a base of reliability but we need more.
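Technique 3 in particular lends itself to a simple illustration.  A minimal sketch, where the per-source weight of 0.3 and all names are invented purely for illustration, not a proposed standard:

```python
# Toy sketch of "redundancy in sources": many small, cheap attestations
# converging on the same claim. Each independent source adds diminishing
# confidence toward, but never reaching, certainty.

def confidence(attestations, claim, weight_per_source=0.3):
    # Only distinct sources count; repeats from one source add nothing.
    sources = {who for who, what in attestations if what == claim}
    remaining_doubt = 1.0
    for _ in sources:
        remaining_doubt *= (1 - weight_per_source)
    return 1 - remaining_doubt

attestations = [
    ("Bob",   "Alice holds passport X"),
    ("Carol", "Alice holds passport X"),
    ("Dave",  "Alice holds passport X"),
    ("Bob",   "Alice holds passport X"),   # duplicate source, ignored
]

print(round(confidence(attestations, "Alice holds passport X"), 3))  # 0.657
```

Three independent weak sources already give usable confidence; a single complete, accurate fact would cost far more to produce.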

 Liability of the provider sets the quality of the facts

 To be useful to Carol, we need the facts to be not only reliable, but exportable.  That is, suitable for other people to rely on the facts in their own risk assessment.  We can do this in the first two basic ways as listed above:

Firstly, by setting liabilities for poor work, especially (but not only) for failure to follow the standard set out in the second way.

 Secondly, by setting up front and operating to a minimum quality standard that is clear and applicable to both the makers of facts, and to the “relying parties” of the facts.

Imposing liabilities for poor work needs to be done carefully because there are two general possibilities, being

–       the work and care that is done in creating the fact, which has one value, and

–       the damage that can result from relying on the fact, which damage has another value.

These two values are sometimes wildly different.

 In general, it is harder to assess liabilities to the damage that can result in advance because it is implausible to predict to what use the facts are put.  This is what the cryptographers call the problem of Grandma’s house:  if I sign that Mallory is a good guy, and Grandma relies on my statement, but the result is that Grandma loses her house, who’s to blame?   One school has it that she fairly relied on me, so I have to pay her a house.  Another school has it that because I only reviewed Mallory’s passport, it is a process or administrative implication and no more, and back to passport-review school I go.

 Which is it?

To cut the Gordian knot, we typically place such problems before a resolver of disputes, a person who can decide which of the interpretations applies.  This accepts that neither I nor Grandma can be certain how a given dispute will pan out, but we know that it will resolve one way or another.  So, and this uncertainty is the crux of the argument, I will probably do a better job than merited because my liabilities may go sky-high, and Grandma won’t put her house to the gamble of one claim, because her assets may go to rock bottom!

 But, if my liabilities could go sky-high, why would I ever get involved?  It is for this reason that I need to be protected by a standard approach – one that is well thought out, agreed, documented and auditable.  Especially, that last step is what will convince an Arbitrator that I have done the job I was asked to do.

 A good liabilities framework then is initially limited to the correctness of the facts.  However, in order to get any traction on relying on those facts, we at least need to improve the quality of the facts such that they are reliable – they meet a minimum bar to allow others to rely on them.

 Hence, while the liability solution is initially necessary to address the liabilities incurred when one person relies on a fact produced by another, it has another side-effect - it improves the quality by encouraging the provider of the fact to take especial care up front, as if they are liable to another person not as yet identified for risks not as yet quantified.  For this reason, we need to protect the provider of the facts with a standard to follow, so that they are not on the hook for impossibly high liabilities from a simple process operation.

 The alternate route lacks accountability

 In the typical alternate approach, the provider of facts asserts zero liability, as that is the only business solution to unpredictable liabilities.  But, this is what we call a positive feedback loop, in which the provider gets paid for good and bad results, equally.  In a positive feedback loop, activity grows and grows until the machine destroys itself.  As there is no correction when the machine goes off the rails, a lack of liability also means a lack of accountability, and in that scenario, there is an unfortunate consequence:  the quality of the data shrinks to nothing.  In effect, the zero-liability solution causes a race to the bottom, and the provider prints unreliable statements without limit.

 We need to solve the liability issue not only because of the direct liabilities themselves but also because the system needs an appropriate level of quality, and to deliver that, the provider of fact needs an appropriate incentive to reach that level of quality.

 In short, we need a feedback mechanism that convinces the provider of facts it is worth taking real care as opposed to advertising care.

 An example of adverse liability consequences can be seen in the Certification Authority (CA) business.  The provider of facts, the CA, typically says two things about a corporation, being that R3 for example

–       holds the private key that signs the public key that is identified, and,

–       is the holder of a domain name, e.g., r3.com.

 These two facts are memorialised (if not explained) in the certificate.

 But the provider of facts, the CA, disclaims all liability.  So, in consequence of that disclaimer, if a bank were to do a Corda transaction that relied upon the certificate to (for hypothetical example) download the latest Corda release, and the bank got man-in-the-middle attacked by the Closet Internet Anarchists which injected a zero-day and then led to a Bangladeshi-style sweep of all the bank’s money via the Philippines….

 Then…who stands up?  Who carries the liability?

 Not the CA – its contract (buried deep) says "zero liability" and it means it – for various and many reasons we cannot take the CA to court.  The CA is entirely covered, no matter what goes wrong.  And, therefore, eventually, inevitably, by all the laws and history of governance and economics and erosion and evolution and all such scientific notions, the CA takes low levels of care to check the data being asserted into the certificate.  The CA has entered the race to the bottom, and we’re not the winners of this race.

 The CA can only have an appropriate level of quality if there is a feedback loop that acts to move the quality up or down as appropriate to the wider uses made by customers.  If the liability is dumbed down in some fashion - if some group of players is insulated from the effects of their actions - then the quality sinks down over time and the entire system becomes useless.  This then eventually results in a disaster such as fraud or hacking or phishing, which then triggers either a bureaucratic overdrive to add more and more pointless feel-good rules, or a regulatory kickback.  Or both.

 Closing the loop

 It isn’t about the bank, or Alice, or the regulator, or the CA or nature or desire or humanity.

 It’s about the system.  We need a feedback loop that controls the quality of information, one that the engineers call a negative feedback loop.

The nature of this data is that disasters are unpredictable - there is no way to add a dial ex ante that allows the setting of liability to some level.  We all take on some risk, each of us, individually and society-wide; what we need is a mechanism for controlling the risks when they blow out.  Hence, we need a dispute resolution mechanism that can examine the disaster after the fact, take on the question of wider liabilities within the context of a standard, and return a human answer when the data fails.

We need both Alice and Bob to not fall into the trap of moral hazard – hoping that some mysterious other covers them for all eventualities.  We need both of them to take some care, and be prepared to stand up when the care wasn’t sufficient.  We also need to wrap this up into an efficient package, such that they are incentivised to participate.

 If Alice and Bob and everyone else has reached consensus on participation, then the edges will grow and flow.  And with enough edge liquidity, the task of decisions over a node becomes tractable, even across the globe, across languages, across jurisdictions.

 Then we can get back to the business of creating profitable, shared trade.  Deals that are backed by a fabric of identity, built of the edges of human relationships.

Corda: Designed for Commerce, Engineered for Deployment

This post was originally posted here on Richard Gendal Brown's blog.

When we say Corda is designed specifically for enterprises, we mean it!

I’ve spent a lot of time with clients recently and it’s been thrilling to hear how so many of the unique design decisions we’ve made with Corda really resonate.

R3 has been building this new open-source distributed ledger platform in close collaboration with hundreds of senior technologists from across our global financial services membership. And that’s why Corda resonates with businesspeople, because it was designed from the ground up to solve a real business problem: helping firms automate and manage their dealings with each other with legal certainty and without duplication, error and unnecessary reconciliation. Applying the essential insight of blockchain intelligently to the world of commerce.

Corda: inspired by blockchain systems but built from the ground up with the needs of today’s businesses in mind.

But Corda also resonates with technologists in these firms.  This is because we designed Corda to be deployable and manageable in the complex reality of today’s IT landscape. This sounds mundane but turns out to be critical, as I’ll explain in this article.

Corda: the only DLT platform that has been designed to make your IT department smile!

The core reason that Corda appeals to IT departments is simple: we’ve designed it so they can understand it, deploy it and manage it without having to unnecessarily rethink everything about how they operate. For example, Corda runs on the Java platform, it uses existing enterprise database technology for its storage, and it uses regular message queue technology to move data around.

These details seem small, but they turn out to be absolutely crucial: they mean enterprise IT departments already know how to deploy and manage Corda! It means that firms who select Corda will be able to get their solutions live so much quicker than those who mistakenly choose a different blockchain fabric.

No other DLT platform is as standards-compliant, interoperable or designed from the ground up to be deployed successfully into enterprise IT departments. And we’re not just talking about finance, by the way. Corda is applicable to every industry where needless duplication of data and process is prevalent: it turns out that if you can make it in finance, you can make it anywhere…

But this also leads us to another key point that explains why Corda is gaining so much interest: to get DLT projects live, multiple firms will have to move as one.

They will have to collaborate.

Corda is the product of R3, the largest-scale such collaboration the financial world has ever seen and this need for collaboration is hard-wired into its design. We’ve already discussed how Corda reuses existing standards wherever possible – massively simplifying the steps each firm seeking to deploy it needs to go through. But these insights go deeper. For example, there is usually a need to manage complex interfirm negotiations prior to committing a transaction, something enabled by Corda’s unique “flows” feature and entirely missing from other platforms.

This need for collaboration is not restricted to large institutions themselves, of course. Getting complex DLT applications live requires partnership with implementation firms and software vendors.  Our obsession with collaboration is why Corda is so attractive to so many of these firms – our partners: they see that Corda is the right platform for business and in R3 they see a partner with collaboration in its DNA.

The reality is that there are actually very few fully open-source, credible, enterprise blockchain and DLT platforms, so when systems integrators respond to client requests for proposals, Corda is the one that many of them choose to bid.  This is not only because it is perfectly tailored for commerce but because it is the result of a genuine collaborative effort over which no one technology vendor, who may also be a competitor to them, has outsize influence: they can compete on a level playing field to serve their clients.

When you bring these strands together, it quickly becomes clear why Corda is appearing on everybody’s shortlist for projects right now.

Corda is the only enterprise DLT…

  • … designed from the ground-up to solve real business problems with privacy, scalability and legal certainty engineered in from day one.
  • … built to make the IT department smile
  • … with collaboration in its DNA: engineered to be deployed between firms in a practical and realistic way
  • … with a true ecosystem of partners competing to serve clients on a level playing field: no conflicts of interest, no fear of vendor lock-in.

As always, you can learn more about Corda at corda.net.  You can join our thriving community at slack.corda.net.  Our code is open-source and available at github.com/corda. And you can email us at partner@r3.com if you want to grow your business by building or deploying Corda solutions for your clients as one of our growing community of partners!

Identity is an Edge Protocol

This piece by Ian Grigg was originally published in January this year to the R3 membership and we are excited to share this piece publicly.  

Two tweets allowed me to formulate a vision as to why it is we're heading in a slightly different direction for identity in the future.  The first is this one:

Bingo: Identity is an edge protocol.

In order to understand this at a technological level, we have to go way back in time to a failed little invention called the web of trust, introduced in the early 1990s by PGP, the original email security program.  In this concept, we want others to send us encrypted email, but the others don't know our keys.

So we all sign over the keys of anyone we've met, thus creating a graph of interrelationships, or as they called it, a web of trust, which we can use to navigate from key to key.  The web worked, but the trust did not, in part because nobody said what the trust meant so people imposed various but incompatible versions of their own truth.

In the mid 1990s, a Certification Authority (CA) called Thawte melded the PGP concept to the CA concept by using community members called Notaries to do that 'meetup' and report back in a more refined fashion - to a standard that loosely said "I saw Bob's passport".  However, this process also didn't work in the long run, in part because the CA was bought out (and no longer had appetite for community) and in practical part because their mechanism wasn't auditable.

Yet!  The same mechanism was found to be auditable in CAcert - another community CA where I worked as auditor for a while.  Ill-fated again, as the barriers to be an 'acceptable' CA ramped up as we were watching, but we did in the process build an auditable community that self-verified.  Strongly, through many weak relationships.  The upshot of this was that we now know how to do a web of trust.

And out of this process came the observation that the centre (in this case CAcert) knew practically nothing about the person.  But it knew a lot about what people said about people.  Indeed, its entire valuable data set was less about what it knew about me and you, and more about what you said about me, what you and I said about others, what Alice says about Bob.  With enough of these relationships captured, we had an impregnable graph.

So when AA above said identity is an edge protocol, this crystalised in my mind a technical way of describing the new identity.  Which brings us to tweet #2:

OK, so for the non-technical folk apparently the words don't present the picture.  Hence, let me see if I can describe it in three pictures. Firstly, the word 'edge' just means the lines between the nodes, or vertices, in a graph of relationships.

Then, let's go back to the classical or IT method for thinking about identity.  We know Alice, we know Bob.  We have a HR department that says this.  We have CAs out there that will sell use certificates to say Alice is Alice.  We have states handing out identity cards that say this too, and corporate IT departments are built in this sense - let's on-board the node known as Alice, let's add permissioning to the node known as Bob, let's figure out whether the node known as Carol can trade with the node known as Bob.

Yet, this isn't how people think.  It also doesn't scale - work in the on-boarding department sometime and calculate the loss rate and the cost rate.  Blech!  Accounts and activity is shrinking around the world.  What crystalised then is that we - the entire IT, infosec and compliance world - have got it backwards.

Identity is an edge protocol, and not a nodal protocol.  What is valuable is not the node but the relationships that we can examine and record between any two given nodes.  It helps to think of the node - the person - as a blank circle, and then imagine in your mind's eye tracing the relationships between the circles.

When we've got that far, we might need to fall back to nodal thinking just for analysis sake.  But that's easy - imagine taking a subset of the relationships and painting them temporarily over a blank canvas.

You end up with very similar information as the old nodal method.  But this time it's scalable.  We haven't really got a limitation on how many relationships we collect and analyse, as long as we collect them and analyse them as dynamic, weak links that are independent when apart, and only create a picture for us when painted together.  But we've definitely got a complexity limit if we try and shove all the information into the node, and manage it as static data that reaches the one binary truth that you are you.
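As a concrete toy, here is a minimal sketch of the edge-first store and of painting a nodal view from a subset of edges.  All names and claims are invented for illustration:

```python
# Toy sketch: identity kept as edges (who says what about whom).
# A nodal view is derived only when needed, by projecting a subset
# of the edges onto one node.

edges = [
    ("Market",  "BrokerA", "is a reputable broker"),
    ("BrokerA", "Alice",   "is a trader on our FX desk"),
    ("BrokerA", "Alice",   "is authorised to trade up to 10M"),
    ("HR",      "Bob",     "is an employee in settlements"),
]

def nodal_view(edges, node):
    # "Paint" the relationships about one node over a blank canvas.
    return [(who, claim) for who, about, claim in edges if about == node]

print(nodal_view(edges, "Alice"))
# [('BrokerA', 'is a trader on our FX desk'),
#  ('BrokerA', 'is authorised to trade up to 10M')]
```

Adding or retiring an edge never requires rewriting the node; the nodal picture is always a disposable projection of whatever edges we currently trust.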

And that's where the problem lies - we're too focused on the identity thing being the one person whereas actually, identity is a shared social context, inside us all, over each of us.  Ergo:

Identity is an edge protocol!

 

R3 Report: CAD-coin versus Fedcoin

As mentioned in a post last month, central bank digital currency (CBDC) has become an increasingly discussed topic and there are now at least a dozen central banks examining, and in some cases publicly discussing, the implications of creating or backing one.  

At R3, we are working with the Bank of Canada, the Monetary Authority of Singapore, and the Hong Kong Monetary Authority on CBDC-related projects, along with other efforts yet to be announced.

There are roughly two different models for CBDC that are commonly conflated. The first is "The CAD-coin Model" (note: it is not an actual coin), where a central bank issues a currency against some of its assets.  The second model is popularly known as "Fedcoin", where a central bank issues a new type of currency that becomes a liability on its balance sheet.  

Last month we published a paper covering the Fedcoin model.  Today we are publishing the R3 "CAD-coin" paper by Rod Garratt, who is a member of our academic advisory board and worked on Project Jasper (the name for "CAD-coin").

The paper was originally released to our members in November of last year. 

Here is a link to Garratt's paper.  CoinDesk got a first look.

The Weekend Read: Mar 26

When they say 'Blockchain' just close your eyes and think 'DLT DLT DLT'...

First up, some Corda love. This Australian Financial Review article (paywalled) highlights how our bank partner CBA used Corda in collaboration with their customer, Colonial First State, and a delivery partner, Hewlett Packard Enterprise, to show how it could help solve a key business problem of capital costs:

Colonial First State is re-engineering the process of buying units in the $2.2 trillion market for managed funds in a move it says will "dramatically" reduce the amount of capital banks will have to hold against wealth operations. A recent experiment with Commonwealth Bank of Australia's emerging technology team and Hewlett Packard Enterprise using the R3 consortium's Corda 'distributed ledger' allowed Colonial to eliminate arduous paper application process for managed funds and the three-day wait for the delivery of units.
Corda, which is being developed by a consortium of global banks, can remove counter-party risk for intermediaries like CFS by allowing assets to be exchanged and transactions settled instantaneously. It also provides transparency on what each counter-party holds across geographies. By removing the risk of the issuer defaulting or the investor failing to settle, banks will be able to reduce the amount of regulatory capital required to provide cover for those risks.
"If [a blockchain] was adopted locally, regionally or globally, the capital the industry would need to hold could reduce dramatically," CBA's group executive for wealth management, Annabel Spring, told the APAC blockchain conference in Sydney last week.
CBA is confident about Corda's security protocols, which have been designed with input by dozens of banks around the globe. In the CFS trial, the units were transferred cryptographically with keys in the form of PIN numbers required to access the system through mobile apps.

We also got a nice shout out by our friend Michael Dowling of IBM with this in-depth post on the evolution of Corda, along with some reference to the recent blockchain-not-blockchain kerfuffle. And since we have been, ahem, a few weeks between posts, here are some 'catch up' blockchain-y links:

And finally, a big congrats to ATB Financial as our newest Canadian member!

RegTech (cont.)

R3 was happy to announce another member recently, as we welcomed the State of Illinois to our growing list of Regulator Members. Read about this here and here, along with their overall plans to leverage DLT. Our CEO David Rutter and R3 world traveller Isabelle Corbett followed up with this conversation with CoinDesk that lays out some of the concepts behind the R3 'RegNet'.

The efforts and interest of regulators extends across the US, both at the State (see Delaware is Drafting Law That Would Recognize Blockchain Records) and Federal level; Acting (and now Nominated) Chairman of the CFTC J. Christopher Giancarlo recently gave a speech on his overall agenda. Of note was the section dedicated to FinTech, both due to its substance and to the fact that the Chairman gave the topic proper airtime even with his quite packed agenda. Full text is here, quick pull quote below:

[M]arket regulation by the CFTC has not kept pace. In too many ways, it remains an analog regulator of an increasingly digital marketplace, curtailing its effectiveness in overseeing the safety and soundness of markets. But it doesn’t have to be this way, especially in an industry that is synonymous with innovation. The CFTC must be a leader in adopting the “do no harm” approach to financial technology similar to the US approach to the early Internet. We must cultivate a regulatory culture of forward thinking.

Couple the above with this post from ISDA on the 'past and future' of ISDA agreements, particularly on the role of Master Agreements in the world of smart contracts. As a reminder, our third Smart Contract Template Summit (suggestions for a new name welcome!) will be coming up this June.

MAS continues to push an aggressive fintech agenda of their own. A few weeks back, MAS announced the successful completion of the interbank payments projects that they executed with R3 and a collection of local banks. See here and here. And this past week they announced more details on their plan to roll out a national KYC utility.

Another organization at the intersection of regulation, infrastructure and fintech is CLS. This IBTimes article gives an interesting look at some of their thinking. The article also lays out the differences between ledger approaches, namely that of IBM's Fabric vs R3's Corda.

Get the Papers Get the Papers

Our Research team and amazing collaborators have been busy recently, with three new papers:

  1. R3's Survey of Confidentiality and Privacy Techniques, with an accompanying piece in American Banker
  2. R3's Report on Fedcoin with JP Koning
  3. R3's Bridging the Gap Between Investment Banking Architecture and Distributed Ledgers by my good friend Martin Walker

Others have been busy as well. BIS recently released The Quest for Speed in Payments (summary article here), while G20 Insights released The G20 Countries Should Engage with Blockchain Technologies to Build an Inclusive, Transparent, and Accountable Digital Economy for All.

R3 Report on Fedcoin

Central bank digital currency (CBDC) has become an increasingly discussed topic over the past few years, with multiple central banks examining and in some cases publicly discussing the implications of creating or backing one.  At R3, we are working with both the Bank of Canada ("CAD-coin") and the Monetary Authority of Singapore on CBDC-related projects, along with other efforts yet to be announced.

There are roughly two different models for CBDC that are commonly conflated. The first is "The CAD-coin Model" (note: it is not an actual coin), where a central bank issues a currency against some of its assets.  The second model is popularly known as "Fedcoin", where a central bank issues a new type of currency that becomes a liability on its balance sheet.  This Fedcoin model is the topic covered by the new R3 "Fedcoin" paper by JP Koning, which we have released publicly today (see below).
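For readers who like to see the distinction in concrete terms, here is a toy sketch of how the two models differ on the central bank's balance sheet. The class, method names and figures are entirely hypothetical, our own illustration rather than anything drawn from the papers:

```python
# Toy illustration of the two commonly conflated CBDC models.
# Everything here is hypothetical and for illustration only.

class CentralBank:
    def __init__(self, pledged_collateral):
        # Assets pledged by participating institutions
        self.pledged_collateral = pledged_collateral
        self.liabilities = {"fedcoin": 0}

    def issue_cadcoin_style(self, amount):
        """CAD-coin model: digital tokens issued *against* existing
        assets; no new liability type appears on the balance sheet."""
        if amount > self.pledged_collateral:
            raise ValueError("cannot issue beyond pledged collateral")
        return {"token": "CAD-coin", "backing": amount}

    def issue_fedcoin_style(self, amount):
        """Fedcoin model: a brand-new liability on the central bank's
        balance sheet, analogous to physical cash."""
        self.liabilities["fedcoin"] += amount
        return {"token": "Fedcoin", "outstanding": self.liabilities["fedcoin"]}

cb = CentralBank(pledged_collateral=100)
print(cb.issue_cadcoin_style(80))  # backed by pledged assets
print(cb.issue_fedcoin_style(50))  # new central bank liability
```

The key difference the sketch captures: in the first model issuance is constrained by assets already held, while in the second issuance expands the liability side of the balance sheet, just as printing banknotes does.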

Koning "coined" the term Fedcoin in 2014.  While Fedcoin and "CBDC" have been used interchangeably, the two are distinct, especially as Fedcoin shares some of the characteristics of physical cash today, namely pseudonymity and potential anonymity.

The R3 Research team enjoyed collaborating with JP Koning on this paper, which was released to our members in November of last year. We would also like to thank Rod Garratt, a member of the R3 Academic Advisory Board, for his help in putting this paper together.

Below is a link to Koning's paper as well as a companion piece written by Antony Lewis, who is our APAC Director of Research.

American Banker got a first look.


Fedcoin: A Central Bank-issued Cryptocurrency

JP Koning

Download


Fedcoin: A Central Bank-issued Cryptocurrency - R3 Perspective

Antony Lewis

Download


Survey of Confidentiality and Privacy Preserving Technologies for Blockchains

Over the past 18 months, one of the topics we have discussed and brainstormed with our members has been privacy and confidentiality: specifically, who is able to see transactions on a chain or ledger.

While a number of proposals have been pitched by various groups on social media and internet forums, there has been no long-form, systematic survey of the privacy and confidentiality toolkit.
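To make the "who can see what" question concrete, here is a minimal sketch, our own illustration rather than anything taken from the survey, contrasting a broadcast ledger, where every node receives every transaction, with a need-to-know ledger, where only the parties to a transaction store it:

```python
# Hypothetical illustration of two ledger visibility models.

class BroadcastLedger:
    """Every node receives every transaction (Bitcoin-style)."""
    def __init__(self, nodes):
        self.views = {n: [] for n in nodes}

    def submit(self, tx):
        for view in self.views.values():
            view.append(tx)

class NeedToKnowLedger:
    """Only the named parties to a transaction store it."""
    def __init__(self, nodes):
        self.views = {n: [] for n in nodes}

    def submit(self, tx):
        for party in tx["parties"]:
            self.views[party].append(tx)

tx = {"parties": ["alice", "bob"], "amount": 10}
broadcast = BroadcastLedger(["alice", "bob", "carol"])
broadcast.submit(tx)
need_to_know = NeedToKnowLedger(["alice", "bob", "carol"])
need_to_know.submit(tx)
# carol sees the transaction on the broadcast ledger,
# but not on the need-to-know ledger.
```

The techniques the survey covers (zero-knowledge proofs, mixing, and so on) are largely about getting the privacy properties of the second model while preserving the verifiability of the first.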

Last fall we collaborated with the Zcash team, led by Zooko Wilcox-O'Hearn and Jack Gavigan, as well as Danny Yang, the founder of Blockseer, to produce a survey of this burgeoning space.  We also asked Mike Ward from our APAC team to put together a short viewpoint on their output.

Below are the finished documents, originally sent to our members this past November.  American Banker got a first look at the survey and put together an article on the paper as well.


Survey of Confidentiality and Privacy Preserving Technologies for Blockchains

Danny Yang, Jack Gavigan, Zooko Wilcox-O'Hearn, R3 Research

Download


Summary of the Confidentiality and Privacy Report

Mike Ward

Download


The Weekend Read: Mar 5

Enterprise Ethereum Alliance

The Enterprise Ethereum Alliance formally kicked off earlier this week with an all-day meetup at JP Morgan's Brooklyn offices. The group consists of Ethereum-focused startups and large companies, with a focus on developing standards for private Ethereum deployments. The reaction by the press was curious, as many picked up a theme of Microsoft and IBM waging a proxy war via EEA (Microsoft) and the Hyperledger project (IBM). For example, American Banker noted "the IBM-led Linux Foundation Hyperledger Project" and their use of "a mainframe in a cloud" vs Microsoft as "more focused on openness — letting organizations choose the combinations of technology that work best for them." Coindesk followed up with an article on the decentralized nature of the new group:

Still, while the board is also designed to give members a sense of accountability, more experimental governance models are also being considered. "Everything starts as an idea, with one person," said Lubin. "That happened. But Ethereum is moving towards decentralization."

The press loves a simple narrative (see below for a fine example), but both groups are very diverse and seek to move the whole industry forward, as we ALL have a lot of work to do to make this technology real for business users. One theme that did persist at Tuesday's EEA launch was the desire to keep aligned, and in some minds perhaps eventually merged, with the public Ethereum chain (not to be confused with Ethereum Classic, or Ethereum Classic Classic!). This and Bitcoin's recent price surge are most likely behind the recent ramp-up in the price of ether. For an older, somewhat related article on public Ethereum, I recommend this Aeon article.

Et Tu, Blockchain?

The only good thing to come out of the R3 non-story was this new Tim Swanson meme...


Over the last two weeks, a blockchain butterfly flapped its wings, and the next thing we knew, R3 was caught in the oddest of fake-news hurricanes. In short: a tweeted pic from a Corda meetup was coupled with the quote "GAME OVER" (perhaps an early tribute to the great Bill Paxton?) and the next thing we knew, there were all sorts of nonsense articles and blog posts. For a rundown, you can read Chris Skinner's take (and yes, his is an intentional fake news headline...) and this Bank Innovation piece (Dave Birch: I would love to meet your tailor). In shorter: it was all complete BS. Which was disappointing, but not surprising. I just finished the Michael Lewis book The Undoing Project, and the one thing the book taught me was that we are all "confirmation bias" machines. Or as The New Yorker put it: Why Facts Don't Change Our Minds.

As David Rutter pointed out in his blog post last week:

Humans are creatures of habit. As time went on, the term blockchain came to be associated with any type of distributed ledger, even as the technology matured and evolved to meet the needs of different groups of users. This isn’t an issue unique to our space. The marketing team at Canon must have spent countless hours working out how to stop people referring to all copy machines as Xeroxes.

We can see this in two other thoughtful articles that were recently published. Our very own Antony Lewis has a great take on distributed ledger technology for post trade published by Tabb Group...yet the title chosen by the editors was "Applying Blockchain to Post-Trade Derivatives Processing." Another from CFO magazine includes yours truly and does a great job in explaining why CFOs should pay attention to distributed ledger technology...which they term "Betting on Blockchain."

Lost in the noise was the release of an 80 page report by the Aite Group. This Coindesk review of the report gives a flavor of the market landscape that Aite explored, including this key quote:

"A growing trend, adopted by five chaintech platforms and spearheaded by R3," writes Paz, "calls for consensus taking place at the transaction level, requiring the consent of at least two counterparty nodes."
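The transaction-level consensus that Paz describes can be sketched in a few lines: a transaction becomes final only once every named counterparty has signed it, with no network-wide vote required. This is our own illustrative Python, not Corda's actual API:

```python
# Hypothetical sketch of consensus at the transaction level:
# only the counterparties to a transaction need to consent.

class Transaction:
    def __init__(self, counterparties):
        self.counterparties = set(counterparties)
        self.signatures = set()

    def sign(self, node):
        # Only a named counterparty's signature counts.
        if node in self.counterparties:
            self.signatures.add(node)

    def is_final(self):
        # Requires the consent of at least two counterparty nodes;
        # the rest of the network is not involved.
        return (len(self.counterparties) >= 2
                and self.signatures == self.counterparties)

tx = Transaction(["bank_a", "bank_b"])
tx.sign("bank_a")
print(tx.is_final())  # still awaiting bank_b
tx.sign("bank_b")
print(tx.is_final())  # both counterparties have consented
```

Contrast this with a global-consensus design, where every validating node would have to process (and therefore see) the transaction before it could be considered final.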

Another bit lost was our new intro video to Corda, which explains in very plain language what Corda is (and isn't).

But don't just take our word for it. Sign up for Corda training or join our Slack, and see (and debate) for yourself!

Links

When is a blockchain not a blockchain?

We’ve been flattered by all the attention Corda has received this past week. It’s just too bad the story isn’t a story.
 

The issue of semantics is always a challenge as new ideas, technologies and cultural phenomena work their way into mainstream consciousness and the media. Rewind a few years and who would have thought the Oxford English Dictionary’s definition of ‘meme’ would be updated to refer to a picture of a grumpy cat or a sad Michael Jordan on Instagram?
 

When we launched R3 in 2015, we were among a handful of companies inspired by the technology underpinning bitcoin, known as blockchain, and its potential application to wholesale financial markets. Conversations in boardrooms and the media revolved around blockchain, which at that point was the most pertinent example of distributed ledger technology in the mainstream consciousness.
 

Humans are creatures of habit. As time went on, the term blockchain came to be associated with any type of distributed ledger, even as the technology matured and evolved to meet the needs of different groups of users. This isn’t an issue unique to our space. The marketing team at Canon must have spent countless hours working out how to stop people referring to all copy machines as Xeroxes.
 

While we were almost certainly guilty of slipping into this semantics trap now and again, we’ve said from the beginning that Corda is a distributed ledger platform, not a traditional blockchain platform. It was never designed to be one.
 

At the outset, our architecture team identified its first priority: to decide whether to adopt, adapt or build. Put simply, if we had found another platform in the market that was fit for purpose for regulated financial institutions, such as a traditional blockchain, we would have had no need to build our own; we would have gladly adopted it wholesale or adapted it as necessary.
 

Blockchains are specific pieces of software originally built to handle transactions of virtual currencies such as bitcoin and ether. Together with our bank members, we realised early on that this technology could not be applied blindly to wholesale financial markets without careful consideration: changes must be made to satisfy regulatory, privacy and scalability concerns. And that is what we have done with Corda.
 

Corda’s open source distributed ledger technology was designed from the ground up to address the specific needs of the financial services industry. It is heavily inspired by and captures the benefits of blockchain systems, but with design choices that make it able to meet the needs of regulated financial institutions.
 

Crucially, Corda restricts access to data within an agreement to only those explicitly entitled to it, rather than the entire network. And financial agreements on Corda are intended to be enforceable, linking business logic and data to associated legal prose in order to ensure that the financial agreements on the platform are rooted firmly in law.
 

There are currently very few tangible examples of distributed ledger platforms in the market – and none that were developed with over 70 global institutions from all corners of the financial services industry. Corda is unique, and its launch was a landmark moment for the market.
 

When is a blockchain not a blockchain? When it’s Corda.

 
