Industry Spotlight: Syntropy’s Bill Norton on DARP, Blockchain, and the Internet

August 16th, 2021

The optimization of internet traffic has been a topic since there was traffic in the first place. Peering and interconnection between the islands of infrastructure that make up our digital world are seeing the same trends of automation and control, and lots of smart folks are trying to get it right. With us today to delve into the subject is one of the industry’s leading voices on peering and interconnection, Bill Norton. Bill is co-founder and Chief Technical Liaison of Syntropy, formerly NOIA, which is developing a blockchain-based platform that it believes could change the way applications are developed and networks are run.

TR: How did you get started in the study of internet traffic flows and relationships? 

BN: I’ve been around the internet industry for a very, very long time. After graduating from SUNY Potsdam, I did some consulting for IBM when they were producing the PS/2 and developing OS/2 before being recruited to join the University of Michigan’s Merit Network and the NSFNET project. I wasn’t actually recruited for the NSFNET project itself, so I had to kind of finagle my way in by volunteering to do anything that needed to be done: setting up computers and the NOC and writing software called Internet Rover that monitored the core of the internet. That core was composed of about 13-17 sites around the United States, interconnecting the research institutions, and it just measured the performance and noticed when there were transmission issues and so forth. From there, I was asked to chair meetings between the research institutions across the United States that owned and operated their own networks. Those meetings eventually became NANOG, and I became the first NANOG chair. After I received my MBA from the Michigan Business School, I was recruited to help launch what became known as Equinix, the global data center company, where I was co-founder and Chief Technical Liaison from startup through IPO and then the dot-com crash. After 10 years I cashed out my stock options and wrote a book called The Internet Peering Playbook about what I learned from doing the job: how the Internet is interconnected, the nature of the business relationships between the ISPs, and when and why they decided to peer or not with each other. For the next five years or so I did consulting all around the world before helping found Console Connect, which provided Internet bypass interconnections.

TR: What was your journey from there to the founding of Syntropy?

BN: One of the things Console Connect had to do was to prove that connecting directly instead of going through the internet was better. In order to make that case, I did some research. I deployed about 50 nodes across all the different major cloud companies around the world, and I started doing performance measurements. I started noticing anomalies. A cloud instance doing measurements to another cloud instance might see a consistent 126ms latency, and then all of a sudden jump to 154ms for four hours and then go back down. After talking with my colleagues in NANOG and elsewhere, I learned that this was due to MPLS auto-optimization, a way for the routers to automatically adjust the balance of their underlying transport links so that they end up being about equal in utilization. And it was this data and insight that then got me connected up with the Syntropy guys.

I started analyzing how you could get better performance from an underlying transport that is highly variable, with anomalies that are completely out of my control. It turns out that you can, in fact, identify those problems and decide when to go direct and when it is better to relay traffic through a better-performing Internet path instead. This led to the development of DARP, the Distributed Autonomous Routing Protocol.
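
To make the direct-versus-relay decision concrete, here is a minimal Python sketch of the kind of choice Norton describes, assuming each node already holds a matrix of recent one-way latency measurements. The function name, threshold, and sample numbers are illustrative, not part of DARP itself.

```python
# Minimal sketch of the direct-vs-relay decision, using a hypothetical
# latency matrix; names and thresholds are illustrative only.

def best_path(latency_ms, src, dst, improvement_threshold_ms=5.0):
    """Return the path (list of nodes) with the lowest measured one-way latency.

    latency_ms[a][b] is the most recent one-way latency from a to b, in ms.
    Only single-relay paths are considered, and a relay is chosen only when
    it beats the direct path by more than improvement_threshold_ms.
    """
    best = [src, dst]
    best_latency = latency_ms[src][dst]
    for relay in latency_ms:
        if relay in (src, dst):
            continue
        via = latency_ms[src][relay] + latency_ms[relay][dst]
        if via < best_latency - improvement_threshold_ms:
            best, best_latency = [src, relay, dst], via
    return best, best_latency

# Example: the direct path suffers an MPLS re-balancing spike (126 ms -> 154 ms),
# while relaying through a third node stays cheaper.
latency_ms = {
    "A": {"A": 0, "B": 154, "C": 60},
    "B": {"A": 154, "B": 0, "C": 70},
    "C": {"A": 60, "B": 70, "C": 0},
}
print(best_path(latency_ms, "A", "B"))  # (['A', 'C', 'B'], 130)
```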

TR: What opportunity is Syntropy targeting?

BN: Our mission is to establish an open alternative data security and routing system, operating right on top of the existing Internet protocols. As part of this mission, we secure and optimize the public internet by routing around congestion. You can buy a low-end internet service and run our stack, and you get an encrypted mesh network that continuously measures performance to determine if it is better to send data along an alternate path. Low-cost ISPs minimize their own costs by keeping the packets on their own backbone. But when paths are congested, that traffic might be better delivered, at lower latency, by automatically detecting that it should go through a higher-quality network. An analogy might be that it is like buying a Nissan Leaf but getting to drive a Tesla.

TR: What is DARP, and what makes it different?

BN: DARP stands for the Distributed Autonomous Routing Protocol. What we found is that if we do continuous one-way latency measurements between a collection of internet-connected devices, we can force traffic along the best measured path. Each of these measurements is a single packet. The payload of each single-packet measurement is the set of previous measurements to that node. So when a receiver gets that packet, it knows what all of the measurements to that router were. Because all of these routers are doing this, every node is endowed with a full matrix of one-way latency measurements, and therefore knows how all nodes should forward traffic. This complete knowledge is exactly what the blockchain needs for validation of forwarding and for crediting those nodes that relay packets.
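
A rough sketch of how those single-packet measurements could accumulate into a full latency matrix is below. The class and field names are hypothetical, and DARP's actual wire format and clock handling are not shown; it simply illustrates a probe carrying the measurements previously made toward its sender.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Probe:
    sender: str
    sent_at: float            # sender timestamp; assumes reasonably synced clocks
    latencies_to_sender: dict = field(default_factory=dict)

class Node:
    def __init__(self, name):
        self.name = name
        self.matrix = {}      # matrix[a][b] = latest one-way latency from a to b

    def make_probe(self):
        # Payload: everything this node has learned about latency *toward itself*,
        # i.e. the measurements other nodes made when probing it.
        to_me = {src: row[self.name]
                 for src, row in self.matrix.items() if self.name in row}
        return Probe(sender=self.name, sent_at=time.time(), latencies_to_sender=to_me)

    def receive_probe(self, probe):
        # The probe itself is a fresh one-way measurement from the sender to this node.
        observed = time.time() - probe.sent_at
        self.matrix.setdefault(probe.sender, {})[self.name] = observed
        # Fold in what everyone else measured toward the sender, so every node
        # gradually converges on the same full matrix of one-way latencies.
        for src, latency in probe.latencies_to_sender.items():
            self.matrix.setdefault(src, {})[probe.sender] = latency

a, b = Node("a"), Node("b")
b.receive_probe(a.make_probe())   # b now knows the a -> b latency
a.receive_probe(b.make_probe())   # ...and a learns it back via b's payload
```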

TR: How does blockchain take advantage of that information?

BN: Blockchain is really a fairness insurance mechanism. It ensures fairness between those who share by relaying traffic for others and those who use the better paths of others. The blockchain acts as a distributed ledger to keep track of both. We leverage the same technology that secures cryptocurrencies to ensure fairness between users and sharers of the encrypted mesh network.

TR: Who is it that plays each role in such an ecosystem?

BN: There are really three roles here. One role is that of a relay operator, who earns tokens based on the amount of traffic relayed between two endpoints. However, we have found that when we deploy 10K nodes around the world, the chances are good that many of those nodes will not be on a better path for anybody else in the network. So there are two more roles you can play in the network: validator and nominator. The nominator, by nominating their tokens, decides which validators should be used. Whereas the Bitcoin blockchain uses a consensus algorithm called proof of work, basically solving a complex math problem in order to earn the right to offer a block on the blockchain and earn transaction fees, the Syntropy blockchain uses Nominated Proof of Stake, which instead uses the wisdom of crowds: participants vote with their money on which validators are authentic and good and should be staked on. That leaves the third actor in this play: the validator. The validator’s role is to author new blocks on the blockchain. Validators and nominators share the rewards they gain from honestly producing and validating blocks. In the event of a dishonest or invalid block, malicious validators (and the nominators that have staked on them) get their stake slashed (taken away or burned).
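
As a toy illustration of those incentive mechanics, the snippet below shows stake-weighted reward sharing and slashing. The amounts, names, and rules are invented for illustration and do not reflect the actual parameters of the Syntropy chain.

```python
from dataclasses import dataclass, field

@dataclass
class Validator:
    name: str
    own_stake: float
    nominations: dict = field(default_factory=dict)   # nominator name -> staked tokens

    def total_stake(self):
        return self.own_stake + sum(self.nominations.values())

def distribute_reward(validator, reward):
    """Split a block reward pro rata between the validator and its nominators."""
    total = validator.total_stake()
    payouts = {validator.name: reward * validator.own_stake / total}
    for nominator, stake in validator.nominations.items():
        payouts[nominator] = reward * stake / total
    return payouts

def slash(validator, fraction):
    """Punish a dishonest/invalid block: burn a fraction of all backing stake."""
    validator.own_stake *= (1 - fraction)
    for nominator in validator.nominations:
        validator.nominations[nominator] *= (1 - fraction)

v = Validator("validator-1", own_stake=50_000,
              nominations={"alice": 100_000, "bob": 50_000})
print(distribute_reward(v, reward=1_000))  # {'validator-1': 250.0, 'alice': 500.0, 'bob': 250.0}
slash(v, fraction=0.1)                     # everyone who backed the validator loses 10%
```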

TR: Where will all this automation take us?  What does the ecosystem look like when it all comes to fruition?

BN: I’ve been a telecom guy for a few decades, and one really interesting thing is that we see a continuous decline in the price of Internet transit. Back in 1998 the price of Internet transit was $1,200/Mbps. Four years later it was $120/Mbps, and everyone would say ‘I don’t know how anybody’s making money’. Four years later it was $12/Mbps, and today, if you buy in volume, it’s near $0.12/Mbps, and people still say ‘No one is making any money’. But what this continuous decline really means is that our industry is heading towards micropayments whether it likes it or not. Salespeople can’t afford to go out and make a sale unless it’s at least $500/month, and it is becoming impractical to support written contract overhead at that price. We’re heading towards automation, where the end users in this model will interact directly with the blockchain to establish their encrypted mesh networks and pay for their use via automated microtransactions. As an automated system, this ecosystem adds incremental margin to the ISPs, because relaying traffic in an encrypted mesh ecosystem provides one more way for an ISP to make money.
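
Taking those quoted price points at face value, and reading "today" as roughly the article's 2021 date, a quick back-of-the-envelope calculation shows the pace of that decline and why per-deal sales overhead stops making sense:

```python
# Back-of-the-envelope check on the transit price decline quoted above; the
# year for "today" is taken as 2021 (the article date) and is approximate.
prices = {1998: 1200.0, 2002: 120.0, 2006: 12.0, 2021: 0.12}   # $ per Mbps per month

years = sorted(prices)
first, last = years[0], years[-1]
total_drop = prices[first] / prices[last]                       # ~10,000x cheaper
annual_decline = 1 - (prices[last] / prices[first]) ** (1 / (last - first))
print(f"{total_drop:,.0f}x cheaper over {last - first} years "
      f"(~{annual_decline:.0%} lower every year)")

# At $0.12/Mbps, a $500/month minimum deal is roughly a 4 Gbps commit; below
# that, sales and contract overhead swamp the revenue, which is the
# micropayments argument.
print(f"{500 / prices[last]:,.0f} Mbps needed to reach a $500/month deal")
```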

TR: That’s a very interesting ecosystem you envision, where does Syntropy itself fit into it?

BN: We want to be the software developer for this platform, one that other companies can build on and earn money from, and to provide the open-source application framework for secure application development. If you think about what’s happening on the Internet today, all you hear about is ransomware and nation-state cyberattacks. The internet grew rapidly because it had an open architecture where anyone connected to the network could interact with anybody else. Well, today, we’re finding that there are some downsides to having billions of devices able to access the Internet-connected devices at your house.

Really what people want, what people need, is access to maybe a couple of hundred sites, not billions. By having an encrypted ecosystem that the end users create and connect into, we can secure these endpoints. Consider a software developer who creates a Docker container with a Syntropy stack as a fundamental piece, which is then downloaded by two or three other parties. The point of this container is to create a purpose-driven network, with each party securely connected to the others over the encrypted public internet. All the containers see themselves as being on the same local network, and all security is automatically taken care of. Traffic is automatically relayed between them when there’s a better path. As new containers get spun up, they automatically connect to the network. This is a better and more secure way of building applications that run specifically between end users and content providers. To me, this is a very exciting new way of building applications with built-in networking and encryption.
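
To picture that developer-facing view, here is a purely hypothetical Python sketch of a "purpose-driven network." The MeshNetwork class is invented for illustration and is not Syntropy's actual SDK; it only shows the intended effect of a few services behaving as if they shared one private LAN, with everything else unreachable.

```python
class MeshNetwork:
    """Stand-in for the encrypted overlay: assigns private addresses and only
    lets members of the same named network reach each other."""

    def __init__(self, name):
        self.name = name
        self.members = {}

    def join(self, service_name):
        addr = f"10.69.0.{len(self.members) + 1}"   # virtual LAN address
        self.members[service_name] = addr
        return addr

    def send(self, src, dst, payload):
        if src not in self.members or dst not in self.members:
            raise PermissionError("only members of this network can talk to each other")
        # In the real system this hop would be encrypted end to end and relayed
        # over whichever Internet path currently measures best.
        return f"{self.members[src]} -> {self.members[dst]}: {payload}"

net = MeshNetwork("game-backend")
net.join("api")
net.join("database")
print(net.send("api", "database", "SELECT 1"))
# A container that never joined the network simply is not reachable, which is
# the security model: a few hundred intended peers, not billions of devices.
```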

TR: How mature is this technology?  How close are we to having such an ecosystem work the way you envision?

BN: Well, everything continues to evolve. Make no mistake about that. We are building the test network as we speak, with validators staked, and soon nominators will be able to stake in a controlled environment. The validators in the test net right now are run by community members that have staked up to the maximum of 200,000 Syntropy tokens. Many are run by community members that have been working with us for the last couple of years. Once the test net is live and has been running for some time, we will migrate to what is called a main net, which will be a living, breathing autonomous system. Participants in this fully distributed model can vote on when and how to evolve the smart contracts, but the blockchain software is going to continue to evolve, even on a weekly basis. That’s the nature of the beast.

TR: Who do you think the biggest proponents will be?

BN: Once we have it up and running and everyone’s making use of it, content providers will finally get what they’ve been asking for for decades, which is the ability to control egress routing to the eyeballs. They have forever had to throw the packet over the wall to the ‘best effort’ services of the ISPs. There have been no controls for the content provider to say, “Based on our experience, based on the retransmissions, we want to use this different path.” But now they will have the knobs, the ability to guide the traffic through alternative paths.

TR: How much better might network performance be?

BN: One of the deployments we did was between Azure, AWS, and other cloud providers, continuously measuring one-way latency to each other. What we found is that, just within AWS-to-AWS traffic, a 17% improvement was detected using relays instead of going direct. Just within Canada we saw a 40% improvement, an average of 16 milliseconds better by going through relays. If you go across a wider swath, such as from Azure in Ireland to AWS in South Africa, you’ll see a much better, more consistent improvement in performance.

The best performance that we see is when you have a very heterogeneous underlying Internet. The more ISPs you have, or the more cloud providers you’re going between, the more likely we are to find a better path. When we deployed 500 nodes across all the different ISPs and cloud providers we could find, 70% of the paths could be optimized. This means that ISPs with a DARP overlay option have the opportunity to “own the customer” with service-level improvements and instrumentation that span all ISPs involved in their customers’ network traffic. For US-based enterprises with sites across the globe, this is a huge operational benefit.

TR: What types of content do you think would benefit the most?

BN: I did some work with a couple of different cloud gaming providers, and one of the things that I found is that when a game becomes unusable because of the network, there are very few things that the cloud gaming company can do to diagnose and rectify the situation. They can monitor the stack for retransmissions and jitter and try to vary things like the encoding rate or the resolution on the end systems. We are saying they won’t need to do that anymore. They should have a network layer that understands the nature of the connections between the endpoints. The network layer should be responsible for measuring current performance and identifying better paths through intermediaries.

So in short, all Internet content in the future should be secured and optimized to the end users. All packets should be encrypted such that only the end points can decrypt them, and all end systems should be able to take advantage of a neighbor’s better connection across the shared encrypted network.

TR: Who needs to buy in to make this all happen?  Whose interest do you need to grab?

BN: There’s a developer audience that we’re trying to reach right now. We want to see developers start building applications on top of this secure and encrypted network. Building those distributed applications on top of this is something I’m really excited about.

Then there are the Internet service providers that already have co-located environments rich in network connectivity. I see them as having key relay node opportunities, and they should want to do it for two reasons. Firstly, to earn the incremental revenue and to participate in the micropayment capabilities; you should not need to hire a salesperson to sell a few Gbps of premium traffic. And secondly, to own the customer regardless of how many other ISPs are participating in the interaction. If you have an MPLS solution spanning 15 ISPs and 15 different countries, this gives you a chance to unify them collectively into an encrypted mesh network that you can hand off to the customer.

And finally there are the content providers, who have their first chance to control egress routing and provide a better end user experience.            

TR: Thank you for talking with Telecom Ramblings!


Categories: Blockchain · Industry Spotlight · Internet Traffic
