Industry Spotlight: Software-Defined Mainframes with LzLabs’ Mark Cresswell

November 23rd, 2020

As the wave of migration of applications to the cloud continues, we sometimes forget that this is not one problem but rather a multitude of problems, each with its own special issues and considerations. That leaves room for companies to zero in on opportunities that the broader solution sets don't handle. LzLabs is one of those companies, targeting a unique enterprise computing niche that many people have forgotten is still out there: mainframes. With us today to talk about LzLabs' unique position within enterprise cloud migration is Executive Chairman Mark Cresswell.

TR: Tell us a bit about LzLabs' origins and how you got involved.

MC: We’re a software development company headquartered in Zurich, Switzerland, with offices in London, Toronto, and Paris. There are thousands of large organizations out there that still run mission-critical workloads on IBM legacy mainframes. Many of them want to move those applications into the cloud, and we’ve got technology that makes that very easy to do. The idea of running legacy mainframe workloads on platforms other than a mainframe has been around for 20-30 years, but one of the biggest problems encountered when organizations want to do that is that they can’t find the source code. If you can’t find the source code, you can’t recompile it to run on a different computer architecture. We have a few bright guys who came up with a way around that particular problem, and so they founded the company and raised money from investors. I got involved in 2015 because I’d worked with the investors for quite a long time, and essentially, I provided the executive oversight to get the product finished, launched, and released into the market.

TR: What types of applications are people still running on mainframe technology, and what happened to the source code?

MC: They can be things like credit card authorizations and logistics planning, and more specifically in the telecommunications market things like landline and broadband billing systems, number allocation, and so on. These are applications that were originally written in the 70s and the 80s. They were written in COBOL, PL/I, and even Mainframe Assembler. Back then, source code management wasn’t a discipline the way it is today, and there weren’t many tools to manage the relationship between source code and executable, so a lot of these organizations can’t reliably identify the source code from which the binary executables were built.

TR: So what form does your offering take?  Do you emulate the mainframe operating system?

MC: We eliminate the problem of having to recompile when you want to change platforms. We enable those mainframe binary executables to run in the cloud. We don’t emulate anything. The best mental model for what we do is a Java Virtual Machine, which can read Java bytecode and at runtime cross-compile it into native executable code. We can read the mainframe binary, and at runtime we just translate it into something that will execute on x86. Additionally, we have developed a software layer that enables open-source operating systems and databases like Postgres to behave the same way as their legacy mainframe counterparts.
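The JVM analogy above can be made concrete with a deliberately tiny sketch: a made-up "legacy" instruction encoding is translated into host-native callables at load time and then executed directly, rather than being interpreted or emulated. Everything here (the opcodes, the encoding, the accumulator model) is invented for illustration; LzLabs' actual translation technology is, of course, far more involved.

```python
# Illustrative sketch only: translate a toy "legacy" binary into host
# callables once, up front, then run the translated form natively --
# loosely analogous to a JVM cross-compiling bytecode at runtime.

# Toy encoding: each instruction is an (opcode, operand) pair.
LOAD, ADD, MUL, HALT = 0, 1, 2, 3

def translate(binary):
    """Map each legacy instruction to a host-native closure (done once)."""
    ops = {
        LOAD: lambda n: lambda acc: n,        # acc <- n
        ADD:  lambda n: lambda acc: acc + n,  # acc <- acc + n
        MUL:  lambda n: lambda acc: acc * n,  # acc <- acc * n
    }
    return [ops[op](arg) for op, arg in binary if op != HALT]

def run(binary):
    """Execute the translated program against an accumulator."""
    acc = 0
    for fn in translate(binary):
        acc = fn(acc)
    return acc

program = [(LOAD, 6), (MUL, 7), (ADD, 3), (HALT, 0)]
print(run(program))  # prints 45
```

The key design point the sketch tries to capture is that translation happens against the binary itself, so no source code is required, which is exactly the constraint described above.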

TR: Why hasn’t the old code simply been rewritten based on the known logic and functionality it provides?

MC: There is a general apathy toward rewriting old programs that continue to work. I mean I think this is endemic in any technology-driven organization. Aspirational technology is new. It’s innovative. It’s shiny. It’s doing something different. Going back and revisiting old programs that continue to work, just to put them in a new environment, isn’t something that gets prioritized. Some companies have done it, but most have not. As the years have gone by, the people capable of doing that refactoring have retired and left the workforce; the institutional knowledge that is required to refactor anything has left the building. Coupled with the lack of source code, that makes it just too difficult to look at an executable program and truly understand what it’s doing. You can see the inputs and the outputs, but reconstructing the core processes, pathways, and everything that underpins them is actually quite difficult.

TR: Given that level of apathy, how do you motivate your customers to shift to the cloud?

MC: Actually, we don’t usually have to convince anyone. If you have to convince someone to shift, then they’re probably not ready to do it. There are three main reasons why many companies are ready to move. The first is that the cost of running these applications on mainframes is extreme when you compare it with what it will cost to run the same application in the cloud. The second, and probably the main, reason is the risk associated with doing nothing. So many people have left the workforce that, when these systems fail now, there are very few people around who know how to manage the environment to recover it. There are so many arcane procedures and concepts in the world of the legacy mainframe that, without that really focused systems administration knowledge, the likelihood of system failure increases greatly. The third reason is that as large organizations move to more digital channels to engage with their customers – telcos are great examples – the ability to manipulate and move these legacy applications forward to be competitive is impaired by the fact that legacy mainframe software infrastructure has not kept pace with the rest of the industry.

TR: What other benefits can enterprises see from moving such workloads to the cloud this way?

MC: We eliminate a lot of the friction and risk of moving those applications to the cloud, so that enterprises can much more easily leverage the value of those applications in concert with other modern technologies. A classic example is billing applications. Billing and customer management applications for landlines and broadband have existed for decades, and that data is really valuable as modern telecommunications and integration markets emerge. We make it really easy for that data to be leveraged more effectively. Once you take away the risk, and the difficulty of shifting to the cloud, all the other opportunities that you get from running in the cloud open up.

TR: Is anyone else out there attempting to do this?

MC: We’re the only guys doing it this way. It’s actually not a task people undertake lightly, because it’s quite a capital-intensive software product to build. We were fortunate enough to have a group of investors that shared this vision.  What we’re trying to do is support applications built on 40-year-old technology that’s evolved over those 40 years; we’ve had to develop a lot of stuff to make it all work. But the market is enormous. Any large traditional company that started more than 20 years ago is likely to have one of these mainframes floating around somewhere.

TR: Do you have to customize it for each specific application, or is it a general solution?

MC: Generally speaking, it’s uniformly applicable. Of course, there are certain edge cases that we don’t support. And, like any good software product, you never really finish it. We continue to enhance it, to address use cases that we haven’t encountered before.

TR: So what is next on the horizon for LzLabs?

MC: We are really focused on our Software Defined Mainframe at the moment, because we are right at the start of the journey; we have only had the product generally available for a couple of years, so we really want to consolidate our leadership position in the market. We do work with adjacent technologies that can benefit from the fact that all this legacy corporate data is now in a much more accessible form, such as data mining, analytics, and machine learning: technologies that can interoperate with the data more easily when it’s in the cloud. But we’re very focused on making the Software Defined Mainframe a very broadly applicable platform, and we’ve got a long list of enhancements to make.

TR: Do you have any specific examples where your software-defined mainframes have been deployed?

MC: We do have a very large success story in Switzerland with Swisscom. They moved their entire billing and number allocation system over onto our Software Defined Mainframe. They have been running it in production now for about 15 months and have decommissioned the whole mainframe. We have a very large European bank that is running credit card management applications on it. They have been moving the workload off their mainframes incrementally, and it has been a success thus far. And we have a number of insurance, pharma, and telco companies on the journey to move applications as well.

TR: So what’s the biggest challenge ahead for you? Just ramping up the customer wins, or what?

MC: We are taking it one customer at a time, making sure each one is a success, and then building on each success to scale the business. We are not going to get out over our skis on that. Beyond that, it’s just letting people know that this kind of technology exists and that it can be, and has been, done successfully. No one needs to be the first anymore. When we engage with customers, we want to convince them that our technology will work and will do the job. But we’re not really in the business of trying to convince people that they should exit their mainframe or move their applications off. That’s a conclusion that they reach themselves. Once they’ve reached that conclusion and are committed to exiting the mainframe, then we will walk over hot coals to convince them that our technology is the best way to do it.

TR: Thank you for talking with Telecom Ramblings!
