Big bandwidth and the cloud may get most of the press these days, but the systems keeping the lights on and the temperatures cool do so much of the heavy lifting of today’s internet infrastructure. With us today to offer the view from under the hood of the internet is Ken Rapoport, founder and CEO of Electronics Environments Corporation. EEC has been in the business of power and electrical systems since April of 1986, building and maintaining infrastructure through multiple generations of technology.
TR: How did EEC get its start, and what is your main focus?
KR: We started back on April 1, 1986, so we've been in continuous operation since then. Our strategy has been to be involved in all facets of the complete lifecycle of mission critical facilities, from the design, through implementation and construction, and into ongoing service and facilities management. Through what we call “Integrated Critical Infrastructure Solutions”, we aim to deliver three key components: (1) reliability and availability, (2) performance and capacity, and (3) return on investment and value. Our core focus is on electrical systems, mechanical systems, and power generation, serving three broad markets: data centers, broadband, and telecommunications.
TR: What kind of projects does EEC specialize in?
KR: Our strength is interior fit-up. We don't build the base building so much as the data center within it and the ancillary supporting infrastructure. We have electrical and mechanical PEs on staff, so we'll provide drawings and work with regional engineering firms that know the local codes. We'll design the DC power plants, UPS power and emergency distribution systems, power generation, and the architectural envelope for the room. Sometimes we'll take care of the whole thing, hiring the subcontractors and managing the build itself. Afterwards, we set up maintenance programs based on what the customer might need as well as the baseline acceptance testing results. Then we'll work with them to establish the most appropriate maintenance programs, including on-site repairs and on-call 24/7 support, whether in-house, mobile response, or a hybrid solution.
TR: How do you differentiate yourselves?
KR: We've always thought that when we design, construct, and implement data centers, the key is to understand the whole lifecycle. We do a lot of upgrades to live data centers and telecom sites. When we design something, we look for ease and speed of implementation, and then reliability in the form of “what if” scenarios. Unfortunately, things do break! So we ask how facilities people can get around the inevitable equipment problems, fix what they need to fix, and have it be completely transparent to the end user. Without a total lifecycle view, that perspective too often gets lost in engineering-only or construction-only services. Being versed in all aspects of the lifecycle lets us design and build for serviceability.
TR: What has been the biggest change you’ve seen over the last decade or so in the underlying technology of the electrical and power systems that drive the internet as we know it?
KR: The biggest technology improvement in the core of UPS systems, transformers, power transistors, and such has really been the logic. When we started, systems had discrete analog components, and there were more failures in the way the systems worked together. Then came digital, asynchronous logic and microprocessors, and reliability increased dramatically from a control standpoint. Microprocessor-based control systems are also much more efficient: they can read more data points, make more adjustments, and essentially adjust to optimize energy utilization. That's true not only on the electrical side but also on the mechanical side. There we've seen some real innovations, from low-hanging fruit like containing air where it needs to be in cold aisles or hot aisles, to more efficient adiabatic cooling, water-side/air-side economizers, etc.
Now companies like Facebook and Google are even stripping UPS systems out of data centers, because technology, communication, and replication have made it possible to build multiple Tier 2/3 data centers with equivalent or superior reliability to a single Tier 4 facility. But that isn't applicable to everyone; it depends on the software applications they are using.
TR: How do you balance costs with reliability?
KR: On the seven levels of infrastructure, physical infrastructure is the lowest level -- the foundation. If you don't start on a good foundation, whatever you build on it is at risk. We're very data driven. We have learned through the process of building and operating data centers, and our engineers go back through the service data to pull out nuggets of information to improve our solutions. We developed a customer portal called Infrastructure Manager (a precursor to today's DCIM). It tracks all of a customer's infrastructure assets and service programs. We also use it to look at details like statistics on temperature, humidity, voltage aberrations, etc., in conjunction with parts failures across many manufacturers, to see which systems are most reliable.
We also look more broadly at how we provide value. Sometimes our customers' initial preventive maintenance costs go up a small amount because of our holistic approach, but we often see a drastic reduction in emergency service calls the second year (42% in our data), which translates to both higher reliability and reduced costs overall. Customers have the data to analyze the performance and see that a program really was cost effective.
TR: Power and cooling seem like a fundamental piece of any network, why does it get outsourced to specialists like EEC?
KR: Telecoms and data centers these days are always looking to reduce costs. But there is a shortage of qualified people out there for this work, and what happens is that we become an integral part of a telecom or data center provider's strategy. We often take care of infrastructure that customers used to take care of themselves. Today we can assist and train customers to be first responders, provide on-site engineers, or contract for comprehensive mobile response services. So they can simply dial one number and say they have a problem, whether it's with a generator, air conditioning, or a UPS. The telecom companies are also getting much smarter about energy. Their focus was always electrical, but they are becoming much keener on the cooling side, realizing how much money they can save there.
TR: How does the maintenance and design work you do compare with what vendors offer to operators when equipment is bought and installed?
KR: The vendors themselves tend to look at their box and not outside of it. If it's not their box, it's not their problem, whereas we look at the whole system to solve it proactively. We'll trace the problem to the breaker, or the control system, or possibly the air conditioning feeding into a unit and causing thermal overloads. Afterwards we'll do analysis and forensics, and suggest enhancements they can budget for in following quarters. Frequently, vendors -- who specialize in their own gear -- will subcontract us to service equipment from their competitors. So EEC can service not just different types of equipment; we can service mixed-vendor environments.
TR: We hear lots of talk about tapping green technologies to power all the telecom and data infrastructure the world is building, how much of it do you actually see implemented?
KR: There is certainly interest in it, but I think it is always balanced against reliability, performance, and ROI. We've put in fuel cells, and we've removed fuel cells too. I think there's a huge push from the C-suite to make use of innovative technology and have a low PUE, whether through solar power or fuel cells. There are also various air-side and water-side economizers, and some interesting intelligent air distribution/demand systems out there. There are all kinds of innovative approaches, and customers want to see them. They just don't want any reduction in reliability, which is always a challenge with new products that still have unknowns. As any newer technology becomes more proven, it will become more widely used. We often help our customers take advantage of utility incentives or rebates to improve the ROI of moving to greener, more efficient equipment.
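For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is simply total facility power divided by the power that reaches the IT equipment itself; a PUE of 1.0 would mean every watt goes to IT gear. The load figures below are made up purely to show the arithmetic:

```python
# PUE = total facility power / IT equipment power.
# All load figures are illustrative assumptions, not measurements.

it_load_kw = 1000.0        # IT load: servers, storage, network
cooling_kw = 450.0         # chillers, CRAC units, fans
power_losses_kw = 120.0    # UPS and transformer conversion losses
lighting_misc_kw = 30.0    # lighting and miscellaneous loads

total_kw = it_load_kw + cooling_kw + power_losses_kw + lighting_misc_kw
pue = total_kw / it_load_kw
print(f"PUE = {pue:.2f}")  # PUE = 1.60
```

Economizers and air containment attack the `cooling_kw` term, while the DC-distribution ideas discussed later attack `power_losses_kw`; both pull the ratio toward 1.0.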
TR: What’s the next big thing on the horizon for electrical and power in the world of internet infrastructure?
KR: We are looking at a very innovative high-voltage DC data center strategy, where you go from AC to DC once and then distribute everything via DC to air conditioners, lighting, and information and communications technology (ICT) equipment. It's been talked about for a long time, and obviously in the telecom world we've seen a lot of customers getting rid of UPS systems completely. They prefer to stay with DC because of the high reliability, lower part count, and low PUE. Every time you go from AC to DC and back to AC there is a loss of efficiency. But it takes a long time for people to get comfortable with new technology.
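The efficiency argument for DC distribution comes down to compounding conversion losses: every AC-to-DC or DC-to-AC stage is a few percent inefficient, and the stages multiply. A rough sketch, with assumed per-stage efficiencies (the specific figures are illustrative, not measurements of any particular product):

```python
# Chained power-conversion stages multiply their efficiencies, so each extra
# AC<->DC hop compounds the loss. Per-stage figures below are assumptions.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for e in stages:
        eff *= e
    return eff

# Conventional double-conversion UPS path: AC -> DC (rectifier) -> AC
# (inverter), then AC -> DC again inside each server power supply.
conventional = chain_efficiency([0.96, 0.96, 0.92])

# HVDC distribution: one rectification stage, then DC straight to the load.
hvdc = chain_efficiency([0.96, 0.95])

print(f"conventional: {conventional:.1%}")  # conventional: 84.8%
print(f"HVDC:         {hvdc:.1%}")          # HVDC:         91.2%
```

Even with generous per-stage numbers, removing one conversion hop recovers several points of end-to-end efficiency, which is the core of the HVDC pitch.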
TR: Can you tell us about any unique, memorable projects you’ve been involved with, and how they went?
KR: Working in a live telecom site or data center is always a thrill. One time there was a legacy system that started with a 13.8kV-to-208V main transformer, with distribution to 208V-208V UPSs feeding 208V-208V power distribution units (PDUs) going into the data center. It was a huge kludge, and they wanted to take it from 13.8kV to 480V -- higher voltage, better distribution, better efficiency. But they wanted to do it in the same space and without any downtime. So we went to the drawing board and asked how we were going to do this, and at first everyone said there was just no way. But there's always a way. It proved impossible without any downtime at all, but we got it down to a minimal window. We built out half of a multi-unit UPS system in some available space and worked around the clock for 24 hours with a team of 38 people. We changed out the high voltage transformer, cut over to the 480V UPSs, and fed new 480V-208V PDUs on the floor going out to the computers. Over the next 30 days, we then disassembled the other UPSs and built the other half of the 480V UPS system while it was live.

I remember we had our team in there early one morning, and we'd been up 24 hours, and the client held a big meeting with IBM and wouldn't let us in. Then they came out and were adamant that the system wasn't working at all. We had thoroughly tested everything and could not believe this was possible. It turned out IBM never turned on the main breaker to the PDUs. Everything was working; they just forgot to flip the final switch. It was a very memorable project.
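The “higher voltage, better distribution, better efficiency” point can be sketched with a little arithmetic: for the same delivered power, current scales inversely with voltage, so resistive feeder losses (I²R) fall with the square of the voltage ratio. The numbers below are purely illustrative, and real three-phase feeder sizing is more involved:

```python
# Why higher distribution voltage helps: for the same power, I ~ 1/V, and
# resistive loss in feeders is I^2 * R, so loss drops with V^2.
# Load and resistance values are illustrative assumptions.

power_kw = 500.0            # load to deliver
feeder_resistance = 0.01    # ohms of conductor resistance (assumed)

for volts in (208.0, 480.0):
    current = power_kw * 1000 / volts
    loss_kw = current**2 * feeder_resistance / 1000
    print(f"{volts:.0f} V: I = {current:,.0f} A, I^2R loss = {loss_kw:.1f} kW")
# 208 V: I = 2,404 A, I^2R loss = 57.8 kW
# 480 V: I = 1,042 A, I^2R loss = 10.9 kW
```

Going from 208V to 480V cuts the current by a factor of about 2.3 and the resistive loss by that factor squared, roughly 5.3x, before even counting the smaller conductors and gear the lower current allows.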
TR: Thank you for talking with Telecom Ramblings!