Smart building projects highlight need for new utility business, regulatory models
IEEE's Erich Gunther spoke with us recently on how the concept of "smart" buildings is evolving from automation and controls for energy efficiency to a more self-sustaining model for business continuity based on the bottom line.
Gunther is an IEEE Fellow and board member of the IEEE Power and Energy Society who has been closely involved in an array of government and private sector efforts to modernize and secure the grid, including leading the National Institute of Standards and Technology's Smart Grid Interoperability Project, participating in the IEEE Power & Energy Society's Smart Grid Roadmap effort, serving as past chairman of the GridWise Architecture Council, lead consultant on the Electric Power Research Institute's IntelliGrid Architecture project, and consultant to the California Energy Commission, the Illinois Commerce Commission, and dozens of leading U.S. utilities.
FierceSmartGrid: Will you tell us about your work relating to what you're calling "Smart Buildings 2.0"?
Gunther: I'm currently working on a private sector project for a net zero energy-capable corporate campus on the West Coast. It involves the application of almost every modern and evolving energy efficiency, demand response, and distributed generation technology that you can conceive. It is designed to realize the client company's corporate goals for using renewable energy and practicing energy efficiency -- "greenness," if you will -- but perhaps more importantly it is designed to support business continuity. Power disruptions, which have been increasing in frequency in the United States, generally lead to losses in productivity. In this instance, the cost of lost productivity is quite high. Corporations are now developing a strong business case for using grid modernization technology for business continuity.
FierceSmartGrid: Would you define the current generation of Smart Buildings 1.0 and contrast that with the next generation you're calling 2.0?
Gunther: One could argue that building automation and the utility-to-commercial building customer space is reasonably mature in many ways. "Smart" interval meters and dynamic electricity rates have been used in commercial environments for some time. And we've had in-building technology that can respond to those rates through building automation systems that manage lights and HVAC set points and even monitor HVAC performance, looking for energy efficiency opportunities by detecting when something is not operating at optimal efficiency. Advanced analytics have aided this work. In California we've seen the deployment of OpenADR for demand response.
Maturity in this arena has been driven by a solid business case, by simply trying to reduce energy costs that are very visible to a commercial building owner, that directly relate to the bottom line in profitability of the building. That's Smart Buildings 1.0. Lately that approach has penetrated further into the market. It probably could have gone faster if it weren't for the fact that, in my opinion, vendors in the commercial building space have been slow to embrace interoperability. There remain an awful lot of proprietary systems out there.
So there's still work to be done in 1.0. But that approach doesn't address business continuity. What can be done to manage or control your corporation's "energy destiny"? What happens when a big storm, earthquake or major natural or manmade disaster strikes? The answers to these two questions are closely related in 2.0.
If you want to be a good corporate citizen and part of your corporate strategy is to pay attention to your sources of energy and have additional control over your energy use and efficiency – perhaps to the point of being a net zero building or organization – there are steps to take that go well beyond automation, controls and energy efficiency. There's a next-generation suite of technologies and policies that can contribute to business continuity and the resiliency of your corporate campus. That's Smart Buildings 2.0.
FierceSmartGrid: What are these next-generation technologies?
Gunther: The next generation concept is driven by the recognition that the uninterrupted productivity of the people in a physical setting has a great deal of value. That means ensuring that grid-related interruptions are imperceptible to the workforce. The corporate campus I'm working on will essentially operate as a microgrid that can seamlessly transfer from grid-tied resources over to internal resources so that people don't perceive that a power-related event has occurred.
The use cases that make sense are at the two extremes -- very short duration and relatively long duration outages. We already have processes and facilities to handle the middle of the spectrum. We're focused on the short- and long-duration events that trip stuff offline and how to maintain a supply-load balance in the facility. We use the term "seamless islanding" and, in my definition, that means that things very quickly and intelligently shut down in order to maintain supply-load balance.
The design has three main components, to keep it simple. One is the microgrid's master controller, which through situational awareness orchestrates elements of supply and load and ensures balance between the two. Another component is an array of distributed energy resources, including solar photovoltaics, fast-start diesel generators and energy storage technologies such as fuel cells and batteries. The third component is the power electronics and interfaces between elements of the distributed energy resources.
In an outage event, the supply side might require orchestrating battery banks, kicking on fast-start diesel generators or tapping other sources of energy that can be quickly ramped to replace grid power. Load side actions include implementing pre-programmed priorities that maintain critical systems while shedding less critical loads to balance the system. Perhaps HVAC systems immediately jump to a higher set point. Lighting is cut to one-quarter of typical levels. All these actions require a system-to-system signal path that must operate in milliseconds to avoid interruptions perceivable by workers.
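The priority-driven load-side logic Gunther describes can be sketched in a few lines. This is a minimal illustration, not the project's actual controller: the load names, priority values and power figures below are all hypothetical assumptions.

```python
# Illustrative sketch of priority-based load shedding in a microgrid
# master controller. All load names, priorities and kW figures are
# hypothetical -- not from the actual campus project.

def shed_loads(loads, deficit_kw):
    """Shed lowest-priority loads until the supply deficit is covered.

    loads: list of (name, priority, kw); higher priority = more critical.
    deficit_kw: supply shortfall that load shedding must make up.
    Returns the names of the loads to shed, least critical first.
    """
    shed, recovered = [], 0.0
    # Walk the loads from least to most critical.
    for name, priority, kw in sorted(loads, key=lambda l: l[1]):
        if recovered >= deficit_kw:
            break
        shed.append(name)
        recovered += kw
    return shed

loads = [
    ("parking_lot_lighting", 1, 40.0),    # least critical
    ("hvac_comfort_cooling", 2, 120.0),   # raise set point / shed
    ("office_lighting_full", 3, 60.0),    # cut to quarter levels
    ("data_center_cooling", 8, 200.0),
    ("servers", 9, 300.0),                # most critical, never shed
]
print(shed_loads(loads, deficit_kw=150.0))
# → ['parking_lot_lighting', 'hvac_comfort_cooling']
```

In a real controller this decision path would have to complete in milliseconds, as Gunther notes, so the priority table would be precomputed rather than evaluated on the fly.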
It's a "system of systems" engineering task that takes advantage of quick dispatch of inertia-less resources, basically loads, while generation resources with a thermal or mechanical time constant for response come online.
FierceSmartGrid: Are all these technologies mature and market tested?
Gunther: No. Three areas need attention. First, the building automation controller needs to be integrated with a microgrid controller in an efficient manner that precludes signal delays. When the system issues a command to shed load, that's got to happen in milliseconds not seconds or minutes. So we need to identify building automation technology that supports the right interoperability standards and the right behaviors in this scenario.
Second, the microgrid controller itself is pretty bleeding edge as well, even though it looks like there's a company or two with working products.
Third, the power electronic interfaces of the distributed generating devices -- whether that's the power electronic interface to storage, the power electronic interface to photovoltaics – must be configured so they don't trip offline as the system transitions from grid to microgrid.
Right now, the way the standards are written, those distributed energy resources are designed to detect even a momentary outage and trip offline, for the safety of utility field crews. But that's counterproductive to islanding and isn't technically necessary.
FierceSmartGrid: You've mentioned interoperability in both the 1.0 and 2.0 scenarios. In 1.0, it sounds as if interoperability relates to the notion of plug-and-play and avoiding vendor lock-in for market growth. In 2.0, it sounds as if interoperability relates to speeding the signal path. Is that correct?
Gunther: You're right, and the two are related. When you have different, proprietary systems involved, that requires many gateways so that elements of the system can talk to each other. But gateways introduce delays and many signals in the 2.0 scenario must move in milliseconds to enable, for instance, seamless islanding or to ensure supply-load balance at 60 hertz. That requires understanding what we call the non-functional requirements of the system -- one needs to know the performance metrics for that interface. The interval from the moment the controller signal says "I need max load shed" until that load is actually shed needs to be deterministic. We must know that timeframe precisely in order to figure out just how low frequency can go or how high the overload can go before the system trips off. In order to meet our islanding objectives we've got to pay close attention to interoperability and to those non-functional performance requirements. So standards in 2.0 aren't just preferred for the usual reasons. They are functional requirements.
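A back-of-the-envelope calculation shows why that shed latency budget is so tight. Using the classical swing-equation approximation for frequency decline after a supply deficit, one can estimate the time before an underfrequency trip. Every number below (inertia constant, base power, deficit, trip threshold) is an illustrative assumption, not a figure from the project.

```python
# Why load-shed latency must be deterministic and fast: a rough
# swing-equation estimate,  df/dt ≈ ΔP · f0 / (2 · H · S).
# All figures below are illustrative assumptions.

f0 = 60.0        # nominal frequency, Hz
f_trip = 59.3    # assumed underfrequency trip threshold, Hz
H = 2.0          # assumed aggregate inertia constant, seconds
S = 1.0e6        # assumed system base power, VA
dP = 0.2e6       # assumed sudden supply deficit, W (20% of base)

rate = dP * f0 / (2 * H * S)          # frequency decline, Hz per second
time_to_trip = (f0 - f_trip) / rate   # seconds until underfrequency trip

print(f"frequency falls at {rate:.2f} Hz/s; "
      f"load shed must complete within {time_to_trip * 1000:.0f} ms")
# → frequency falls at 3.00 Hz/s; load shed must complete within 233 ms
```

With these assumed numbers the whole signal path -- detection, command, gateway hops, actuation -- has a budget of a couple hundred milliseconds, which is why every gateway translation delay matters.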
The good news is that we think we already have a couple of standards that can support this use case, if the building systems natively support them. ASHRAE Standard 135-2008, aka BACnet, is sort of the core standard in the building automation space. To facilitate demand response based on a signal from the outside world, the building automation system should support OpenADR. So a couple of existing standards are appropriate in this use case; they just need to be more ubiquitously deployed and incorporated natively so that there is no undue delay in translation or in recognizing the inputs from various systems.
FierceSmartGrid: You mentioned fast-start diesel generators. Even if a client is committed to net zero energy, renewables and sustainability, some use of fossil-fuel generation remains necessary in a microgrid?
Gunther: That's right. In practice, these systems can't be made stable without a machine based on rotating mass. A machine with isochronous characteristics has to maintain the frequency base. We don't yet know how to create a system such as this without the inertia of rotating mass. Lots of expensive storage could provide apparent inertia, and we could write the software for the control systems to make it look like a rotating machine, but that would ruin the positive business case. There's a lot to be said for a rotating mass.
This is one of the major fundamental physical principles related to electric system stability that people don't really pay a lot of attention to. In the existing grid, we get a lot of stability from the fact that a lot of energy is stored in rotating masses -- either the rotating shafts of typical generators or in flywheels. There is thousands of times more energy stored in rotating mass than there is, for instance, in a filter capacitor of a traditional power supply or even a storage system. The impact of the inertia in those rotating masses is directly felt cycle by cycle, millisecond by millisecond. That stability is there when, at any instant in time, you flip on a switch. That instantaneous energy has to come from somewhere. And it's sound engineering to rely on that rotating mass before "stepping on the gas" to ramp up another energy source. So fossil fuel-driven generation remains a significant component here -- at least until we can figure out better ways to fulfill the stability a rotating machine provides.
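The "thousands of times more energy" comparison is easy to check with the standard formulas for kinetic energy in a rotor (E = ½Jω²) and energy in a capacitor (E = ½CV²). The rotor inertia, capacitor size and voltage below are assumed round numbers chosen only to illustrate the order-of-magnitude gap.

```python
# Comparing energy stored in a spinning rotor with a power-supply
# filter capacitor. All component values are illustrative assumptions.

import math

# Rotating machine: a modest generator rotor (assumed figures)
J = 50.0                       # rotor moment of inertia, kg·m²
omega = 2 * math.pi * 60       # 3600 rpm expressed in rad/s
E_rotor = 0.5 * J * omega**2   # kinetic energy, joules

# Filter capacitor in a traditional power supply (assumed figures)
C = 1.0e-3                     # 1000 µF
V = 400.0                      # DC-link voltage, volts
E_cap = 0.5 * C * V**2         # stored energy, joules

print(f"rotor: {E_rotor / 1000:.0f} kJ, capacitor: {E_cap:.0f} J, "
      f"ratio ≈ {E_rotor / E_cap:,.0f}x")
```

Even with these modest assumed values the rotor holds tens of thousands of times the capacitor's energy, and that reservoir responds instantly, cycle by cycle, without any control action.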
Scale plays a role here. In the case of a typical server with an uninterruptible power supply (UPS), there's no rotating machine. Power electronics alone can take over. But once you get to a certain size load, as you get into multiple servers in a large building, the transition from grid power to islanded power -- that instantaneous switch from no energy flowing to energy flowing -- becomes really hard to manage. That's where the engineering comes in. We have to figure out the optimal mix of power electronic-connected solar generation, fuel cell output and battery storage. Then, we determine the role of conventional, rotating mass, machine generation to assist that system to meet the project's overall objectives.
FierceSmartGrid: How is the host utility handling this new corporate campus?
Gunther: I'm surprised that the utility doesn't use its cooperation with the corporation and its net zero energy campus as a showcase, but the utility business case is not aligned with the customer's. For the utility, a net zero energy design at a major commercial customer's campus is actually more of a troublesome load. What's the incentive for the utility to provide the infrastructure and energy supply for what will eventually be a net zero load or to provide the means to accept that campus's export of excess renewable energy? There's very little incentive, at least based on the way our industry is currently regulated and priced, for a utility to get excited about this kind of a campus. A utility business model that is dominated by energy charges doesn't really work. So this project and the others that follow will continue to highlight the need for new business models and new regulatory models for the utility industry.