Dynamo Founder, CEO Jason Ethier Featured on HighDrive's The Energymakers
April 3rd, 2017 | Alex Rice
The Dynamo TurboCore was developed to expedite the deployment of our flagship product, an electric power solution for powering artificial lift systems in the oilfield—we will be piloting it this summer. Based on this product we will be able to show our partners how to design and develop their own systems around the TurboCore. Simultaneously we will be releasing our 1500 Series TurboCore, which, as the name implies, is a scaled-up version of the 700 Series and is capable of producing over 2x the power.
A turbogenerator, as the name implies, is a turbine that produces electric power. A turbogenerator is created by taking the baseline gas generator and bolting on several components. Hot pressurized air coming out of the TurboCore is converted into mechanical work through a second turbine, called the power turbine. This turbine is attached to a shaft which drives a generator. With the 1500 Series TurboCore, the shaft power output for this basic turbogenerator is approximately 15 kW.
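To make the arithmetic concrete, here is a rough first-principles sketch of power-turbine shaft power. All of the numbers below (mass flow, inlet temperature, pressure ratio, efficiency) are illustrative assumptions, not Dynamo's actual design values.

```python
# Illustrative sketch of power-turbine shaft power.
# Every input here is an assumed value for illustration only.

def power_turbine_shaft_kw(m_dot, t_in, pr, eta=0.80, cp=1005.0, gamma=1.4):
    """Shaft power (kW) from expanding hot gas through a power turbine.

    m_dot : mass flow rate, kg/s (assumed)
    t_in  : turbine inlet temperature, K (assumed)
    pr    : expansion pressure ratio (inlet / outlet), > 1
    eta   : isentropic efficiency (assumed)
    cp    : specific heat at constant pressure, J/(kg*K), air approximation
    gamma : ratio of specific heats, air approximation
    """
    # Ideal temperature drop for an isentropic expansion
    dt_ideal = t_in * (1.0 - pr ** (-(gamma - 1.0) / gamma))
    return eta * m_dot * cp * dt_ideal / 1000.0  # W -> kW

# Example: 0.15 kg/s of gas at 900 K expanded through a 1.8:1 pressure ratio
print(round(power_turbine_shaft_kw(0.15, 900.0, 1.8), 1))  # ~17 kW
```

With these assumed inputs the answer lands in the mid-teens of kilowatts, the same order of magnitude as the figure quoted for the basic turbogenerator; real output depends on the full engine match.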
Gas generators have a few key characteristics that drive their performance. One that is important here is the load line. The load line relates engine speed (RPM) to engine pressure and airflow—and ultimately total power output. As fuel is added to the engine, RPM increases, as does total power. Eventually the gas generator will reach peak power. The factors that limit fuel input and power output for a particular set of hardware include shaft speed limits, temperature limits, and compressor stall limits. The limiting factor in any particular situation depends on the full system design and the expected operating environment.
Adding a power turbine is tricky business. Just as turbocharging a traditional car engine changes how it behaves, adding a power turbine requires significant design considerations. This is because adding a power turbine adds flow restriction and back pressure to the gas generator. The increase in backpressure results in a shift in the operating line of the gas generator—ultimately impacting the gas turbine system performance and total available power.
When designing a gas turbine there are a lot of considerations to be made, from optimizing blade shape to optimizing the critical parameters of the gas generator for different applications. At Dynamo we've created a suite of tools to help us analyze gas turbine system behavior and find the proper match between compressors and turbines to get the desired performance and operating characteristics. An example of how the operating line is impacted by a power turbine is shown in the figure to the left, and represents one of the more complex analyses.
This set of tools allows Dynamo to quickly evaluate different applications and use cases for the TurboCore, including the implementation of several independent loads and their effect on system performance. Using these tools, the Dynamo development team has repeatedly demonstrated the ability to rapidly develop solutions around the TurboCore, with design-to-prototype validation times under nine months.
Over the past few months the Dynamo Team has been working on deploying our first generation TurboCore 700, a turbine engine platform for the next generation of remote power products. Our platform is designed for three things: reliability, flexibility, and modularity. There are several ways to interface with our platform, from our software & controls API to interfacing with the physical hardware of our product—and we’ll be working to make this as seamless for designers as possible.
The reason we are doing this is simple. Even with all of our resources and talent, we know that there are experts out there who know far more about their customers' needs than we will, and we want to empower them to leverage our product. Every week we get a request to adapt our hardware for a specific application, including water desalination, flameless heat, and residential combined heat and power. While we wish we could tackle all these problems, we can't. As turbine experts, we can teach you to push the envelope of imagination with our products.
At Dynamo we are working on developing a world class turbine platform. We are focusing our in-house design efforts on a unique generator solution built around this technology; we also provide these components as kits for our development partners. But at the core of the platform is our gas generator—it’s the heart of the engine that drives the performance & flexibility of the unit. The gas generator is designed to run across a broad range of environmental conditions, with a wide range of fuels, and to do so reliably for thousands of hours—allowing end users the freedom to employ the turbine when and where they need it. The turbomachinery is interchangeable, allowing us to provide different gas generators sized for specific applications, at a cost that is comparable to traditional reciprocating engines found in the market today.
The gas generator alone only produces hot, pressurized air; additional components, such as heat exchangers, a generator, or auxiliary turbines, are needed to put it to work. We add these ourselves in building our generator product. To facilitate the mechanical design, we offer a Hardware Development Kit, which contains CAD models, drawings, specs, and reference designs for products we have worked on.
Adding components to an engine will affect its performance, and we realize that it may be difficult for those without a turbomachinery background to understand the effects. To make it easier to work with the TurboCore platform, we offer training and certification programs that teach you how to integrate this unique engine into a variety of products. In some respects, providing this information is a double-edged sword. We are describing to potential competitors how we engineer and build our products, and we are giving away a wealth of applications we could build ourselves, but we are taking on this risk both to get our technology to those who need it and to motivate ourselves to continue to provide higher performance and more innovative turbines.
Beyond the hardware platform, we also offer a software platform and API, which allows end users to interface directly with our control system. We build the control system to ensure that the engine can be started and operated reliably and safely; this is no trivial task, considering the broad range of fuels we expect to use, and the broad range of environments we will have to operate in. However, our goal is to also provide end users with the ability to configure and operate the device as they see fit. We are still working on making this system more robust, and anticipate releasing an API in a year.
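Since the API is still unreleased, here is a purely hypothetical sketch of what a client for such a control system might look like. Every class and method name below is invented for illustration; none of it reflects Dynamo's actual interface.

```python
# Hypothetical sketch of a turbine control API client.
# All names here are invented for illustration; the real API is unreleased.

class TurbineClient:
    def __init__(self):
        self.state = "stopped"
        self.setpoint_kw = 0.0

    def start(self):
        # A real control system would sequence ignition, spool-up, and
        # health checks here; this sketch just flips the state flag.
        self.state = "running"

    def set_load(self, kw):
        # Guard against commanding load before the engine is up.
        if self.state != "running":
            raise RuntimeError("engine must be running to take load")
        self.setpoint_kw = kw

    def status(self):
        return {"state": self.state, "setpoint_kw": self.setpoint_kw}

client = TurbineClient()
client.start()
client.set_load(12.5)
print(client.status())
```

The point of such an interface is the one made above: the platform owner handles the safety-critical sequencing internally, while end users configure and operate the device through a narrow, well-defined surface.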
This is our brief plan on getting TurboCores into as many hands as possible, and we are hoping those of you who have the will and the imagination can work with us to bring this game changing technology to the marketplace. To demonstrate this approach, the next post will describe a reference design we put together over the summer for a customer.
Building a platform technology is more than just declaring it as such. Products do not exist as platforms for the sake of being a platform; they must have some intrinsic value which can be built upon and which ultimately enables new and superior products and services. Platforms must also have clear points of stability and clear points of change. Lastly, platforms must have well-defined processes for enabling collaboration between different parts of the ecosystem, be it internal manufacturing teams or application developers, that clearly articulate how to best leverage a platform's strengths.
The purpose of developing a platform, as indicated in the last post, is predicated on either reducing cost or building value within an ecosystem. Regardless, the underlying product itself must have some native value which can be leveraged and grown. In the previous post, vehicle chassis were a strong example because they were both the foundation for cost reduction and valuable in themselves, providing the structure that ultimately housed the vehicles built from them. In the Apple example, the mobile hardware is the platform, on top of which new applications could be built that uniquely leveraged mobility. A platform with a unique value proposition fosters a strong ecosystem—tautologically, a strong platform begets itself.
Because the platform is really a foundation for everything that is built on top of it, there must be features of the platform which are immutable, delivering a core set of values. This can range from the physical footprint (think x86 chipset platform) to how it is used (think Facebook). There must also be features that can be modified or tweaked in a controllable manner to allow users of the platform to adapt it to their own needs. Oftentimes these modifications exist beyond the physical (or digital) product produced by the source company. In the example of vehicle chassis, variations include the physical pieces of equipment added to create the final product, from the engine and drivetrain installed on the chassis to the trim on the interior. This alludes to an early channel for platform technology, namely the OEM. OEMs take core pieces of hardware, like CPUs or engines, and build a variety of products around them, such as servers and PCs, or trucks and generators—however, the interface between the core technology and the whole products remains the same. The customer ultimately derives value from consuming the whole product for their specific needs, but the platform is what provides the foundational value for those end products, be it computational power or horsepower.
The last key is defining how the platform interacts with the ecosystem around it. It should be clear that the interface sits at, and dictates, the boundaries of what is core to the technology and what is mutable. In software this can be as simple as providing an API or SDK; hardware standards are more complex, with physical properties that must be taken into account. Regardless, these interfaces define the strength and flexibility of a platform. If the interface is poorly defined or does not allow partners to leverage the underlying technology, the platform will wither. Part of understanding the interface is understanding how to communicate it to ecosystem partners; depending on the complexity of the underlying platform, a simple API may be sufficient, but more sophisticated systems may require training, education, and joint technology development. Partners themselves may need to be qualified, or their products screened, before being released to the world. Building a robust process for defining that interface is an exercise in trust between the technology provider and the ecosystem, and is an integral part of building a world-class platform.
All organizations face limits in resources, but the envelope of the possible can be drastically expanded by building on a powerful platform. The cost of development and manufacturing can be reduced (you do not need to re-invent the wheel), and an ecosystem can be developed with partners, customers, and collaborators who will drive adoption and innovation in a product.
Platform technologies are an abstraction of a technology or product that allows other systems, products and processes to be built on top of it. Platform technologies can be broken down into two types: external platforms and internal platforms. Internal platforms are very common, and exist as a way to reduce manufacturing costs. External platforms are customer and partner facing, allowing for ways to build value beyond the confines of the business. Said another way, platforms either shrink the pie [cost] or grow the pie [value], and in a duplicative turn of phrase, can both grow and shrink the pie simultaneously.
Internal platforms are designs implemented to reduce the cost of several products built by a single company. Examples include the VW vehicle chassis platform and Apple's iPhone/iPod architecture.
VW's vehicle chassis platform allows the company to use the same underlying components to build several models of car (think Passat, Jetta, etc.) utilizing a fixed set of engines (1.4L, 2.0L, 2.4L, etc.) across several brand lines (VW, Seat, Audi). The result is reduced cost, reduced employee training, and higher quality products; at the same time, the components and features that truly segment the products and customers can be tailored to drive value for the business.
If you take a look at the last few generations of Apple handheld devices, the iPhone and iPod specifically, you'll notice a few things. A lot of the materials and components are the same, and they are arranged the same way on the device. Functionally the iPhone has superior features, including the phone itself, a better camera, etc. There are also different tiers of iPhone and iPod, typically with increasing levels of storage, that further differentiate the products. Beyond shared design techniques, this has a lot to do with using a shared manufacturing platform to build different products at different price points.
External platforms are designs implemented to grow ecosystems around a product and to layer value on top of the products built by the initiating company. To use parallel examples, these include Lotus' rolling chassis platform and, of course, Apple's iTunes.
Lotus is a very different animal in the automotive industry, well known for performance cars and race heritage. Since they do not have the same production volumes as a company like VW, they have used their chassis platform in a different way. Most recently they lent the platform for the Lotus Elise—a solidly designed two-seater chassis born on the racetrack—to other manufacturers as what they called a "rolling chassis," meaning it had the frame, suspension, and wheels, but no engine, bodywork, or other accoutrements that define a vehicle. These chassis went out to GM, Citroen, and most notably Tesla for their Roadster. The result was that manufacturers were able to layer value onto the underlying platform, and Lotus could leverage its investment in design to sell more vehicles than it could have by itself.
The platform built by Apple using iTunes and the iPhone is another example of allowing others to leverage the strengths of a product to build new and unique products and services. This external platform has become a significant addition to the underlying hardware, and is often a point of differentiation with Apple's competitors. Furthermore, the applications built on the Apple platform extend far beyond Apple's core competency, and control, yet Apple is still able to monetize them by taking a percentage of sales made through the App Store.
There is actually a third type of platform not discussed here; it is related to both internal and external platforms, and is built by a consortium of customers and vendors. It is called a standard—a way to take a set of products and technologies and build a common interface that allows greater value creation for the ecosystem as a whole. Common standards include IEEE 802.11, which gives us Wi-Fi, and the 19-inch rack standard to which nearly every server in the world is physically sized. Whether platforms are built to support the internals of a business, build layers of value above a business, or standardize an industry, it is clear that platforms are key to driving business value.
In one of our original blog posts (What is Dynamo Series, pt 3), we alluded to the innovation being developed here at Dynamo—that we were developing a platform technology, based on turbo-machinery, to revolutionize small power products. We are going to spend the next few months showing the strengths of this platform, but first we will give you a little background on where this technology came from.
Dynamo was founded by two turbine engineers who had looked long and hard at the status quo for building turbines. We had firsthand experience working on the assembly lines for aircraft jet engines (in one of the first factories in the US to build jet engines, we might add). And we can confirm that all of your assumptions around building these turbines are probably true. Modern manufacturers work with super metals with esoteric names, like Inconel, Rene, and Waspaloy. They have machines that are 20 feet tall and can cut complex dovetails into solid disks of nickel to better than 0.0001" of accuracy (we call that a tenth in the industry), and they have measurement tools to match. When you are pushing the envelope of engine performance, you need every tenth you can get. There is significant technology innovation being developed as well to improve manufacturability and product quality, from a machine that friction-welds shafts at high speed to novel ceramic matrix composite forming technology. A lot of work goes into building these parts; it's not uncommon for over 90% of the raw stock metal to be machined away, leaving only 10% in the finished part (a measure known in the industry as the buy-to-fly ratio).
As amazing as this sounds, we also learned how 20th-century the manufacturing process was. For a lean assembly line, there was not much of a line. Assemblies were put together by hand on mobile carts; the carts were moved around the factory floor to stations, where one type of work or another would be performed (e.g., welding, fastening, plumbing). As often as not, engines would move back and forth between stations depending on the exact engine being built. The average time to assemble a small engine was two months.
On the parts level of manufacturing, there were other things that didn't strike us as terribly modern. We called our business a "lean pull" manufacturing business, but the reality was that we built components in batches, and "lean pull" just meant we kept inventory in a holding pattern depending on what the assembly team told us to deliver in the next two weeks. We also did not have entirely fungible labor, and would spend a good deal of our planning time figuring out which machinist could make which parts on the machines we had working that day. This, combined with a metrics-driven culture, resulted in some creative accounting. Sometimes we would build extra inventory when times were slow, just to keep labor working; I remember a few times we would "hold" unsalvageable components that didn't pass their drawing checks for a few weeks until we could "hide" the single reject in a large batch of inventory so it wouldn't impact our metrics for that week. A large part of this seemed to stem from the fact that one out of ten of any batch would need to be re-worked at some point, because the tolerances required by the parts were not met by the manufacturing process.
When there wasn’t a standard way to tell if a part was not conforming to the manufacturing requirements, we had to take the specimen to Al. Al was a living library with 30+ years’ experience making components for turbines—not an engineer by training, but a master manufacturer. His workspace, on the second floor, was filled with rejected components. Every one or two weeks I would bring a component to Al, show him the drawings and we would describe why we thought there was a problem. Al would gnaw his pen (which he also used to mark up the drawings), rub his brow and ask you to leave the part on his desk. You were to return the next day to hear his verdict on whether the part should be kept, reworked, or scrapped—and you took his word as gospel.
By contrast, I want to describe another engine factory for you; our founding team had the opportunity to tour a truck engine factory in North Carolina that was similar in scope to the turbine factory we worked at. This factory converted raw inputs into fully built, tested, and shipped engines in a week, at a rate of one engine every 5 minutes. While we did not have the same hands-on experience as we had at the turbine manufacturer, the differences were immediately clear. There was, for one, an assembly line! Engines would move down a conveyor belt; each station had a 5-minute step before an engine would move to the next station.
Even with this strict timing and specialized stations, each engine was built to order, with seamless inventory management in the background. Be it a different cam cover or turbocharger, the inventory was pulled to the specific station and refilled as local supply ran low. Part of this was achieved with simple robots that delivered parts by following a set of colored lines on the ground from one side of the factory to the other.
What really inspired us, however, was that the diesel company was also building tens of thousands of small turbines as part of this process. Turbochargers are not the same as aircraft jet engines by any stretch of the imagination, but they do pack a lot of technology into a small package. They have high-speed bearings that must survive the constant loading and unloading of a diesel engine, and they have many little features that contribute to performance and life. When we compared the diesel manufacturer to our experience with turbines, we realized something: the products these two companies were building were for very different markets. By necessity, the turbine had to be built with critical alloys, exacting requirements, and a high rejection rate—partly because they are high-performance products, and partly because so few were built a year (<500). In some ways, each engine was its own special production. The diesel units, on the other hand, are built for a cost-competitive market where over 60,000 engines are built a year—and the manufacturing learning curve is much faster with many more samples to work with.
But this also opened our eyes at Dynamo. After seeing these two models, we asked the question: "What if we built turbines the way they build diesel engines?" The result is a new way of thinking about the supply chain, and about how the engine is built and assembled. It's a new way to think about what the final product will cost, and how many we can build in a year. The other challenge is a market challenge; if we want to build 60,000 turbines, we have to find someone who wants to buy them. Luckily, in the small power market there are always people looking for something more reliable, more fuel flexible, and smaller than what they have today. In order to reach all these customers, however, we had to also think of our product as a platform that could be easily adapted as needed for unique applications.
Here at Dynamo we’ve taken a look at this problem of fuel flexibility, and built a power solution that is truly fuel agnostic. While the product we are building requires a lot of engineering and years of experience (our technical advisory team spans 100+ years of combined turbine development experience), the solution itself has several key features that allow us to tackle this challenging technical problem.
The first thing we decided was to build a gas turbine engine, as they are renowned for their fuel flexibility. In many ways a gas turbine is just a set of compressors and expanders set around a combustion tube. As long as a combustion chamber can be made to reliably burn fuel, a turbine can be built around it.
We then developed a combustion chamber that can accommodate a wide range of BTU contents. The challenge here was to ensure complete combustion and low pressure loss for a variety of fuel mixtures at both startup and steady-state operation. The combustion chamber that we have developed has achieved all of this.
Although the combustion chamber is great, we do not rely on it 100% to ensure the reliability of our engine. To that end we’ve included a specialized fuel conditioning system that is closely monitored by our supervisory control system. The fuel conditioning system serves as a buffer between the wellhead and the combustion chamber, such that the fuel quality does not vary drastically over short periods of time and reduces the amount of work needed for the control system to regulate fuel flow.
Deploying our product in the oilfield adds additional complexity. As discussed above, on the fuel supply side the consistency of the fuel can vary significantly over a few hours, and it is challenging to quantify that fuel a priori. Additionally, on the demand side, pump jacks and other field equipment have a variety of duty cycles which change the amount of power required at any given moment. To meet these needs, our solution has to be more than a combustion system. It is tasked with the double duty of converting a variable energy input [fuel] while trying to meet varying output demands—all within very short time frames. Smooth operation is achieved with several features, including a proprietary control system and a sophisticated custom power electronics package.
We can talk all day about how we do things, but our customers care about results. In the lab to date we have verified the ability to operate on fuels ranging from 500 to 2,045 BTU/scf in a single unit. Across this range we were able to start the engine, bring it to power, and sustain operations as the fuel content was varied. We were also able to do this with liquid water injected into the fuel lines, at a water cut of up to 80% by mass. This effective range and the ability to handle liquids in the combustion system show that we can sustain combustion in virtually any oilfield. A more technical summary can be found in our whitepaper here.
With all this discussion of fuel flexibility, we would be remiss if we did not talk about what makes fuel flexibility difficult. There are many factors that affect how something burns, such as whether the fuel is a liquid or a gas (or even a solid), the structure of the underlying hydrocarbons, the amount of oxygen present, and the geometry of the flame zone. These are the big ones, but there are many other smaller factors which I will not have the chance to dig into here.
[Figure: Automotive liquid fuel injector]
The state of a fuel has a lot to do with its combustibility. Ultimately, for fuel to burn it must mix with oxygen (or some other oxidizer). Gases mix very well and very evenly, which makes them easier to control in the combustion process. Liquids, on the other hand, do not mix well, and often need to be premixed (as with a carburetor) or aerosolized into little droplets (as with a high-pressure fuel injector). These processes take much more tuning to get right. Lastly, there are solids, which inherently do not mix well. Solids usually need to be pulverized into little bits, much like aerosolizing a liquid, to be good combustion candidates. More often than not, solid fuels are mixed with solid oxidizers in fixed proportions to enable more complete combustion; this is most commonly seen in gunpowder or the APCP found in solid rocket motors.
With fuel, you need oxygen to react with the hydrocarbons to enable combustion. However, it may not be intuitive that having some of each is not enough—even with a spark, there may be no flame. This is a phenomenon known as flammability limits. Generally combustion is most efficient when there is just sufficient oxygen to burn all of the fuel; if the amount of fuel is cut in half, the mixture ceases to combust properly. Likewise, if the amount of oxygen is cut to a third, the mixture will fail to combust. To contend with this, many modern engines have sensors to balance and meter out fuel to match the air in the system—however, even with careful tuning, poorly mixed fuels may have spots that lie outside the flammability limits.
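The flammability-limit idea can be sketched as a simple lookup: each fuel has a published lean and rich limit, expressed as volume percent of fuel in air, and a mixture outside that band will not sustain a flame. The limits below are commonly cited approximate values.

```python
# Check whether a fuel/air mixture lies within published flammability
# limits (volume % fuel in air). Limits are commonly cited approximate
# values; treat them as illustrative, not design data.

FLAMMABILITY_LIMITS = {
    "methane":  (5.0, 15.0),   # (lower limit %, upper limit %)
    "propane":  (2.1, 9.5),
    "hydrogen": (4.0, 75.0),
}

def is_flammable(fuel, fuel_vol_pct):
    lower, upper = FLAMMABILITY_LIMITS[fuel]
    return lower <= fuel_vol_pct <= upper

print(is_flammable("methane", 9.5))   # near stoichiometric -> True
print(is_flammable("methane", 2.0))   # too lean -> False
print(is_flammable("methane", 20.0))  # too rich -> False
```

Note how wide hydrogen's band is compared to methane's; this is one reason fuel composition matters so much to a combustion system.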
As you can imagine, combustion is a very complex process. Energy is released as complex hydrocarbons are reduced to simple carbon dioxide and water. The steps to get there can be rather involved: chemical bonds between atoms are broken and reformed during the combustion process, with many intermediate molecules formed along the way. The result is that combustion takes time, on the order of several milliseconds. This isn't a lot when you consider that a car engine revs up to 7,000 RPM; at that speed each revolution takes only about 8.5 ms, and the full cycle of compression, combustion, expansion, and evacuation of the gases must fit in two revolutions. The result for systems that operate on these time scales is that combustion becomes hard to control, which can prevent it from coming to completion. Generally, however, simpler hydrocarbons burn "faster" than more complex ones.
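To make those time scales concrete, here is the plain arithmetic (the only assumption is a standard four-stroke cycle, where each stroke is half a crank revolution and a full cycle takes two revolutions):

```python
# Back-of-envelope timing: how much time does combustion get in a
# reciprocating engine at high RPM?

def ms_per_revolution(rpm):
    return 60_000.0 / rpm  # 60,000 ms per minute

rpm = 7000
rev_ms = ms_per_revolution(rpm)   # one crank revolution
stroke_ms = rev_ms / 2            # each stroke is half a revolution
cycle_ms = rev_ms * 2             # full four-stroke cycle = 2 revolutions

print(round(rev_ms, 1))     # ~8.6 ms per revolution
print(round(stroke_ms, 1))  # ~4.3 ms per stroke
```

With combustion chemistry taking several milliseconds, the power stroke at 7,000 RPM leaves very little margin, which is exactly why fast-burning, well-mixed fuels are easier to control.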
To make this even more complex, different hydrocarbons carry different amounts of energy per unit volume or weight. To describe this effect, engineers developed the Wobbe Index, which allows for relatively simple mathematical scaling of fuel injection rates for a given fuel—assuming that fuel is known ahead of time. Unfortunately, this makes simple fuel metering systems, like a carburetor, a poor solution when the fuel is unknown. The ability to handle different fuels requires a more advanced fuel delivery system that is capable of providing fuel at different rates.
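The Wobbe Index itself is a simple formula: the higher heating value of the gas divided by the square root of its specific gravity relative to air. Fuels with similar Wobbe Index deliver similar energy through the same orifice at the same supply pressure. The heating values and specific gravities below are commonly tabulated approximations.

```python
import math

# Wobbe Index: heating value scaled by the square root of specific
# gravity relative to air. The input values are commonly tabulated
# approximations for pure gases.

def wobbe_index(hhv_btu_scf, specific_gravity):
    return hhv_btu_scf / math.sqrt(specific_gravity)

methane = wobbe_index(1012, 0.554)   # ~1360 BTU/scf
propane = wobbe_index(2516, 1.52)    # ~2040 BTU/scf

print(round(methane), round(propane))
```

Note that propane carries about 2.5x the energy of methane per cubic foot, yet its Wobbe Index is only about 1.5x higher, because its higher density partially compensates at a fixed orifice; this is the scaling that a fixed-jet metering system implicitly relies on, and why it breaks down when the fuel is unknown.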
For reciprocating engines, an important factor to consider is the "knock" rating of the fuel. It is essentially a measure of when a fuel will self-ignite (assuming it is heated by the compression stroke of an engine). Reciprocating engines become more efficient and more powerful with more compression; however, the fuel limits the amount of compression that can be practically achieved. To further complicate matters, the flammability limits of fuels change as they are compressed. The result is that adding fuel flexibility often comes at the cost of engine performance and emissions. As a real-world example of this complication, many diesel manufacturers are trying to build engines that can run on both natural gas and diesel. As it turns out, natural gas has a much higher anti-knock index rating than diesel—due to this quirk of nature, manufacturers have developed bi-fuel generators. In many of these solutions the products have to run on a 50/50 diesel and natural gas mixture, such that the diesel is compressed to ignition, burns, and in turn ignites the natural gas in the combustion chamber. While this is a good technique for reducing diesel dependency, it is not true fuel flexibility.
[Figure: From Lefebvre & Ballal, "Gas Turbine Combustion"]
The last thing that really drives combustion is the physical location where combustion takes place. The geometry, or shape, drives the local mixing of fuel and air; it also provides the combustion constituents with the time and space to burn to completion. The combustion chamber and its aerodynamic interactions with the rest of the engine define a lot of how a combustion process performs.
These are only a sprinkling of the characteristics to consider when designing combustion systems. As one can imagine, all of these things must be taken into account when trying to build a fuel flexible system.
Last time we looked at the major contributors to the cost of fuel—which is generally tied very closely to the marginal cost of producing a kWh. If we take a look at the major factors playing into the cost of energy, we can pretty easily determine that the fuel cost per kWh is a pretty simple function:
Pretty simple; but both costs have complex components that can cause them to range widely. Let’s take a look at the cost of fuel.
I think the above is pretty self-explanatory—the commodity price of a fuel is what you pay for it at your local gas station. Getting your fuel to site obviously introduces a level of cost. Your delivery company will charge you more if they have to go significantly off the beaten path, or if they have to use special equipment to get to you. Similarly, if your fuel isn't just plain old gasoline, but requires special fuel tanks (CNG) or handling expertise because it may be hazardous (methanol), there is an additional cost associated with that as well. The factor ε on the end is a coefficient to capture things like taxes, discounts, and other miscellaneous costs that should be taken into account.
Lastly, and most importantly, we divide the cost of fuel by an “Availability” factor. Availability approaches zero as a fuel becomes scarcer, and rises above 1 as it becomes bountiful. A value of 1 is the nominal case relative to the other parts of the equation.
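Putting these pieces together, a hedged sketch of the delivered-fuel-cost relationship (the original equation was an image, so the names and structure here are my reconstruction from the surrounding text):

```python
def delivered_fuel_cost(commodity_price: float,
                        delivery_cost: float,
                        handling_cost: float,
                        epsilon: float = 1.0,
                        availability: float = 1.0) -> float:
    """Realized cost of fuel at site, per unit of fuel.
    epsilon captures taxes, discounts, and miscellaneous adjustments;
    availability scales the cost down when fuel is bountiful (> 1)
    and up toward infinity as it becomes scarce (-> 0)."""
    return (commodity_price + delivery_cost + handling_cost) * epsilon / availability

# A fuel that costs $3 at the rack plus $1 to deliver costs $4 on site;
# if regional availability is cut in half, the realized cost doubles.
print(delivered_fuel_cost(3.0, 1.0, 0.0))                    # -> 4.0
print(delivered_fuel_cost(3.0, 1.0, 0.0, availability=0.5))  # -> 8.0
```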
In some respects, how this factor plays out is familiar. If all the gasoline facilities in your region are offline, there is no amount of money you can spend to buy a drop of gas—as happened on the East Coast during Hurricane Sandy. Conversely, if there is too much fuel, it is wasted, burned off, or must be shipped or stored elsewhere at a cost—and the price of the fuel will plummet.
Looking at this another way, you can consider how the supply and demand for a fuel will influence its price—economists call this the elasticity of demand. When gasoline gets more expensive, by 10% for example, demand decreases by about 2.6%. The relationship also runs in the other direction: a 10% increase in demand, for example, may increase prices by 38%. What does this mean for us as modelers of future fuel prices? It means that we can use the availability factor to analyze the risk associated with fuel prices (knowing how things can change) and try to take it into account when comparing different fuel sources.
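The 2.6% figure above corresponds to a short-run price elasticity of demand of roughly −0.26. A minimal sketch of the point-elasticity arithmetic, with the elasticity value taken from the text:

```python
def demand_change(price_change: float, elasticity: float) -> float:
    """Approximate fractional change in quantity demanded for a given
    fractional change in price, using a constant point elasticity."""
    return elasticity * price_change

# Gasoline example from the text: a +10% price move with an
# elasticity of about -0.26 cuts demand by roughly 2.6%.
print(demand_change(0.10, -0.26))
```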
Efficiency modelling is a whole different ball game, and it changes from generator to generator. Each generator has an operating point where it is most efficient, but the loads they power do not always allow them to operate at that point. The resulting duty cycle can significantly influence the efficiency of the underlying system.
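To illustrate the duty-cycle point, the realized efficiency of a generator is a fuel-weighted average over its operating points. The duty cycle and part-load efficiencies below are invented for illustration, not taken from the post's model:

```python
def realized_efficiency(duty_cycle):
    """Fuel-weighted average efficiency over a duty cycle.
    duty_cycle: list of (fraction_of_time, load_kw, efficiency_at_load).
    Fuel energy burned at each point = time * load / efficiency, so
    realized efficiency = total energy out / total fuel energy in."""
    energy_out = sum(t * load for t, load, _ in duty_cycle)
    fuel_in = sum(t * load / eff for t, load, eff in duty_cycle)
    return energy_out / fuel_in

# Invented example: a generator that is 33% efficient at full load
# (10 kW) but spends 80% of its time lightly loaded (2 kW) at 20%.
cycle = [(0.2, 10.0, 0.33), (0.8, 2.0, 0.20)]
print(round(realized_efficiency(cycle), 3))  # well below the 33% nameplate
```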
Without going into too much detail on efficiency modelling, I’m going to jump into a couple of comparisons for various systems using the information we have here. I’ve selected three technologies routinely used in the oil and gas industry in different types of power applications: small diesel generators, fuel cells, and thermoelectric generators (TEGs). Each is used in different applications and has different capital costs (the impact of which on LCOE is not thoroughly analyzed here). The following chart compares the efficiency of each generator, the relative cost of the fuel on a per-kWh basis as delivered, and the marginal dollar cost to generate a kWh with each generator. A table of the inputs for this chart is attached at the end.
|Figure 1: Select Cost of Energy Comparison for common Oilfield Generators|
The diesel generator comes out where you would expect it to, with fuel costs close to 40 cents per kWh. The supply chain is relatively simple, with non-road diesel coming in at roughly $4 per gallon, and shipping contracts adding only a marginal cost. Efficiency can range from the mid-twenties to the low thirties in percent; I picked 30% efficiency since most of these generators run well below their prime rating in the field.
Fuel cells are often chosen for small power applications (<1kW), where reliability is essential, and where the cost of fuel is secondary to the cost of maintenance and the cost of downtime. While many fuel cells are designed to run very efficiently on propane or natural gas, the fuel reformers found in most of these fuel cells require the fuel to be highly refined, beyond the standards found in more traditional applications. In this case, we built the model around a European fuel cell that is finding acceptance in the US O&G market. The fuel cell operates on highly refined methanol, which can only be provided by the manufacturer in Europe. (Having a single-source supplier imposes its own supply risks—that availability factor I described earlier, which I did not include here.) The result is a very high cost of energy; however, for small applications the cost of fuel is dwarfed by the value of reduced maintenance and downtime.
TEGs are also commonly found in the O&G industry, powering very small loads (<100W); again they are used where reliability is key, although they are very large and very expensive. As opposed to fuel cells, TEGs have very low efficiencies (3–5%), but they have the distinction of being able to run well on very poor quality fuel (basically any heat source will do). In many cases, TEGs operate on pre-pipeline-quality natural gas, often found in upstream applications. In this case the source fuel is plentiful and cheap—its face value is often below the cost of commoditized natural gas, as it has yet to be transported to market. In many cases the operator doesn’t even have to pay the leaseholder for the fuel used, which equates to an additional 10% discount on the fuel.
As this example illustrates, the cost of operating the very inefficient thermoelectric generator is 1/10 the cost of operating a fuel cell, and half the cost of operating a diesel generator. Unfortunately, thermoelectric generators do not scale up well in size or value much above the 100W mark.
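The comparison can be reproduced with a small sketch. All fuel prices and efficiencies below are illustrative assumptions chosen to be loosely consistent with the figures quoted in the text; they are not the post's actual table inputs:

```python
KWH_PER_MMBTU = 293.07  # standard conversion: 1 MMBtu ~ 293 kWh thermal

def marginal_cost_per_kwh(fuel_price_per_mmbtu: float,
                          efficiency: float) -> float:
    """Marginal $/kWh: delivered fuel price per MMBtu converted to
    $/kWh-thermal, then divided by conversion efficiency."""
    return fuel_price_per_mmbtu / (KWH_PER_MMBTU * efficiency)

# Illustrative inputs (assumptions, not the post's table):
# - diesel genset: ~$35/MMBtu delivered (~$4/gal), 30% efficient
# - methanol fuel cell: very costly single-source fuel, 35% efficient
# - TEG: cheap pre-pipeline field gas, ~4% efficient
generators = {
    "diesel genset":      (35.0,  0.30),
    "methanol fuel cell": (205.0, 0.35),
    "TEG (field gas)":    (2.3,   0.04),
}
for name, (fuel, eff) in generators.items():
    print(name, round(marginal_cost_per_kwh(fuel, eff), 2))
```

With these assumed inputs the TEG lands near half the diesel generator's marginal cost and roughly a tenth of the fuel cell's, matching the ratios described above.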
Table 1: Inputs to Cost of Power Model
There are three primary considerations when selecting a prime mover that ultimately drive the Levelized Cost of Energy: capital cost, maintenance, and fuel expenditures. The first two are topics for a later time; the last, fuel expense, is our focus today.
There are several significant components that play into fuel expenses (mapped out below). Fundamentally, the fuel expense is a function of the realized cost of fuel to the user and the realized efficiency of the underlying generator.
A logical first step in reducing the cost of energy would be building a more efficient generator, such as a fuel cell or a complex-cycle engine. However, fuel efficiency is largely set by the state of the art in technology, and will change only incrementally with time. There are some minor enhancements that can be made with energy storage to keep generators operating at their peak operating point, but making systems more efficient is very expensive and the development effort takes a long time.
A cost-effective alternative for driving down costs is to have the engine operate on the lowest-cost fuel of the moment. To this end, a variety of engines have been developed to work on specific, singular fuels. However, in the modern environment, there is pressure on engine manufacturers to build engines that can operate on a variety of fuels. The physical phenomena that drive combustion make this a technical challenge, but a variety of solutions exist on the market to meet the challenge of reducing costs—each with its own strengths and weaknesses.
|Source: Seeking Alpha, Tristan Brown|
Most shifts in fuel sources are driven by an undersupply or oversupply of different types of fuel. In the past, for example, the price of natural gas would often track the price of oil for an equivalent amount of energy. However, oil fracking and US natural gas export restrictions have opened a spread of roughly 6x in the cost of energy between oil and natural gas. Needless to say, many companies (Dynamo included) are looking to leverage the lower-cost fuels in a reliable manner.
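The 6x spread above is easy to check on an energy-equivalent basis. The barrel-to-MMBtu conversion is a standard figure; the spot prices in the comment are illustrative assumptions, not quotes from the post:

```python
MMBTU_PER_BARREL = 5.8  # standard energy content of a barrel of crude oil

def energy_cost_spread(oil_per_bbl: float, gas_per_mmbtu: float) -> float:
    """Ratio of the cost of a unit of energy from oil vs. natural gas."""
    return (oil_per_bbl / MMBTU_PER_BARREL) / gas_per_mmbtu

# Illustrative prices (assumptions): $87/bbl oil is ~$15/MMBtu,
# against $2.50/MMBtu natural gas, giving a spread of about 6x.
print(round(energy_cost_spread(87.0, 2.50), 1))
```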
Next time we will look at a few corner scenarios that illustrate how various fuel expense conditions can drive the cost of energy.