The TurboCore Platform Series: What Drives Platform Development

All organizations face limits in resources, but the envelope of the possible can be drastically expanded by building on a powerful platform.  The cost of development and manufacturing can be reduced (you do not need to re-invent the wheel), and an ecosystem can be developed with partners, customers, and collaborators who will drive adoption and innovation in a product.

Platform technologies are an abstraction of a technology or product that allows other systems, products, and processes to be built on top of it.  Platform technologies can be broken down into two types: internal platforms and external platforms.  Internal platforms are very common, and exist as a way to reduce manufacturing costs.  External platforms are customer- and partner-facing, allowing value to be built beyond the confines of the business.  Said another way, platforms either shrink the pie of costs or grow the pie of value, and many do both at once.

Internal platforms are designs implemented to reduce the cost of several products built by a single company.  Examples include the VW vehicle chassis platform and Apple's iPhone/iPod architecture.

VW’s vehicle chassis platform allows the company to use the same underlying components to build several models of car (think Passat, Jetta, etc.) utilizing a fixed set of engines (1.4L, 2.0L, 2.4L, etc.) across several brand lines (VW, SEAT, Audi).  The result is reduced cost, reduced employee training, and higher quality products; at the same time, the components and features that truly segment the products and customers can be tailored to drive value for the business.

If you take a look at the last few generations of Apple handheld devices, the iPhone and iPod specifically, you’ll notice a few things.  Many of the materials and components are the same, and they are arranged the same way on the device.  Functionally the iPhone has superior features, including cellular connectivity and a better camera.  There are also different tiers of iPhone and iPod, typically with increasing levels of storage, that further differentiate the products.  Beyond shared design techniques, this has a lot to do with using a shared manufacturing platform to build different products at different price points.

External platforms are designs implemented to grow ecosystems around a product, layering value on top of the products built by the initiating company.  To use parallel examples, these include Lotus’ rolling chassis platform and, of course, Apple’s iTunes.

Lotus is a very different animal in the automotive industry, well known for performance cars and racing heritage.  Since they do not have the production volumes of a company like VW, they have used their chassis platform a different way.  Most recently they lent the platform for the Lotus Elise—a solidly designed two-seater chassis born from the racetrack—to other manufacturers as what they called a “rolling chassis,” meaning it had the frame, suspension, and wheels, but no engine, bodywork, or other accoutrements that define a vehicle.  These chassis went out to GM, Citroen, and most notably Tesla for their Roadster.  The result was that manufacturers were able to layer value onto the underlying platform, and Lotus could leverage its investment in design to sell more vehicles than it could have by itself.

The platform built by Apple using iTunes and the iPhone is another example of allowing others to leverage the strengths of a product to build new and unique products and services.  This external platform has become a significant addition to the underlying hardware, and is often a point of differentiation with Apple’s competitors.  Furthermore, the applications built on the Apple platform extend far beyond Apple’s core competency and control, yet Apple is still able to monetize them by taking a percentage of sales made through the App Store.

There is actually a third type of platform not discussed here; it is related to both internal and external platforms, and is built by a consortium of customers and vendors.  It is called a standard: a way to take a set of products and technologies and build a common interface that allows greater value creation for the ecosystem as a whole.  Common standards include IEEE 802.11, which gives us Wi-Fi, and the 19-inch rack standard to which nearly every server in the world is physically sized.  Whether platforms are built to support the internals of a business, build layers of value above a business, or standardize an industry, it is clear that platforms are key to driving business value.

The TurboCore Platform Series: Technology Origin Story

In one of our original blog posts (What is Dynamo Series, pt 3), we alluded to the innovation being developed here at Dynamo—that we were developing a platform technology, based on turbo-machinery, to revolutionize small power products.  We are going to spend the next few months showing the strengths of this platform, but first we will give you a little background on where this technology came from.

Dynamo was founded by two turbine engineers who had looked long and hard at the status quo for building turbines.  We had firsthand experience working on the assembly lines for aircraft jet engines (in one of the first factories in the US to build jet engines, we might add).  And we can confirm that most of your assumptions about building these turbines are probably true.  Modern manufacturers work with super-metals with esoteric names, like Inconel, Rene, and Waspaloy.  They have machines 20 feet tall that can cut complex dovetails into solid disks of nickel to better than 0.0001” of accuracy (we call that “tenths” in the industry), and they have measurement tools to match.  When you are pushing the envelope of engine performance, you need every tenth you can get.  There is significant technology innovation being developed as well to improve manufacturability and product quality, from a machine that friction-welds shafts at high speed to novel ceramic matrix composite forming technology.  A lot of work goes into building these parts; it’s not uncommon for a part to have a buy-to-fly ratio of 10:1 (meaning that of the raw stock metal, only 10% remains in the finished part).

As amazing as this sounds, we also learned how 20th-century the manufacturing process was.  For a lean assembly line, there was not much of a line.  Assemblies were put together by hand on mobile carts; the carts were moved around the factory floor to stations, where one type of work or another would be performed (e.g. welding, fastening, plumbing).  As often as not, engines would move back and forth between stations depending on the exact engine being built.  The average time to assemble a small engine was two months.

On the parts level of manufacturing, there were other things that didn’t strike us as terribly modern.  We called our business a “lean pull” manufacturing business, but the reality was that we built components in batches, and “lean pull” just meant we kept inventory in a holding pattern depending on what the assembly team told us to deliver in the next two weeks.  We also did not have entirely fungible labor, and would spend a good deal of our planning time figuring out which machinist could make which parts on the machines we had working that day.  This, combined with a metrics-driven culture, resulted in some creative accounting.  Sometimes we would build extra inventory when times were slow just to keep labor working; I remember a few times we would “hold” unsalvageable components that didn’t pass their drawings check for a few weeks, until we could “hide” the single reject in a large batch of incoming inventory so it wouldn’t impact our metrics for that week.  A large part of this stemmed from the fact that one in ten parts of any batch would need to be reworked at some point, because the tolerances required by the parts were not met by the manufacturing process.

When there wasn’t a standard way to tell if a part conformed to the manufacturing requirements, we had to take the specimen to Al.  Al was a living library with 30+ years’ experience making components for turbines—not an engineer by training, but a master manufacturer.  His workspace, on the second floor, was filled with rejected components.  Every week or two I would bring a component to Al, show him the drawings, and describe why we thought there was a problem.  Al would gnaw his pen (which he also used to mark up the drawings), rub his brow, and ask you to leave the part on his desk.  You would return the next day to hear his verdict on whether the part should be kept, reworked, or scrapped—and you took his word as gospel.

By contrast, I want to describe another engine factory for you.  Our founding team had the opportunity to tour a truck engine factory in North Carolina that was similar in scope to the turbine factory we worked at.  This factory converted raw inputs into fully built, tested, and shipped engines in a week, and it did so at a rate of an engine every 5 minutes.  While we did not have the same hands-on experience we had at the turbine manufacturer, the differences were immediately clear.  There was, for one, an assembly line!  Engines moved down a conveyor belt; each station had a 5-minute step before an engine moved on to the next.

Even with this strict timing and specialized stations, each engine was built to order, with seamless inventory management in the background.  Be it a different cam cover or turbocharger, the inventory was pulled to the specific station and refilled as local supply ran low.  Part of this was achieved with simple robots that delivered parts by following colored lines painted on the floor from one side of the factory to the other.

What really inspired us, however, was that the diesel company was also building tens of thousands of small turbines as part of this process.  Turbochargers are not the same as aircraft jet engines by any stretch of the imagination, but they do pack a lot of technology into a small package.  They have high-speed bearings that must survive the constant loading and unloading of a diesel engine, and they have many little features that contribute to performance and life.  When we compared the diesel manufacturer to our experience with turbines, we realized something: the products these two companies were building were for very different markets.  By necessity, the turbine had to be built with critical alloys, exacting requirements, and a high rejection rate—partly because they are high-performance products, and partly because so few were built each year (<500).  In some ways, each engine was its own special production.  The diesel units, on the other hand, are built for a cost-competitive market where over 60,000 engines are produced a year; the manufacturing learning curve is also much faster with many more samples to work with.

But this also opened our eyes at Dynamo.  After seeing these two models, we asked the question: “What if we built turbines the way they build diesel engines?”  The result is a new way of thinking about the supply chain, and about how the engine is built and assembled.  It’s a new way to think about what the final product will cost, and how many we can build in a year.  The other challenge is a market challenge: if we want to build 60,000 turbines, we have to find someone who wants to buy them.  Luckily, in the small power market there are always people looking for something more reliable, more fuel flexible, and smaller than what they have today.  In order to reach all these customers, however, we also had to think of our product as a platform that could be easily adapted for unique applications.

Combustion Dynamics and Fuels—Part 4: The Dynamo Solution

Here at Dynamo we’ve taken a look at this problem of fuel flexibility, and built a power solution that is truly fuel agnostic.  While the product we are building requires a lot of engineering and years of experience (our technical advisory team spans 100+ years of combined turbine development experience), the solution itself has several key features that allow us to tackle this challenging technical problem.
The first thing we decided was to build a gas turbine engine, as gas turbines are renowned for their fuel flexibility.  In many ways a gas turbine is just a set of compressors and expanders arranged around a combustion tube.  As long as a combustion chamber can be made to reliably burn a fuel, a turbine can be built around it.
We then developed a combustion chamber that can accommodate a wide range of BTU contents.  The challenge here was to ensure complete combustion and low pressure loss for a variety of fuel mixtures at both startup and steady-state operation.  The combustion chamber that we have developed has achieved all of this.


Although the combustion chamber is great, we do not rely on it alone to ensure the reliability of our engine.  To that end we’ve included a specialized fuel conditioning system that is closely monitored by our supervisory control system.  The fuel conditioning system serves as a buffer between the wellhead and the combustion chamber, smoothing short-term swings in fuel quality and reducing the work the control system must do to regulate fuel flow.
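To make the buffering idea concrete, here is a toy sketch: a first-order low-pass filter standing in for the conditioning hardware.  The time constant and fuel values are purely illustrative, not Dynamo specifications.

```python
# Toy model: a first-order low-pass filter standing in for the fuel
# conditioning buffer.  The time constant (tau) is illustrative only.
def smooth_fuel_quality(raw_btu_per_scf, tau=10.0, dt=1.0):
    """Return the buffered heating-value signal seen by the engine."""
    alpha = dt / (tau + dt)
    buffered = [raw_btu_per_scf[0]]
    for raw in raw_btu_per_scf[1:]:
        buffered.append(buffered[-1] + alpha * (raw - buffered[-1]))
    return buffered

# A wellhead stream that suddenly jumps from 1,000 to 1,800 BTU/scf
raw = [1000.0] * 20 + [1800.0] * 40
buffered = smooth_fuel_quality(raw)
# The buffered stream never jumps; it ramps toward the new value, so the
# swing per time step the control system must handle stays small.
```

The point of the sketch is the shape of the response: the downstream controller sees a gradual ramp instead of a step, which is exactly the job the physical conditioning system performs.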
Deploying our product in the oilfield adds additional complexity.  As discussed above, on the supply side the consistency of the fuel can vary significantly over a few hours, and it is challenging to characterize that fuel a priori.  On the demand side, pump jacks and other field equipment have a variety of duty cycles, which change the amount of power required at any given moment.  To meet these needs, our solution has to be more than a combustion system: it is tasked with the double duty of converting a variable input [fuel] while meeting varying output demands, all within very short time frames.  This is achieved with several features, including a proprietary control system and a sophisticated custom power electronics package.
We can talk all day about how we do things, but our customers care about results.  In the lab to date we have verified the ability to operate on fuels ranging from 500-2045 BTU/scf in a single unit.  Across this range we were able to start the engine, bring it to power, and sustain operations as the fuel content was varied.  We were also able to do this with liquid water injected into the fuel lines, at a water cut of up to 80% by mass.  This effective range, and the ability to handle liquids in the combustion system, show that we can sustain combustion in virtually any oil field.  A more technical summary can be found in our whitepaper here.

Combustion Dynamics and Fuels—Part 3: Combustion Basics

With all this discussion of fuel flexibility, we would be remiss if we did not talk about what makes it difficult.  There are many factors that affect how something burns, such as whether the fuel is a liquid or a gas (or even a solid), the structure of the underlying hydrocarbons, the amount of oxygen present, and the geometry of the flame zone.  These are the big ones, but there are many smaller factors I will not have the chance to dig into here.
Automotive Liquid Fuel Injector
The state of a fuel has a lot to do with its combustibility.  Ultimately, for fuel to burn it must mix with oxygen (or some other oxidizer).  Gases mix very well and very evenly, which makes them easier to control in the combustion process.  Liquids, on the other hand, do not mix well, and often need to be premixed (as with a carburetor) or atomized into little droplets (as with a high-pressure fuel injector).  These processes take much more tuning to get right.  Lastly there are solids, which inherently do not mix well.  Solids usually need to be pulverized into little bits, much like atomizing a liquid, to be good combustion candidates.  More often than not, solid fuels are intimately mixed with solid oxidizers to enable more complete combustion; this is most commonly seen in gunpowder or the APCP found in solid rocket motors.
With fuel, you need oxygen to react with the hydrocarbons to enable combustion.  However, it may not be intuitive that having some of each is not enough: even with a spark, there may be no flame.  This is a phenomenon known as the flammability limits.  Combustion is generally most efficient when there is just enough oxygen to burn all of the fuel; cut the amount of fuel roughly in half and the mixture ceases to combust properly.  Likewise, if the amount of oxygen is cut to a third, the mixture will fail to ignite.  To contend with this, many modern engines have sensors to balance and meter fuel to match the air in the system; yet even with careful tuning, poorly mixed fuels may have pockets that lie outside the flammability limits.
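As a concrete illustration, here is a toy flammability check using the published limits for pure methane in air (roughly 5% to 15% fuel by volume).  A real field-gas mixture would have different limits that must be measured.

```python
# Illustrative flammability check for methane in air.  The 5%-15%
# volumetric limits are published values for pure methane; real
# field-gas mixtures have different limits.
METHANE_LFL = 0.05   # lean (lower) flammability limit, volume fraction
METHANE_UFL = 0.15   # rich (upper) flammability limit, volume fraction

def is_flammable(fuel_fraction):
    """True if a methane/air mixture can sustain a flame."""
    return METHANE_LFL <= fuel_fraction <= METHANE_UFL

# Stoichiometric methane/air is about 9.5% fuel by volume -- flammable.
# Halve the fuel to ~4.8% and the mixture falls below the lean limit,
# matching the "cut the fuel in half" observation above.
```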
As you can imagine, combustion is a very complex process.  Energy is released as complex hydrocarbons are reduced to simple carbon dioxide and water, but the steps to get there are anything but simple: chemical bonds between atoms are broken and reformed throughout the process, with many intermediate molecules created along the way.  The result is that combustion takes time, on the order of several milliseconds.  That isn’t much when you consider that a car engine revs up to 7,000 RPM: at that speed a full four-stroke cycle (compression, combustion, expansion, and exhaust) takes only about 17 ms, leaving just a few milliseconds for combustion itself.  The result for systems that operate on these time scales is that combustion becomes hard to control, which can prevent it from coming to completion.  Generally, however, simpler hydrocarbons burn “faster” than more complex ones.
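Working the timing budget out explicitly (the 7,000 RPM figure is from the text; the four-stroke accounting is standard):

```python
# Time budget for combustion in a four-stroke engine at redline.
rpm = 7000
ms_per_rev = 60.0 / rpm * 1000      # ~8.6 ms per crankshaft revolution
ms_per_cycle = 2 * ms_per_rev       # four strokes span two revolutions: ~17.1 ms
ms_per_stroke = ms_per_cycle / 4    # ~4.3 ms per individual stroke
# With combustion itself needing several milliseconds, the margin
# at high RPM is razor thin.
```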
To make this even more complex, different hydrocarbons carry different amounts of energy for a given volume or weight.  To describe this effect, engineers developed the Wobbe Index, which allows relatively simple mathematical scaling of fuel injection rates for a given fuel—assuming that fuel is known ahead of time.  Unfortunately, this makes simple fuel metering systems, like a carburetor, a poor solution when the fuel is unknown.  Handling different fuels requires a more advanced fuel delivery system capable of providing fuel at different rates.
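For the curious, the Wobbe Index is just a fuel's heating value divided by the square root of its specific gravity (density relative to air).  A quick sketch, using approximate published values for methane; the "rich gas" numbers are illustrative assumptions:

```python
import math

def wobbe_index(heating_value_btu_scf, specific_gravity):
    """Wobbe Index: heating value divided by the square root of
    specific gravity (gas density relative to air)."""
    return heating_value_btu_scf / math.sqrt(specific_gravity)

# Approximate values: pure methane vs. an illustrative richer field gas.
methane_wi = wobbe_index(1012, 0.554)   # ~1360 BTU/scf
rich_gas_wi = wobbe_index(1400, 0.80)   # ~1565 BTU/scf
# Two fuels with the same Wobbe Index deliver the same energy through
# the same orifice at the same pressure drop -- which is why a fixed
# metering system only works when the fuel is known ahead of time.
```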
For reciprocating engines, an important factor to consider is the “knock” rating of the fuel—essentially a measure of when a fuel will self-ignite under the heat of the compression stroke.  Reciprocating engines become more efficient and more powerful with more compression, but the fuel limits how much compression can practically be achieved.  To further complicate matters, the flammability limits of fuels change as they are compressed.  The result is that adding fuel flexibility often comes at the cost of engine performance and emissions.  As a real-world example of this complication, many diesel manufacturers are trying to build engines that can run on both natural gas and diesel.  As it turns out, natural gas has a much higher anti-knock index than diesel; owing to this quirk of nature, manufacturers have developed bi-fuel generators.  In many of these solutions the products run on a 50/50 diesel and natural gas mixture, such that the diesel is compressed to ignition, burns, and in turn ignites the natural gas in the combustion chamber.  While this is a good technique for reducing diesel dependency, it is not true fuel flexibility.
From Lefebvre & Ballal, “Gas Turbine Combustion”
The last thing that really drives combustion is the physical location where it takes place.  The geometry, or shape, of the chamber drives the local mixing of fuel and air; it also gives the combustion constituents the time and space to burn to completion.  The combustion chamber and its aerodynamic interactions with the rest of the engine define much of how a combustion process performs.

These are only a sprinkling of the characteristics to consider when designing combustion systems.  As one can imagine, all of these things must be taken into account when trying to build a fuel flexible system.

Combustion Dynamics and Fuels—Part 2: Cost of Energy Models with Examples

Last time we looked at the major contributors to the cost of fuel—which is generally tied very closely to the marginal cost of producing a kWh.  If we take a look at the major factors playing into the cost of energy, we can pretty easily determine that the fuel cost per kWh is a pretty simple function:

Fuel Cost per kWh = Delivered Cost of Fuel ÷ (Generator Efficiency × Fuel Energy Content)

Pretty simple; but both terms have complex components that can cause them to range widely.  Let’s take a look at the delivered cost of fuel:

Delivered Cost of Fuel = (Commodity Price + Delivery Cost + Handling Cost) × ε ÷ Availability

I think the above is pretty self-explanatory—the commodity price of a fuel is what you pay for it at your local gas station.  Getting your fuel to site obviously introduces a level of cost.  Your delivery company will charge you more if they have to go significantly off the beaten path, or if they have to use special equipment to get to you.  Similarly, if your fuel isn’t just plain old gasoline, but requires special fuel tanks (CNG) or handling expertise because it may be hazardous (methanol), there is an additional cost associated with that as well.  The factor ε on the end is a coefficient to capture things like taxes, discounts, and other miscellaneous costs that should be taken into account.
Lastly, and most importantly, we divide the cost of fuel by an “Availability” factor.  Availability approaches zero as a fuel gets scarcer; it gets bigger than 1 as it becomes bountiful.  A value of “1” is a “nominal” value when compared to the other parts of the equation.
In some respects, how this factor plays out is familiar.  If all the gasoline facilities are offline in your region, there is no amount of money you can spend to buy a drop of gas—as happened on the east coast during Hurricane Sandy.  Conversely, if there is too much fuel, it is wasted, burned off, or shipped and stored elsewhere at your expense—and the cost of the fuel will plummet.
Looking at this another way, you can consider how the supply and demand for a fuel influence its price; economists call this the elasticity of demand.  When gasoline gets 10% more expensive, for example, demand decreases by about 2.6%.  Demand can also affect price; a 10% increase in demand may increase prices by 38%.  What does this mean for us as modelers of future fuel prices?  It means we can use the availability factor to analyze the risk associated with fuel prices (knowing how things can change) and take it into account when comparing different fuel sources.
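Putting the pieces of this section together, here is a minimal sketch of the model.  The structure follows the discussion above; every number in the example is illustrative rather than a real quote.

```python
def delivered_fuel_cost(commodity, delivery, handling, epsilon=1.0,
                        availability=1.0):
    """Realized cost of fuel per unit ($/gal, $/Mscf, ...)."""
    return (commodity + delivery + handling) * epsilon / availability

def fuel_cost_per_kwh(delivered_cost, energy_per_unit_btu, efficiency):
    """Marginal fuel cost of one kWh (3,412 BTU) of electricity."""
    btu_needed = 3412.0 / efficiency
    return delivered_cost * btu_needed / energy_per_unit_btu

# Illustrative diesel example: $4.00/gal all-in delivered cost,
# ~137,000 BTU/gal, 30% generator efficiency.
cost = fuel_cost_per_kwh(
    delivered_fuel_cost(3.60, 0.30, 0.10),   # $4.00/gal delivered
    energy_per_unit_btu=137_000,
    efficiency=0.30,
)
# -> roughly $0.33 of fuel per kWh with these illustrative inputs
```

The availability factor is the interesting lever: drop it below 1.0 (scarcity) and the delivered cost, and therefore the per-kWh cost, scales up directly.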
Efficiency modelling is a whole different ball game, and varies from generator to generator.  Each generator has an operating point where it is most efficient, but the loads they power do not always allow them to operate at that point.  The resulting duty cycle can significantly influence the efficiency of the underlying system.
Without going into too much detail on efficiency modelling, I’m going to jump into a couple of comparisons for various systems using the information we have here.  I’ve selected three technologies routinely used in the oil and gas industry for different types of power applications: small Diesel Generators, Fuel Cells, and Thermal Electric Generators (TEGs).  Each is used in different applications and each has different capital costs (the impact of which on LCOE is not thoroughly analyzed here).  The following chart compares the efficiency of each generator, the relative cost of the fuel on a per-kWh basis as delivered, and the marginal dollar cost for each generator to produce a kWh.  Tables of the inputs for this chart are attached at the end.
Figure 1: Select Cost of Energy Comparison for common Oilfield Generators
The diesel generator comes out where you would expect it to, with fuel costs at close to 40 cents per kWh.  The supply chain is relatively simple, with non-road diesel coming in at roughly $4 per gallon, and shipping contracts only adding a marginal cost.  Efficiency can range from the mid-twenties to the low thirties; I picked 30% efficiency since most of these generators run well below their prime rating in the field.
Fuel cells are often chosen for small power applications (<1kW), where reliability is essential, and where the cost of fuel is secondary to the cost of maintenance and the cost of downtime.  While many fuel cells are designed to run very efficiently on propane or natural gas, the fuel reformer found in most of them requires the fuel to be highly refined, above the standards found in more traditional applications.  In this case, we built the model around a European fuel cell that is finding acceptance in the US O&G market.  The fuel cell operates on highly refined methanol, which can only be provided by the manufacturer in Europe.  (Having a single-source supplier imposes its own supply risks—that availability factor I described earlier, which I did not include here.)  The result is a very high cost of energy; however, for small applications the cost of fuel is dwarfed by the value of reduced maintenance and downtime.
TEGs are also commonly found in the O&G industry, powering very small loads (<100W); again they are used where reliability is key, although they are very large and very expensive.  As opposed to fuel cells, TEGs have very low efficiencies (3-5%), but they have the distinction of running well on very poor quality fuel (basically any heat source will do).  In many cases, TEGs operate on pre-pipeline-quality natural gas, often found in upstream applications.  In this case the source fuel is plentiful and cheap; its face value is often below the cost of commoditized natural gas, as it has yet to be transported to market, and in many cases the operator doesn’t have to pay the leaseholder for the fuel, which equates to an additional 10% discount.
As this example illustrates, the cost of operating the very inefficient thermal electric generator is one tenth the cost of operating a fuel cell, and half the cost of operating a diesel generator.  Unfortunately, thermal electric generators do not scale up well in size or value much above the 100W mark.
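As a rough cross-check of these ratios, the comparison can be sketched with round numbers.  The methanol and wellhead-gas prices below are assumptions chosen to be plausible, not the actual table inputs, so treat the outputs as order-of-magnitude only.

```python
# Back-of-the-envelope marginal fuel cost ($/kWh) for the three
# generator types discussed.  Fuel prices for methanol and wellhead
# gas are illustrative assumptions, not the published table values.
KWH_BTU = 3412.0

def marginal_cost_per_kwh(fuel_price, fuel_btu_per_unit, efficiency):
    """$/kWh of fuel burned, given $/unit of fuel and BTU/unit."""
    return fuel_price * (KWH_BTU / efficiency) / fuel_btu_per_unit

diesel = marginal_cost_per_kwh(4.00, 137_000, 0.30)      # ~$0.33/kWh
# Single-sourced refined methanol (price assumed), efficient fuel cell:
fuel_cell = marginal_cost_per_kwh(10.00, 56_800, 0.35)   # ~$1.72/kWh
# Nearly free wellhead gas (price assumed) at very low TEG efficiency:
teg = marginal_cost_per_kwh(2.00, 1_000_000, 0.04)       # ~$0.17/kWh
# The ratios land where the text does: the TEG runs at roughly one
# tenth the fuel cost of the fuel cell and half that of the diesel.
```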
Table 1: Inputs to Cost of Power Model

Combustion Dynamics and Fuels—Part 1: Drivers of Cost of Energy

There are three primary factors to consider when evaluating a prime mover, which ultimately drive the Levelized Cost of Energy: capital cost, maintenance, and fuel expenditures.  The first two are topics for a later time, but the last item, fuel expense, is our focus today.
There are several significant components that play into fuel expenses (mapped out below).  Fundamentally the fuel expense is the product of the realized cost of fuel to the user and the realized efficiency of the underlying generator.
Cost Map

A logical first step in reducing the cost of energy would be building a more efficient generator, such as a fuel cell or a complex-cycle engine.  However, fuel efficiency is largely set by the state of the art in technology, and will improve only incrementally with time.  Some minor enhancements can be made with energy storage to keep generators operating at their peak operating point, but making systems more efficient is very expensive, and the development effort takes a long time.

A cost-effective alternative is to have the engine operate on the lowest-cost fuel of the moment.  To this end, a variety of engines have been developed to work on specific single fuels.  In the modern environment, however, there is pressure on engine manufacturers to offer engines that can operate on a variety of fuels.  The physical phenomena that drive combustion make this a technical challenge, but a variety of solutions exist on the market to meet it—each with its strengths and weaknesses.
Source: Seeking Alpha, Tristan Brown
Most shifts in fuel sources are driven by an under-supply or over supply of different types of fuel.  In the past, for example, the price of natural gas would often track the price of oil for an equivalent amount of energy.  However, oil fracking and the US natural gas export restrictions have caused a spread of roughly 6x in the cost of energy between oil and natural gas.  Needless to say, many companies (Dynamo included) are looking to leverage the lower cost fuels in a reliable manner.
Next time we will look at a few corner scenarios that illustrate how various fuel expense conditions can drive the cost of energy.


The Bakken—Part 3: Existing Generators

After speaking to operators and service personnel in the field, we learned a few astounding facts about how essential the reliability of the power generator is to production yields.  Specifically, generator problems account for at least 90% of downtime in upstream operations.  Generators will go down at any time of day, but most operators only find out during their daily site visit—which is why most of the leasing companies we visited were on the phone starting with the first shift around 8am.  Most operators expect you to send out a service technician within two hours of that first call (mostly because that’s how long it takes to drive across the field), and they all want the generator back up within three hours.  We also learned that there is a huge surge of generator orders around the cold snap of the year.  In an environment where temperatures routinely range from -40°F to 80°F, weather is a killer for these generators.  Diesel gels at -22°F, and if the generator shuts down, so does the lubricating oil.

Diesel Fuel Gelled on a Fuel Filter

Diesel generators also have another unique problem: diesel theft.  One of the smaller operators we visited told us they had lost $350k worth of stolen diesel in the last 3 months (or an estimated $3M for the year).
E&P operators aspire to use flare gas to run their generators, but the natural gas generators adapted for this purpose have higher failure rates.  Most of these NG generators have propane on site for backup, but the primary cause of failure is of course the inconsistent quality of flare gas, which beyond causing engine shutdowns also shortens engine life.  One supplier told us he only expects his fleet of generators to last 12-18 months before he will have to overhaul or replace them.  The quality of flare gas is so poor that one major generator vendor won’t sell their generators without first evaluating a fuel sample in the lab.  If the results come back poor, they will not provide a generator; if the results come back positive, they will sell the product, but without a warranty.  In addition, the availability of these generators is less than 90%.
Needless to say, the systems used by the industry are subpar.  Technologies do exist to improve the availability of flare-gas-burning generators, but the costs don’t currently justify their implementation, and sub-par availability remains the biggest barrier to the adoption of conventional reciprocating generators.  The technology we are developing here at Dynamo is built to solve these problems.

The Bakken—Part 2: Wellpad Configuration

While we are talking about the oil field, we should probably talk about what the oil field is.  Upstream operations mostly take place at a wellpad.  Modern fracked wells require relatively large amounts of space, as shown in the picture on the left.  This space accommodates all the equipment needed to drill and complete a well.  “Completion” is the process whereby rock is perforated and stimulated (fractured and cleaned up).  Once completed, a well enters the production phase, as represented by the picture below.
Oil wells are not static entities either; once drilled, their production decreases over time.  Just as a juice box becomes harder and harder to drink from as you pull out the fluid, the same is true of oil coming out of the ground.  After some time, usually a few months, artificial lift is installed to keep the oil flowing.  Artificial lift is a generic term for a pumping unit, but it usually takes the form of a pump jack, like the ones below.  Also on site you'll find tanks for holding oil and water, heater treaters (basically a system to separate oil from water and associated gas), flare stacks, and of course an onsite generator.  While all of the equipment on site is necessary, without a generator all activity on site comes to a stop.
Early on, while the Bakken was being developed, most wells were drilled with only one well per pad (for political, legal, and strategic reasons).  Today operators are doing what's called infill drilling, packing wells together as densely as possible.  Without going into a detailed description of the geology and science of drilling and fracking, one thing is clear: there is more than one well per pad being drilled today.  This is important because each well typically needs 50-75 kW of power.  Numbers vary, from as few as three wells per pad, but we saw one pad with nine wells on site.  Each operator has its own secret number, which is highly correlated to the reservoir engineering going on below the ground.
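To make the power-demand numbers concrete, here is a back-of-the-envelope calculation using the 50-75 kW per-well figure above; the pad sizes are the ones mentioned in the text, and the helper function is purely illustrative.

```python
# Rough per-pad power demand, using the 50-75 kW per-well range
# from the text. Pad sizes shown are the ones we observed.
def pad_power_kw(wells, per_well_kw=(50, 75)):
    """Return the (low, high) power demand range for a pad, in kW."""
    low, high = per_well_kw
    return wells * low, wells * high

for wells in (3, 9):
    low, high = pad_power_kw(wells)
    print(f"{wells} wells: {low}-{high} kW")
```

A nine-well pad can therefore demand well over half a megawatt, which is why a single rented generator becomes such a critical piece of equipment.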
Chart source: Kodiak Oil & Gas Corp, Howard Weil Energy Conference, March 2014


After the first six months (as shown in the chart on the right), a typical well will be producing $30,000 in oil a day (or $1,250 an hour).  It is interesting to note that on that well pad with nine pumpjacks, we saw a single generator, which rented for roughly $20,000 a month.  Each hour of generator downtime at that wellpad cost roughly $10k in lost production, and a good generator averages a day and a half of downtime a month.  More on this next time.
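The downtime economics above can be sketched out directly from the figures in the text; this is a rough illustration that assumes a generator outage stops all pumping on the pad.

```python
# Downtime economics for the nine-well pad described above.
# All figures come from the text; the per-hour loss rounds to
# the ~$10k quoted in the post.
OIL_REVENUE_PER_WELL_PER_DAY = 30_000   # $ per well per day
WELLS_ON_PAD = 9
DOWNTIME_HOURS_PER_MONTH = 36           # "a day and a half" a month
GENERATOR_RENT_PER_MONTH = 20_000       # $

hourly_loss = WELLS_ON_PAD * OIL_REVENUE_PER_WELL_PER_DAY / 24
monthly_downtime_cost = hourly_loss * DOWNTIME_HOURS_PER_MONTH
print(f"Lost revenue per hour of downtime: ${hourly_loss:,.0f}")
print(f"Lost revenue per month: ${monthly_downtime_cost:,.0f}")
print(f"Generator rent per month: ${GENERATOR_RENT_PER_MONTH:,}")
```

Under these assumptions, a month of typical downtime costs an order of magnitude more than the generator rental itself.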

The Bakken—Part 1: Boomtown

Image source: UND EERC
The Dynamo Team went out to Williston, ND last month to see firsthand the shale revolution that is changing the energy world.  The reason to come to Williston is that it is at the center of the action.  Landing in Williston was uneventful, save that the airport was probably the size of our incubator, Greentown Labs.
A boomtown is an amazing place, where at first the world seems like any other you are familiar with, but after a while you realize it is actually different.  Very, very different.

Getting out of the airport, Williston seems like any other midwestern American town of ~14,000 people: streetlights, cars, gas stations, and local restaurants that were never displaced by big national chains.  But once you drive half a mile beyond the airport, you notice odd things.  There are far too many trucks on the road.  Not SUVs, but 18-wheelers, cement trucks, and tankers; in fact, you barely see any other type of vehicle on the road.  These trucks always seem to be on the road, morning or evening; we would later hear them screaming by as we tried to sleep in our hotel.


As you drive from the airport along Highway 2, you see the roads lined not with strip malls, like you would see in suburban America, but with the office buildings of service companies; names like Schlumberger, Baker Hughes, Cameron, Caterpillar, and Weatherford streak by in your peripheral vision.  You wonder why the houses you do see are so small and packed together in this mostly uninhabited county.  You notice the flash of light from the sides of metal buildings still under construction, and you wonder why this little town needs so many trailer parks.  As you will later learn, Williston has the highest rent in the US, and some believe its population swells to 75,000 people in the summer, all of them here to work on drilling oil.
We arrived at our hotel to find it still under construction.  Workers were painting the walls and running Ethernet cable as we checked in.  When we asked the front desk where we could grab some dinner, the clerk exclaimed, "Applebee's just opened up down the street."  As we drove to dinner (we opted for something more local than Applebee's), we saw pumpjacks right in town, an integral part of the urban landscape.
We had arrived in a boomtown, where oil wells, buildings, and services were being built out at a lightning pace, but where talent, and the houses to put that talent in, could not be found fast enough.  We had arrived in a town where at every table sat groups of people with the word oil on their lips.  We had arrived in a town where something new and different was taking place; where people came to be a part of the tidal wave that would bend history.

Getting Electricity from a Jet

After the Turbocore, there is a dynamo (or, as we call it these days, a permanent magnet generator).  The generator attached to the Turbocore produces electricity at 1 kHz, much too fast for conventional equipment to use, so we pass the power through a rectifier to produce stable DC power across a DC link.  After the DC link, an inverter converts the power to a more standard form of AC power, in this case 60 Hz standard utility power, so that it can go into the grid, power a compressor, power a beam pump, or be used for other applications.  A DC converter could also be installed in lieu of the inverter for DC applications.  This is a typical power conversion architecture for microturbines.
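To see why the generator output is so fast, it helps to recall the basic relationship between shaft speed, pole count, and electrical frequency for a permanent magnet machine.  The 1 kHz figure is from the text; the pole-pair count and shaft speed below are illustrative assumptions, not Dynamo specifications.

```python
# Electrical frequency of a permanent magnet generator:
# f = pole_pairs * shaft_rpm / 60.
def electrical_freq_hz(shaft_rpm, pole_pairs):
    """Electrical frequency (Hz) for a PM generator at a given shaft speed."""
    return pole_pairs * shaft_rpm / 60

# A 2-pole (one pole-pair) machine would need 60,000 RPM to make 1 kHz:
print(electrical_freq_hz(60_000, 1))   # 1000.0
# A 4-pole (two pole-pair) machine reaches 1 kHz at half that speed:
print(electrical_freq_hz(30_000, 2))   # 1000.0
```

Whatever the exact machine design, the rectifier-inverter chain is what decouples this high electrical frequency from the 60 Hz grid.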

While this architecture is a little more complex than your household Honda generator, it gives us product flexibility and reliability.  The DC link is electrically simple, and it is a good place to create modularity in the engine.  Components on the left can be changed independently of components on the right.  This means it's easier for us to cost-effectively provide 120 V single-phase power, 240 V three-phase power, or even 48 VDC by changing a few parts.  On the other side of things, it's easier for us to make upgrades to the underlying hardware, the engine and the generator, without sacrificing electrical quality.  In fact, multiple inverters and multiple generators can be attached to either side of the link, providing end users with a wealth of options for power.

While we are talking about electrical matters, and what they mean for the end user, we do want to talk about why the Dynamo Turbocore has two turbines.  We could have gone with a single turbine: it's cheaper, and there are fewer parts and less engineering to be done with a single-shaft engine.  However, we went with a split-shaft design because we realized it would result in a more stable and more reliable engine whose performance would be less sensitive to changes in the application.

A two-shaft engine has two main subsystems: the gas generator and the power turbine.  The gas generator includes the compressors, the combustion chamber, and the turbines that power the compressors; the remaining turbines are mechanically connected to an electric generator and are called power turbines.  There are three main advantages to this design.

The first advantage of a two-shaft engine is that the second turbine, which spins the electric generator, can be designed to operate at a lower RPM, which relaxes the performance requirements for the turbomachinery, the electric generator, and the power conversion unit.  This holds true for mechanical loads as well, which also benefit significantly from lower gear ratios and lower speeds.

The second advantage is that the power turbine is a constant-power device, which gives it a significantly superior torque-versus-speed characteristic compared to a single-shaft engine.  For a single-shaft engine, the available torque decreases to zero as the speed of the engine drops; for a two-shaft engine, the available torque increases as the speed decreases.  The torque characteristic of a two-shaft engine is also superior to that of a reciprocating engine, which has a relatively flat torque curve.  This torque advantage is important when starting heavy loads.
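The constant-power behavior falls straight out of the definition of mechanical power, P = T x omega: at fixed power, torque is inversely proportional to shaft speed.  The numbers below are illustrative, not Dynamo specifications.

```python
import math

# Torque available from a constant-power source versus shaft speed:
# T = P / omega, so torque rises as the shaft slows.
def torque_nm(power_w, rpm):
    """Torque (N*m) delivered by a constant-power source at a given RPM."""
    omega = 2 * math.pi * rpm / 60   # shaft speed in rad/s
    return power_w / omega

# A hypothetical 50 kW power turbine: halving the speed doubles the torque.
for rpm in (3000, 1500, 750):
    print(f"{rpm:>5} RPM: {torque_nm(50_000, rpm):,.0f} N*m")
```

This is exactly the shape you want when a heavy load drags the shaft down: the slower the load turns, the harder the power turbine pulls on it.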

Lastly, the Turbocore is controlled to provide consistent power to the power turbine, so the gas generator is essentially decoupled from the power demands.  The gas generator can be throttled up and down faster, and because the compressor is not limited by the load on the power turbine, it can generally operate near its peak efficiency point.  The control system can also be more robust, since there is less compromise between keeping the engine operational and preventing a brown-out.  Large power-generating turbines whose load varies over time are generally of this split-shaft configuration.  As a corollary, split-shaft engines are easier to start, with less thermal loading on the turbine system.

It is for these reasons, among others, that we went with a split-shaft design for our Turbocore product.