Introduction
A previous paper16 surveyed the Lifecycle Assessment (LCA) literature on biofuels and found that no conclusive answers emerge from it on the important and policy-relevant questions of whether biofuels can help reduce greenhouse gas emissions and whether they are an efficient source of energy. This inconclusiveness is attributed to the problematic specification of these studies, which cannot give actionable and policy-relevant answers. The main problems in their specification are: the reliance on aggregate-based modeling rather than investigating the impact of specific policies, the absence of integration within a dynamic economic model that includes price effects, and the focus on emissions quantities rather than the environmental impact of those emissions. This paper draws on insights from economics and philosophy of science to explain the underlying reasons why LCA studies fail to reach conclusive answers.
Complexity and predictability
The first problem LCAs encounter is that of irreducible complexity, which is hard to systemize and reduce for straightforward analysis. Under this broad heading we can list Quirin’s points about the differing energy quantities that go into making fertilizers, the differences in crop yields, and the allocation of co-products, as well as Larson’s points about the inclusion of climate-active species, other emissions, co-products and soil sequestration. It also includes Delucchi’s points about the consumption and production of energy and materials, land use change and other emissions. All these critiques have the same essence: LCAs analyze complex phenomena but do not account for all the factors that matter in them.
Warren Weaver1, in his discussion of the evolution of scientific understanding of complex phenomena, begins by defining complex phenomena and how science treats them. Weaver1:57 argues that before the twentieth century, physical science’s greatest advances and most momentous contributions to human welfare came from applying the scientific method to questions that involved only two (or only a few) variables. Relatively straightforward theories and experiments were sufficient to establish scientific rules which then became very important for human knowledge and society, and enormous gains from science and technology ensued from applying these laws and rules. Weaver then explains that the twentieth century saw an attempt to extend the methods developed for studying a few variables to the study of many more variables: the study of complexity. He draws a distinction here between two types of complexity: disorganized complexity and organized complexity. Weaver1:58 defines disorganized complexity as:
“a problem in which the number of variables is very large, and one in which each of the many variables has a behavior which is individually erratic, and may be totally unknown. But in spite of this helter-skelter or unknown behavior of all the individual variables, the system as a whole possesses certain orderly and analyzable average properties.”
As examples of this type of complexity he cites a telephone exchange predicting the average frequency of calls, or an insurance company attempting to assess death rates. The key feature of disorganized complexity is the lack of complex interrelations between the multiplicity of variables. Weaver argues that disorganized complexity is amenable to investigation by statistical and mathematical techniques: because there are no complex interrelations between the variables, the totality of the variables can be assessed through their average properties.
Organized complexity, on the other hand, is not amenable to easy analysis with mathematical and analytical techniques. The distinction, Weaver insists, is not in the number of factors or variables, but rather in the existence of complex interrelations between the multiplicity of factors. “They are all problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole”1:58. These complex interrelations make such systems difficult to study because the complexity cannot be reduced away.
This distinction between “organized” and “disorganized” complexity is similar to the distinction between the concepts of Extremistan and Mediocristan presented by Taleb in The Black Swan. Taleb defines Mediocristan problems as non-scalable problems, where a large sample cannot be altered significantly by the introduction of a single observation, no matter how large or small it is relative to the others. These non-scalable problems are ones where the range of variation of the variables is not wide enough for one observation to skew the total results. Examples of distributions from Mediocristan include height, weight, calorie consumption, car accidents, and mortality rates2:35.
Extremistan, on the other hand, refers to situations where one extreme observation can disproportionately impact the aggregate or mean. In these distributions, the value of one observation can be so high or low compared to the rest that it completely alters the final result. Taleb provides the example of the wealth of a group that includes Bill Gates. The mere introduction of Gates, even to a very large group of a thousand people, would completely change the metrics for the group, since Gates would account for 99.9% of the wealth of the entire group. Further examples include book sales, number of references on Google, populations of cities, financial markets, and inflation rates2:35.
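To make the arithmetic behind this distinction concrete, the following short sketch (an illustration with invented figures, not drawn from Taleb or from any cited study) contrasts a thin-tailed, Mediocristan-like sample, in which the largest observation is a negligible share of the total, with a heavy-tailed, Extremistan-like sample to which a single Gates-sized fortune has been added:

```python
# Illustrative sketch only: invented figures, not data from the text.
import random

random.seed(0)

# Mediocristan-like variable: adult heights in centimetres (thin-tailed).
heights = [random.gauss(170, 10) for _ in range(1000)]
tallest_share = max(heights) / sum(heights)

# Extremistan-like variable: wealth drawn from a heavy-tailed (Pareto) distribution,
# with one Gates-sized outlier appended to the sample.
wealth = [random.paretovariate(1.1) * 50_000 for _ in range(1000)]
wealth.append(100_000_000_000)  # a single extreme observation
outlier_share = max(wealth) / sum(wealth)

print(f"Largest height is {tallest_share:.2%} of total height")   # a fraction of a percent
print(f"Largest fortune is {outlier_share:.2%} of total wealth")  # dominates the total
```

In the first sample no single observation can move the aggregate; in the second, one observation accounts for nearly all of it, which is why averages drawn from Extremistan-type variables are so fragile.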
F. A. Hayek3:3 illustrates this point by demonstrating the difference between physics and other fields of inquiry:
“More particularly, what we regard as the field of physics may well be the totality of phenomena where the number of significantly connected variables of different kinds is sufficiently small to enable us to study them as if they formed a closed system for which we can observe and control all the determining factors; and we may have been led to treat certain phenomena as lying outside physics precisely because this is not the case. If this were true it would certainly be paradoxical to try to force methods made possible by these special conditions on disciplines regarded as distinct because in their field these conditions do not prevail.”
For Hayek, it is the simplicity of the questions that physics tackles that makes these questions suitable for the methods of physics. Questions which do not exhibit this simplicity are, according to Hayek, unsuitable to be examined using the tools of physics. In agreement with, and elaboration on, Weaver, Hayek defines the complexity of systems to be dependent on “the minimum number of elements of which an instance of the pattern must consist in order to exhibit all the characteristic attributes of the class of pattern in question”3:25. As we move from simple physical inanimate systems that are amenable to investigation by physics’ methods, we progressively witness increasing degrees of organized complexity, and increasing numbers of irreducible relationships that cannot be abstracted away in any attempt to study or manage the system.
Here, it is useful to turn to the more recent literature on Complexity Studies, which provides insight into the issue of reductionism. Tamas Vicsek4:131 argues:
“Although it might sometimes not matter that details such as the motions of the billions of atoms dancing inside the sphere’s material are ignored, in other cases reductionism may lead to incorrect conclusions. In complex systems, we accept that processes that occur simultaneously on different scales or levels are important, and the intricate behaviour of the whole system depends on its units in a nontrivial way. Here, the description of the entire system’s behaviour requires a qualitatively new theory, because the laws that describe its behaviour are qualitatively different from those that govern its individual units.”
As these interrelations increase, any investigation of the system must account for all of them in order to study it accurately; all the data relevant to the question would need to be included in the analysis. As we move towards investigating complex social and economic systems, we are faced with two main problems that make such studies difficult. The first problem is the lack of data. Many of the important relations in complex systems do not have adequate data measuring them. This could in some instances be remedied with better data collection, but the real problem remains when one remembers that much of the data needed is simply unquantifiable and immeasurable. The second problem is the proliferation and unknowability of the real relations governing such complex phenomena. With many interrelated factors and variables, it can be impossible to determine what the actual relations between different variables are, and how they influence each other. Modeling these relations accurately is not possible unless one can know them precisely.
This understanding of complexity problems illuminates the disagreements in the LCA literature and why their results are so varied. In the quest to find the environmental effect of biofuels utilization, studies are unable to define all the factors that matter for biofuels production, or to specify all the interrelations that tie these factors together. Different studies choose to emphasize different factors and interrelations, and as a result different results emerge. None of these studies has come close to including all the factors and interrelations that matter, for such a task is impossible given the infinite number of transactions, agents, and knock-on effects involved. Further, the measurement of these factors and their interrelations continues to be dogged by uncertainty. In short, different studies arrive at starkly different results because they define different factors as being important, define their interrelations differently, and measure them differently.
Hayek anticipated the problems of LCAs in his analysis of the shortcomings of applying the methodology of science to social phenomena5:
“This brings me to the crucial issue. Unlike the position that exists in the physical sciences, in economics and other disciplines that deal with essentially complex phenomena, the aspects of the events to be accounted for about which we can get quantitative data are necessarily limited and may not include the important ones. While in the physical sciences it is generally assumed, probably with good reason, that any important factor which determines the observed events will itself be directly observable and measurable, in the study of such complex phenomena as the market, which depend on the actions of many individuals, all the circumstances which will determine the outcome of a process, for reasons which I shall explain later, will hardly ever be fully known or measurable. And while in the physical sciences the investigator will be able to measure what, on the basis of a prima facie theory, he thinks important, in the social sciences often that is treated as important which happens to be accessible to measurement. This is sometimes carried to the point where it is demanded that our theories must be formulated in such terms that they refer only to measurable magnitudes.”
Dynamic economic analysis and prices
As discussed above, Delucchi’s point about the need to take account of the effect of prices necessitates a comprehensive analysis of the dynamic economic impacts of different policies. Static and partial-equilibrium analysis will not suffice in a large complex system in which actions have significant knock-on effects. As LCA authors agree, there is a need for LCAs to calculate the impacts of economic actions across the economy. An understanding of the dynamics of an economy is instructive for understanding this type of problem. To do so, we turn to an analysis of the coordinating mechanism of a market economy: the price mechanism.
The price mechanism is the naturally emergent way of coordinating exchange. The scarcity and abundance of different goods is reflected in their relative prices to one another. The price emerges to coordinate the production and consumption of all goods relative to one another. The price mechanism is the answer to the economic calculation problem faced by individuals and societies, as explained by the Austrian economists. Ludwig von Mises states three main virtues of the price mechanism. Firstly, it allows the valuations of all individuals taking part in trade to enter the calculation. Secondly, it allows people to compare the profitability of their methods of production to those of others. And thirdly, the use of money prices allows values to be reduced to a common unit6:12.
In Economic Calculation in the Socialist Commonwealth, Mises emphasizes the importance of private property rights to economic calculation. Mises states: “Who is to do the consuming and what is to be consumed by each is the crux of the problem of socialist distribution”7:4. Dispersed calculation allows each individual to measure the factors relevant to them and make decisions based upon them. Centralized calculation, however, needs to take into account the totality of all relevant factors, and will naturally be unable to determine which of them matter to which individual. The problem is magnified when one considers decisions concerning higher order goods, or capital.
“Moreover, the mind of one man alone—be it ever so cunning, is too weak to grasp the importance of any single one among the countlessly many goods of a higher order. No single man can ever master all the possibilities of production, innumerable as they are, as to be in a position to make straightway evident judgments of value without the aid of some system of computation” 7:12.
Only through monetary calculation carried out by individuals owning their own capital can production decisions be successful and a complex economic system function. It will be useful to refer to economic calculation carried out by individuals for the use of their private property as situated calculation, to differentiate it from centralized calculation. Mises adds:
“It is an illusion to imagine that in a socialist state calculation in natura can take the place of monetary calculation. Calculation in natura, in an economy without exchange, can embrace consumption goods only; it completely fails when it comes to dealing with goods of a higher order. And as soon as one gives up the conception of a freely established monetary price for goods of a higher order, rational production becomes completely impossible. Every step that takes us away from private ownership of the means of production and from the use of money also takes us away from rational economics”7:13.
While Mises in the 1920s emphasized the essential nature of situated calculation for the functioning of a market economy, Hayek in the 1930s moved on to discuss the importance of the price mechanism for coordinating dispersed knowledge that is not available to any central party. Hayek writes in The Use of Knowledge in Society5:20:
“The economic problem of society is thus not merely a problem of how to allocate “given” resources—if “given” is taken to mean given to a single mind which deliberately solves the problem set by these “data.” It is rather a problem of how to secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know. Or, to put it briefly, it is a problem of the utilization of knowledge which is not given to anyone in its totality.”
Prices are the way that signals and information about products and markets are communicated from one individual to another, and in the process, decentralized decision-making is coordinated among all the dispersed individuals and their dispersed knowledge. Kirzner, building on the work of Mises and Hayek, emphasizes another important role for prices in stimulating entrepreneurial discoveries, arguing that “prices emerge in an open-ended context in which entrepreneurs must grapple with true Knightian uncertainty”6:15. This uncertainty is itself the stimulus for the discovery of new processes. Kirzner quotes Lavoie: the entrepreneur “does not treat prices as parameters out of his control but, on the contrary, represents the very causal force that moves prices in coordinating directions”6:15.
The central planner thus faces four intractable problems. First, he is not able to aggregate all the information from all the producers and consumers in order to find the ‘correct’ allocation. This problem is prohibitively complex: no central statistical board could accumulate all the correct information needed to make allocation decisions. The market for any product is very large and exhibits organized complexity. There are countless relations between different factors, variables and actors, and these interrelations cannot be understood completely and laid out clearly from any central viewpoint. The complexity of this system makes central calculation very hard.
But beyond the complexity of the market or social order, the second problem is that the knowledge of each small pocket within the complex social order is dispersed, and situated with the actors in their respective locations within the complex structure. The knowledge of production and consumption conditions is highly dispersed and cannot be accumulated by a single mind. Instead, every individual in the market possesses a small fragment of knowledge: that which is related to them. This is Hayek’s knowledge problem.
Thirdly, and related to the dispersed knowledge problem, is the problem of the subjectivism of the preferences and decisions of individuals in the market system. Even if a central planner were to obtain all the information needed to perform the calculations, the central planner cannot ascertain the subjective preferences of individuals, who are all unique in their preferences of consumption and production. Fourthly, the central planner needs to also somehow internalize in their decision-making the entrepreneurial activities that have not yet been undertaken by others, and attempt to make calculations on what does not even exist.
The problem of calculation in a market, then, is one where the calculation cannot be performed centrally, because knowledge is dispersed across a structure of organized complexity in which each individual possesses only the small part of knowledge pertaining to them, and because the preferences of an individual cannot possibly be communicated to a central planner completely. The striking insight from Hayek’s work, however, is that there is never a need for this information to be centralized. Each individual knows their own preferences and subjective valuations, and using the guiding light of the price mechanism and what it tells them about their choices and their costs, they are able to arrive at the decisions that suit them best, as evidenced by the prosperity of individuals living in free market economies. In due time, the position of Mises and Hayek in the socialist calculation debate would be borne out by real world events in the most accurate and tragic manner, with the complete economic collapse of all societies that employed economic systems based on centralized economic calculation.
A dynamic analysis of the economic impacts of an action, then, will need to internalize the different knowledge that different actors in a market have, and aggregate it into one large model of the market interaction. But this then will fall into the same intractable problem that faced the socialist economists of the interwar period. It was simply impossible for any central planner to devise calculations that accurately reflect and mirror the complexity of the price process. This is the problem that the more complex and sophisticated LCA studies encounter when attempting to quantify biofuels’ impacts in a lifecycle analysis. Different studies will have different pieces of knowledge and information incorporated and will therefore yield different results from the dynamic analysis.1
Agent-based vs. aggregate modeling
The solution to the aforementioned calculation problem in a dynamic economy is achieved through the price system, which, in effect, disperses and decentralizes the calculation problem to the individuals who have the knowledge relevant to their decisions, as well as the knowledge of their own preferences. By decentralizing this calculation, every individual in the economy is responsible for a small part of the giant spontaneous order of calculation that emerges from free exchange. Economic calculation is carried out in the location where the market exchanges happen, by the agent who carries out the exchange. This situated calculation works because the knowledge and the preferences relevant for the calculation are present with the actor carrying it out, where it needs to be carried out.
It might be helpful here to think of the economy as an infinitely large matrix of simultaneous equations that are instantaneously and continuously solved through the market decisions of each person. Every individual decision is a single equation within the infinite matrix. Their local knowledge and their subjective preferences are combined with the price signal every time the individual makes a choice on the market. The ‘solution’ of this large matrix is the economic arrangement that emerges as a result of people’s individual actions.
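One stylized way to write this metaphor down (a textbook-style sketch offered purely as an illustration; the argument here is precisely that no one could actually enumerate or solve such a system) is as a set of individual choice problems tied together by market-clearing conditions, where each person $i$ picks a bundle $x_i$ at prices $p$ given their endowment $\omega_i$:

\[
x_i^*(p) = \arg\max_{x_i} \, u_i(x_i) \quad \text{subject to} \quad p \cdot x_i \le p \cdot \omega_i, \qquad i = 1, \dots, N
\]
\[
\sum_{i=1}^{N} x_{ig}^*(p) = \sum_{i=1}^{N} \omega_{ig} \qquad \text{for every good } g
\]

Each equation of the first kind is ‘solved’ locally by one person using knowledge and preferences only that person possesses; the market-clearing conditions are solved by no one in particular, but are continuously approximated through exchange at prevailing prices.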
Aggregates-based modeling techniques like Dynamic Stochastic General Equilibrium modeling and Life-Cycle Analysis are attempts to abstract away from the real calculation that drives the market process, the individual situated calculation, by trying to establish scientific relationships between the aggregate outcomes of these processes and to measure their impacts. The problem that these methodologies invariably run into, and the reason they regularly produce erratic, divergent and inconsistent outcomes, is that they fail to study the actual relationships governing the market process, and instead focus on constructed relationships that do not exist in the real world but were devised to render the process amenable to study and analysis by the economist or engineer.
This conclusion is also reflected in Delucchi’s analysis of LCAs. Delucchi’s argument on the need to structure LCAs around policy-specific questions introduces a methodological difference in the structuring of the analysis of biofuels. Delucchi is effectively saying that aggregates-based modeling is inadequate because it does not provide us with the answers we need, nor is it built on analyzing the correct constituting relations between different factors. Delucchi’s proposed alternative of a policy-specific basis for the models is a micro-based analysis that attempts to find the relevant and necessary outcomes as consequences of specific actions.
This clarifies another reason LCAs continue to produce inconsistent and contradictory results. By choosing to measure and analyze aggregates, these studies construct artificial relations between constructed aggregate factors that do not exist in the real world and do not reflect reality.
Modeling technological advance
In many lifecycle assessments, large assumptions are made about the future course of technological advance in the production of a fuel.2 Predictions are made about the likely course of efficiency increases in the manufacturing processes of biofuels. This matter is an issue of dispute among different authors. The problem with such estimates is that they are built on the assumption that technological and technical advances are predictable and can be estimated.
Such presumptions rest on a rather mechanistic and linear model of technological advance, which presumes that advance is largely predictable and proceeds in an orderly manner. A more nuanced understanding of the nature of scientific and technological advance suggests that this confidence in predictability is misplaced. Nathan Rosenberg views technological and economic growth as the result of problem solving, technical inducement mechanisms, and learning-by-doing, not of some over-arching long-term plan in which scientific advances spur technological advances, as the linear model suggests. He emphasizes the close relation between scientific advance and technological innovation, and how the relationship often runs both ways, not only from scientific knowledge to technology.
In his study of engineering and technological advance, Walter Vincenti looked in depth at different engineering problems and came up with the variation-selection model of technological advance. By examining the process of landing gear development as it happened, Vincenti shows that the progression towards retractable airplane landing gear was far from an orderly, linear, logical process; it was rather a disorderly process in which many innovations were introduced, tried, tinkered with and eventually either discarded or utilized and built upon. To the historian looking back, hindsight bias portrays the process as an orderly progression, discards any contrary evidence, and presents it as though the right answer was known all along and it was just a matter of finding the technical and specific ways in which to reach it. But that was not the reality of the process. Several inventions were experimented with, and the outcome was far from pre-ordained.
The retractable gear, Vincenti8:21 argues, had a
“technical imperative in light of the large, overall increase in speed that a combination of advances would eventually open up… Designers in the early 1930s, however, lived in a world of small, progressive speed increments coming from loosely related changes in various components of the vehicle… The community of designers was feeling its way into the future in a state of knowledge in which engineering assessment was, at best, problematic. The technical imperative of the retractable gear is knowledge after the fact. We see the outcome; designers at the time, by their own testimony, did not foresee it.”
Looking at their day-to-day problems, designers introduced a wide variety of solutions to whose eventual impact they were unforesighted. With time, trials and experimentation, it became apparent that retractable landing gear was the most suitable technology, and it was then utilized.
This, according to Vincenti, conforms better with his variation-selection model than with any linear model of technological advance. He quotes Donald Campbell’s description of the model as one of “blind variation and selective retention”8:21, and though he agrees with it, offers a justification for using the term unforesighted rather than blind to describe the variation. The key is that innovators are not blind to the consequences of their innovations: “they see where they want to go and by what means they propose to get there. What they cannot do, if their idea is novel, is foresee with certainty whether it will work in the sense of meeting all the relevant requirements”8:21-22.
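The logic of the variation-selection model can be captured in a toy sketch (an illustration of our own, not anything Vincenti provides): candidate designs are proposed deliberately, but whether any of them meets the requirements is revealed only by trial, and only the survivors are retained and built upon.

```python
# Toy illustration of "unforesighted variation and selective retention".
# The success criterion is hidden from the designer, standing in for the fact
# that whether a novel design meets all requirements is only learned by trial.
import random

random.seed(1)

HIDDEN_REQUIREMENT = 0.8  # unknown to the designers in advance

def trial(design: float) -> bool:
    """Stand-in for real-world testing of a candidate design."""
    return design >= HIDDEN_REQUIREMENT

retained = []
for generation in range(5):
    # Variation: candidates are proposed with a goal in mind, but unforesighted
    # as to whether they will actually work.
    candidates = [random.random() for _ in range(10)]
    # Selection: only designs that survive trial are retained and built upon.
    retained.extend(d for d in candidates if trial(d))

print(f"{len(retained)} of 50 candidate designs survived trial")
```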
Philip Scranton9, in his analysis of the development of the Jet Engine, is critical of the linear vision of the progression of scientific and technological advance. Scranton argues that “at the level of design, testing and building, science provided next to no guidance for resolving critical jet engine problems; instead Edisonian, cut-and-try engineering paved the route to eventual success”9:130. Scranton further describes the process as: “an extravagantly intense and passionate project—conflict-filled and failure-prone, non-linear, non-rational, in ways even non-cumulative, and, of course, secret”9:130.
Finally, we can draw on the work of Karl Popper to further elucidate this point. Popper famously remarked that to predict the wheel is to invent it10. This illustrates precisely the unsolvable problem of knowledge prediction: if you know what you will know, then you already know it, and it is no longer a prediction. If you do not know it, then you cannot know that you will know it, or what it is.
Scientific discoveries, being discoveries, are discoveries of facts that were previously unknown. A fact is discovered, by definition, only at the point at which it is discovered. This is why predicting a discovery is a logical impossibility: to predict a discovery in its detail is to make it, which means that until it is made, it cannot have been predicted. In knowledge, discovery and prediction are the same thing.
From these examples of actual scientific and technological advance, the problem with the estimates of technological advance in the biofuels literature, and another reason for the disparate results, becomes apparent: the projections these studies make about future rates of advance in production cannot be considered robust and reliable, and they are bound to introduce errors into the estimates.
Nowhere is this more pronounced than in the many analyses of the efficiency of cellulosic ethanol production. Once one considers the unknowability of future scientific advance, one realizes the problems inherent in attempting to assess the environmental friendliness of production techniques that have not been invented yet, and whose very inception is not certain. In fact, a review of the history of the development of cellulosic ethanol shows why such analyses are misplaced by their very nature. As far back as 1980 one can find this statement in the USDA Yearbook of Agriculture11: “In 3 to 5 years, technology advances should occur that will allow the conversion of cellulosic materials, tree trimmings, old newspapers, crop residues, etc., to alcohol on an economic basis.”
One of the co-authors of these lines, Otto Doering, also co-authored this about cellulosic ethanol in 200812: “Currently, ethanol derived from corn kernels is the main biofuel in the United States, with ethanol from “cellulosic” plant sources (such as corn stalks and wheat straw, native grasses, and forest trimmings) expected to begin commercially within the next decade.”
Since the ‘energy crisis’ of the 1970s, biofuels researchers have touted cellulosic ethanol as the technology that will make biofuels a viable and significant contributor to the energy mix. The introduction of commercially produced cellulosic ethanol into the market has always been a few years away. The technological, technical and industrial advances have always been arriving “in 3 to 5 years” or “within the next decade”.
This informs a skeptical assessment of all the aforementioned studies that discuss the potential of biofuels. There are still countless technical, technological and industrial challenges to the introduction of cellulosic ethanol. The predictions of scientific advance that will overcome these challenges are all built on a simplistic linear view of scientific advance. Given the historical mismatch between past predictions of overcoming such challenges and the actual outcomes, one should be careful about using these estimates in efficiency studies.
This further helps explain the disparity in the results of different studies. When the production processes being modeled do not yet exist, serious discrepancies between different studies, depending on their projections, are to be expected.
Constructivist conception of energy systems
Finally, perhaps the most general problem of the current approach to assessing energy sources and energy policy lies in the conception of the energy system as a product of rationalist human design rather than an emergent product of human action. Vernon Smith makes a distinction between two types of rationality: constructivist rationality and ecological rationality. Smith defines constructivist rationality as the “deliberate use of reason to analyze and prescribe actions judged to be better than alternative feasible actions that might be chosen.” Ecological rationality, on the other hand, refers to “emergent order in the form of practices, norms, and evolving institutional rules governing actions … created by human interaction but not by conscious human design”13:2.
Constructivist rationality is what humans deliberately use when solving problems, choosing a course of action, designing machinery, inventing new technology, or trying to understand physical processes. It is what our brain learns to do through education. Constructivist rationality is what has produced the inventions, machines, devices and technological innovations that have improved our life. Ecological rationality, however, refers to order that exists without the direct reason of any individual designing it or implementing it, but is also not a natural system arising independently of human action. It emerges through countless individuals acting and interacting with each other. It is a product of “human action, not human design” as Adam Ferguson14:122 put it. It is an order whose details cannot be forecast or expected beforehand. After it emerges, however, it is at times possible to apply constructivist rationality in order to understand its properties and its process of emergence.
Smith13:38 maintains an evolutionary framework for understanding the emergence of ecologically rational systems:
“But in cultural and biological coevolution, order arises from mechanisms for generating variation to which is applied mechanisms for selection. Reason is good at providing variation, but it is far too narrowly limited and inflexible in its ability to comprehend and apply all the relevant facts in order to serve the process of selection, which is better left to ecological processes that implicitly weights more versus less important influences.”
Whereas constructivist rationality provides us with particular designs, it is an ecologically rational selection process, the result of the actions of many individuals, that produces the ecologically rational system employing some of these constructivist designs. Languages are a good example of ecological rationality; no single individual or planning committee sat down and devised a living language from scratch. Languages have evolved over time, through the actions and thoughts of countless individuals. The market system is another good example of ecological rationality: nobody designed the market system of a modern market economy, outlining and ordaining the format for the production and consumption of goods, their prices and their quantities. Instead, individuals act and, in their actions, shape the contours of the market system: they seek out their best interest, devise ideas for production and consumption, and try to cooperate with one another, and a spontaneously emergent, ecologically rational system is the outcome.
The energy markets of the United States or Europe were never designed by a central planner; they have always been the spontaneously emergent result of the actions of many individuals. Government policy has undoubtedly affected these actions immensely, but it has not directly drawn up the mix of the energy that is used. This is an important distinction that entails serious consequences for energy policy-making. The emergent aggregate phenomenon of the fuel mix is the product of many decisions carried out by many individuals and institutions. The attempt to plan these aggregates is fraught with difficulty because it methodologically fails to grapple with the micro foundations of the decisions that make up this emergent outcome.
The previous large attempt at planning energy illustrates this point. The US government’s plan to design the energy system of America in the early 1980s was a drastic and comprehensive failure. People’s choices and decisions were undoubtedly affected by the policies the government pursued, and the emergent order of the energy mix of America was certainly affected by the subsidies and regulations that the government implemented. But what emerged was drastically different from what the central planners had anticipated, designed or hoped for. Planners can make their plans and try to shape how order unfolds, but it is the actions of individuals that ultimately determine the shape of the ecologically rational outcome.
Lee, Ball and Tabors15 have published a comprehensive overview of this episode of energy planning, detailing how the US Government sought to promote five main sources of energy in the aftermath of the “energy crisis” of the late 1970s. These energy sources were: synfuels, photovoltaics, renewables, natural gas and nuclear energy. The following is a brief overview of the policy and fate of each of these:
Synfuels: The Synthetic Fuels Corporation was established to subsidize synthetic fuels, committing $17b, with the goal of producing 2 million barrels of synfuel per day by 1992. The program was scrapped after only $100m were spent. No commercial synfuel production has taken place.
Photovoltaics: PV entered a horse-race of competition for increasing efficiency, but it was not a competition to reach the market and succeed commercially so much as a competition for federal funding. The government created the market and dictated prices, quantities and timeframes. Photovoltaics failed commercially. Lee et al conclude: “The major portion of this blunder was assuming that it was possible, in effect, to dictate the supply-demand relationship in advance and that by having the government establish the market through forced, prestated quantity purchases, it would be possible to drive the price of the technology down”15:34. The second problem, for Lee et al, was the assumption that it was possible to predict the advancement of technology and the cost curve of the future15:78.
Renewables: Biofuels policies similar to the ones being used today were used back then. The one tangible result of these subsidies was massive wealth transfer to corn farmers and big agricultural companies.
Natural gas: The Fuel Use Act provision was put in place to dictate what were legitimate and illegitimate (legal and illegal) uses of natural gas, leading gas to become an energy source that was “too valuable to burn”, according to Lee et al. The result was that this law hampered the development of natural gas as an energy source. Only when these interventions were repealed did natural gas become a more significant energy source.
Nuclear energy: Lee et al. call nuclear energy policy in America “an outstanding example of what not to do following achievement of unquestionable scientific and technological leadership in a critical field”15.
The end result of these programs was a resounding failure on all stated levels. This is not so much a result of the failure of these particular plans as of the failure of the very idea of making such plans. The important lesson here is not a limited one about the viability of synfuels or any other energy source; it is rather that an emergent phenomenon like the energy mix cannot be designed willfully using constructivist rationalist methods.
Implications and conclusions
The consequences of biofuels-promoting policies have been discussed in depth in another paper16, and will only be summarized here: a likely increase in deforestation and greenhouse gas emissions, increased fossil fuel consumption to produce expensive fuels, a rise in food prices, extinction of species from wild habitats destroyed to plant energy crops, damage to local ecosystems, and a large cost to taxpayers. This multitude of negative unintended consequences echoes (but to a lesser extent) the disastrous outcomes of centralized economic planning based on centralized economic calculation carried out by socialist economies in the twentieth century. By attempting to apply the tools of constructivist rationality to spontaneously emergent order shaped by human actions, and not human design, policy-makers in both situations have created real-world consequences unforeseeable with their calculation tools.
A previous paper16 illustrated the large disparities in the results of scientific studies on the efficiency of biofuels as a mechanism for reducing greenhouse gas emissions and fossil fuel consumption, concluding that there is no scientific consensus on these questions, in spite of hundreds of researchers and studies tackling them over more than three decades. This lack of consensus can be explained by realizing that the question of biofuels’ suitability is not a technical scientific question that can be analyzed with the tools of the natural sciences to obtain certain answers. Rather, biofuels’ suitability is a complex, social, dynamic question determined through real world experimentation, trial and error. The complex nature of fuel markets, the dynamic nature of markets and the dispersed knowledge they contain, as well as the indispensable role of the price mechanism in achieving decentralized coordination of economic activity, imply that centralized theoretical calculations are not even wrong; they are inapplicable in this domain. The uncertain nature of scientific advance, and the fact that energy systems are ecologically rational and not the result of constructivist rational design, invite us to rethink the rationale for making the adoption of a specific form of energy a goal of government policy altogether. The unintended consequences of doing so, as illustrated in several cases over time, could exceed any benefits. There is no clear benefit nor reliable method for government policy aiming to ‘pick winners’ between different energy sources, which will always emerge as the result of human action, and not human design. Policy is likely to be more effective if it targets the desired outputs themselves (such as reduced emissions) rather than attempting to construct the energy system to achieve these goals.