Deflation healthy sign of productivity, saving, adding value, liberty

Actual price levels, and price levels with normal deflation, 1948-1976, showing deflation healthy sign

Actual price levels, and price levels with normal deflation, 1948-1976 [1:12]

Deflation healthy sign overcome by government

…western monetary institutions in the era of the classical gold standard were far from being perfect. Governments frequently intervened in the production of money through price control schemes, which they camouflaged with the pompous name of “bimetallism.” They actively promoted fractional-reserve banking, which promised ever-new funds for the public treasury. And they promoted the emergence of central banking through special monopoly charters for a few privileged banks.[2:18]

A paper money system… merely distributes the existing resources in a different manner; some people gain, others lose. It is a system that makes banks and financial markets vulnerable, because it induces them to economize on the essential safety valves of business: cash and equity. Why hold any substantial cash balances if the central bank stands ready to lend you any amount that might be needed, at a moment’s notice? Why use your own money if you can finance your investments with cheap credit from the printing press?[2:7]

Paper money has caused an unprecedented increase of debt on all levels: government, corporate, and individual. It has financed the growth of the state on all levels, federal, state, and local. It thus has become the technical foundation for the totalitarian menace of our days.[2:21]

…in practice, there are at any point in time two, and only two, fundamental options for monetary policy. The first option is to increase the quantity of paper money. The second option is not to increase the paper money supply. Now the question is how well each of these options harmonizes with the basic principles on which a free society is built.[2:29]

Inflation is an unjustifiable redistribution of income in favor of those who receive the new money and money titles first, and to the detriment of those who receive them last. In practice the redistribution always works out in favor of the fiat-money producers themselves (whom we misleadingly call “central banks”) and of their partners in the banking sector and at the stock exchange. The rich stay rich (longer) and the poor stay poor (longer) than they would in a free society.[2:33-34]

Deflation healthy sign of productivity growing

…throughout modern history, improvements in aggregate productivity have overshadowed occasional setbacks. According to one widely-used estimate, from 1948 to 1976 total factor productivity in the US grew by an average annual rate of 2 per cent. Had a (total factor) productivity norm been in effect during this time, US consumer prices in 1976 would on average have been roughly half as high as they were just after the Second World War.[1:11]
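The arithmetic behind that claim is easy to check; a minimal sketch, using only the 2 per cent rate and the 1948-1976 span quoted above:

```python
# Under a total-factor-productivity norm, prices fall at the rate
# productivity grows. With 2% annual TFP growth from 1948 to 1976:
growth_rate = 0.02
years = 1976 - 1948  # 28 years

# Price level in 1976 relative to 1948 (1948 = 1.0)
relative_price_level = 1 / (1 + growth_rate) ** years
print(f"1976 prices at {relative_price_level:.0%} of the 1948 level")
```

This works out to about 57 per cent of the 1948 level, consistent with "roughly half as high."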

Deflation healthy sign because saving pays off

Imagine if all prices were to drop tomorrow by 50 percent. Would this affect our ability to feed, clothe, shelter, and transport ourselves? It would not, because the disappearance of money is not paralleled by a disappearance of the physical structure of production. In a very dramatic deflation, there is much less money around than there used to be, and thus we cannot sell our products and services at the same money prices as before. But our tools, our machines, the streets, the cars and trucks, our crops and our food supplies—all this is still in place. And thus we can go on producing, and even producing profitably, because profit does not depend on the level of money prices at which we sell, but on the difference between the prices at which we sell and the prices at which we buy. In a deflation, both sets of prices drop, and as a consequence for-profit production can go on.

There is only one fundamental change that deflation brings about. It radically modifies the structure of ownership. Firms financed with credit go bankrupt because at the lower level of prices they can no longer pay back the credits they had incurred without anticipating the deflation. Private households with mortgages and other considerable debts go bankrupt, because with the decline of money prices their monetary income declines too, whereas their debts remain at the nominal level.

…bankruptcies—irrespective of how many individuals are involved—do not affect the real wealth of the nation, and in particular they do not prevent the successful continuation of production. The point is that other people will run the firms and own the houses—people who at the time the deflation set in were out of debt and had cash in their hands to buy firms and real estate. These new owners can run the firms profitably at the much lower level of selling prices because they bought the stock, and will buy other factors of production, at lower prices too.[2:25-27]

Deflation healthy sign because adding value pays off

…deflation… stops inflation and destroys the institutions that produce inflation. It abolishes the advantage that inflation-based debt finance enjoys, at the margin, over savings-based equity finance. And it therefore decentralizes financial decision-making and makes banks, firms, and individuals more prudent and self-reliant than they would have been under inflation.

Deflation healthy sign that liberty is secure

…deflation eradicates the re-channeling of incomes that result from the monopoly privileges of central banks. It thus destroys the economic basis of the false elites and obliges them to become true elites rather quickly, or abdicate and make way for new entrepreneurs and other social leaders.[2:40]

Deflation puts a brake—at the very least a temporary brake—on the further concentration and consolidation of power in the hands of the federal government… It dampens the growth of the welfare state, if it does not lead to its outright implosion.

… deflation is at least potentially a great liberating force. It… brings the entire society back in touch with the real world, because it destroys the economic basis of the social engineers, spin doctors, and brain washers.

…if our purpose is… to restore… a free society, then deflation is the only acceptable monetary policy.[2:41]

  1. Selgin, George. Less Than Zero: The Case for a Falling Price Level in a Growing Economy. The Institute of Economic Affairs, 1997.
  2. Hülsmann, Jörg Guido. Deflation and liberty. Ludwig von Mises Institute, 2008.

Process response depends on integration degree, delays & lags

Integration degree, delays & lags determine process response

Process response depends on process variable’s self-regulation

An indication of the ease with which a process may be controlled can be obtained by plotting the process reaction curve. This curve is constructed after having first stabilized the process temperature under manual control and then making a nominal change in heat input to the process, such as 10%. A temperature recorder then can be used to plot the temperature versus time curve of this change.[1]
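A process reaction curve of this kind can be sketched numerically. The sketch below assumes a first-order process with dead time; the gain, time constant, and dead-time values are illustrative, not taken from the handbook:

```python
# Simulate the open-loop response of a first-order process with dead time
# to a 10% step in heat input, i.e. the process reaction curve.
K = 2.0        # process gain, degrees per % heat input (assumed)
tau = 5.0      # time constant, minutes (assumed)
theta = 1.0    # dead time, minutes (assumed)
dt = 0.05
step = 10.0    # nominal 10% change in heat input

t, T = 0.0, 0.0          # time, temperature deviation from initial steady state
curve = []
while t <= 25.0:
    u = step if t >= theta else 0.0   # input change arrives after the dead time
    T += dt * (K * u - T) / tau       # first-order lag
    curve.append((t, T))
    t += dt

print(f"final temperature rise: {curve[-1][1]:.1f} degrees (expected {K*step:.1f})")
```

Plotting `curve` gives the temperature-versus-time record that the handbook describes obtaining from a temperature recorder.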

Process response of self-regulating process variable vs. time

Process response of self-regulating process variable vs. time:
vessel temperature after heating increase

Process response of integrating process variable vs. time

Process response of integrating process variable vs. time:
vessel level after inflow increase [2]

Another phenomenon associated with a process or system is identified as the steady-state transfer-function characteristic. Since many processes are nonlinear, equal increments of heat input do not necessarily produce equal increments in temperature rise. The characteristic transfer-function curve for a process is generated by plotting temperature against heat input under constant heat input conditions. Each point on the curve represents the temperature under stabilized conditions, as opposed to the reaction curve, which represents the temperature under dynamic conditions. For most processes this will not be a straight-line, or linear, function.

Temperature gain vs. heat input of endothermic process illustrates factor in process response

Temperature gain vs. heat input of endothermic process:
heating a liquid till it boils and absorbs more heat

As the temperature increases, the slope of the tangent line to the curve has a tendency to decrease. This usually occurs because of increased losses through convection and radiation as the temperature increases. This process gain at any temperature is the slope of the transfer function at that temperature. A steep slope (high ΔT/ΔH) is a high gain; a low slope (low ΔT/ΔH) is a low gain.
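The falling gain can be illustrated with a saturating steady-state transfer function. The curve below is a made-up endothermic example, not data from the handbook:

```python
import math

# Illustrative steady-state transfer function: temperature rises with heat
# input but saturates as convection and radiation losses grow.
def steady_state_temp(H):
    return 50.0 + 30.0 * (1.0 - math.exp(-H / 40.0))  # degrees vs % heat input

def process_gain(H, dH=0.1):
    """Slope of the transfer function (delta-T / delta-H) at heat input H."""
    return (steady_state_temp(H + dH) - steady_state_temp(H)) / dH

print(f"gain at 10% heat input: {process_gain(10):.3f} deg/%")
print(f"gain at 80% heat input: {process_gain(80):.3f} deg/%")
```

The gain at high heat input (and hence high temperature) is much lower than at low heat input, matching the flattening tangent line described above.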

Temperature gain vs. heat input of exothermic process illustrates factor in process response

Temperature gain vs. heat input of exothermic process:
heating a mixture of reactants till their reaction gives off reaction heat;
melting a plastic till its flow gives off frictional heat

This curve follows the endothermic curve up to the temperature level D. At this point the process has the ability to begin generating some heat of its own. The slope of the curve from this point on increases rapidly and may even reverse if the process has the ability to generate more heat than it loses. This is a negative gain since the slope ΔT/ΔH is negative. This situation would actually require a negative heat input, or cooling action.

Process response depends on process variable’s dead time delays & capacity lags

Temperature response vs. time of single capacity illustrates process response

Temperature response vs. time of single capacity:
vessel-temperature lag

Temperature response vs. time of single capacity with dead time illustrates process response

Temperature response vs. time of single capacity with dead time:
vessel-temperature lag with hot-water piping delay

Temperature response vs. time of two capacities illustrates process response

Temperature response vs. time of two capacities:
vessel-temperature lag and vessel-wall lag

Temperature response vs. time of three capacities illustrates process response

Temperature response vs. time of three capacities:
vessel-temperature lag, vessel-wall lag, and thermowell-wall lag

Two characteristics of these curves affect the process controllability: (1) the time interval before the temperature reaches the maximum rate of change, A, and (2) the slope of the maximum rate of change of the temperature after the change in heat input has occurred, B. The process controllability decreases as the product of A and B increases. Such increases in the product AB appear as an increasingly pronounced S-shaped curve on the graph.

The time interval A is caused by dead time, which is defined as the time between changes in heat input and the measurement of a perceptible temperature increase. The dead time includes two components: (1) propagation delay (material flow velocity delay) and (2) exponential lag (process thermal time constants).
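Given a sampled reaction curve, A and B can be estimated with the classic tangent construction. The two-capacity response below uses assumed time constants purely for illustration:

```python
import math

# Two-capacity (S-shaped) unit step response with assumed lags tau1, tau2.
tau1, tau2 = 5.0, 2.0
def response(t):
    return 1.0 - (tau1 * math.exp(-t / tau1) - tau2 * math.exp(-t / tau2)) / (tau1 - tau2)

dt = 0.01
ts = [i * dt for i in range(3000)]
ys = [response(t) for t in ts]

# B: maximum slope, i.e. the maximum rate of change of the response
slopes = [(ys[i + 1] - ys[i]) / dt for i in range(len(ys) - 1)]
i_max = max(range(len(slopes)), key=lambda i: slopes[i])
B = slopes[i_max]

# A: apparent dead time, where the tangent drawn at the steepest point
# crosses the initial value of the response
A = ts[i_max] - ys[i_max] / B

print(f"A = {A:.2f}, B = {B:.3f}, controllability index A*B = {A*B:.3f}")
```

A larger A·B product corresponds to a more pronounced S shape and a harder-to-control process.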

Temperature responses vs. time illustrate representative shapes of process response

Temperature responses vs. time of temperatures I, II, III, and IV

The maximum rate of temperature rise is shown by the dashed lines which are tangent to the curves.

The tangents become progressively steeper from I to IV. The time interval before the temperature reaches the maximum rate of rise also becomes progressively greater from I to IV.

As the S curve becomes steeper, the controllability of the process becomes increasingly more difficult. As the product of the two values of time interval A and maximum rate B increases, the process controllability goes from easy (I) to very difficult (IV). Response curve IV, the most difficult process to control, has the most pronounced S shape.[1]

  1. Stevenson, John. “Control Principles.” Process/industrial instruments and controls handbook, 5th ed., edited by Gregory K. McMillan, McGraw-Hill, 1999, pp. 2.4-2.30.
  2. Shinskey, F. Greg. “Fundamentals of Process Dynamics and Control.” Perry’s chemical engineers’ handbook, 8th ed., edited by Don W. Green, McGraw-Hill, 2008, pp. 8-5 – 8-19.

Cognitive therapy neural networks are increasingly well known

Cognitive therapy neural networks are changed bottom-up in antidepressant therapy and top-down in cognitive therapy

Hypothetical time course of the changes to amygdala and prefrontal function that are associated with antidepressant medication and cognitive therapy, illustrating major cognitive therapy neural networks

Hypothetical time course of the changes to amygdala and prefrontal function that are associated with antidepressant medication and cognitive therapy.

a | During acute depression, amygdala activity is increased (red) and prefrontal activity is decreased (blue) relative to activity in these regions in healthy individuals.

b | Cognitive therapy (CT) effectively exercises the prefrontal cortex (PFC), yielding increased inhibitory function of this region.

c | Antidepressant medication (ADM) targets amygdala function more directly, decreasing its activity.

d | After ADM or CT, amygdala function is decreased and prefrontal function is increased. The double-headed arrow between the amygdala and the PFC represents the bidirectional homeostatic influences that are believed to operate in healthy individuals.[1]

Cognitive therapy neural networks –
– work together (along the black lines) to produce depressed symptoms
– feed back the results (along the gray line) to generate depressed symptoms in the future

Information processing in the cognitive model of depression illustrates cognitive therapy neural networks, showing feedback loops

Information processing in the cognitive model of depression.

  • Activation of depressive self-referential schemas by environmental triggers in a vulnerable individual is both the initial and penultimate element of the cognitive model.
  • The initial activation of a schema triggers biased attention, biased processing and biased memory for emotional internal or external stimuli.
  • As a result, incoming information is filtered so that schema-consistent elements in the environment are over-represented.
  • The resulting presence of depressive symptoms then reinforces the self-referential schema (shown by a grey arrow), which further strengthens the individual’s belief in its depressive elements.
  • This sequence triggers the onset and then maintenance of depressive symptoms.[2]

Untreated cognitive therapy neural networks take negative schema information and fan it out, and add in overgeneral negative information

Cognitive functioning in a healthy individual vs. in a depressed individual illustrates functionality in major cognitive therapy neural networks

Cognitive functioning in a healthy (a) or depressed (b) individual.

  • In a depressed individual, a negative self-schema and an over-general mode of processing concur to automatically prime and activate information that is congruent with the negative self-schema, via a cognitive interlock (resulting in rumination), biased memory and attention.
  • In a healthy individual, a concrete mode of processing counteracts these automatic activations.

Cognitive therapy neural networks information flow (in the diagrams above) maps directly to neural regions (in the pictures below)

Brain networks involved in various cognitive functions of cognitive therapy neural networks

Brain networks involved in
(a) self-referential processes and rumination,
(b) cognitive interlock and mood congruent processing,
(c) episodic buffer,
(d) attention bias,
(e) memory bias,
(f) overgeneral processing.

dmPC: dorsomedial prefrontal cortex,
vmPFC: ventromedial prefrontal cortex,
mPFC: medial prefrontal cortex,
iPFC: inferior prefrontal cortex,
mOFC: medial orbitofrontal cortex,
aOFC: anterior orbitofrontal cortex,
dlPFC: dorsolateral prefrontal cortex,
aITC: anterior inferotemporal cortex,
STG: superior temporal gyrus,
AnG: angular gyrus,
Ins: insula,
ACC: anterior cingulate cortex,
PCC: posterior cingulate cortex,
PCun: precuneus,
Rsp: retrosplenial cortex,
dmTh: dorsomedial thalamus,
HPC: hippocampus,
Amy: amygdala,
Hab: habenula,
Acc: nucleus accumbens,
Cd: caudate,
Pu: putamen,
Re: nucleus reuniens,
DG: dentate gyrus of the hippocampus.[3]

  1. DeRubeis, Robert J., Greg J. Siegle, and Steven D. Hollon. “Cognitive therapy versus medication for depression: treatment outcomes and neural mechanisms.” Nature Reviews Neuroscience 9.10 (2008): 788-796.
  2. Disner, Seth G., et al. “Neural mechanisms of the cognitive model of depression.” Nature Reviews Neuroscience 12.8 (2011): 467-477.
  3. Belzung, Catherine, Paul Willner, and Pierre Philippot. “Depression: from psychopathology to pathophysiology.” Current opinion in neurobiology 30 (2015): 24-30.

Money control by government people denies us money self-regulation by business people

US consumer price index 1800-2016 shows effect of money control by government people after 1913

US Consumer Price Index for 1800-2016,[1]
scaled so 1913 CPI = 100

Under private control before 1913, prices fell between wars; a dollar bought more. Under government control since 1913, prices have shot upward; a dollar buys less.

Money control by government people is not natural

Money is a government monopoly.

If there is no incentive to please consumers, the products or services monopolies sell tend to be more expensive and of lower quality than would be the case under competition, where a number of firms are free to compete for the consumer’s business.

Natural rights theorists argue that individual rights are being violated if government, which is supposed to protect property and political rights, prevents individuals from entering a line of business.

Money is not a natural government monopoly. It came into existence spontaneously out of the market process. Gold and silver coins, and promises to pay in gold and silver, have served consumers well for hundreds of years. From 1839 until the advent of the Federal Reserve Bank in the United States in 1913, most money in circulation consisted of privately issued bank notes…

Money control by government people is unstable and does us outsized harm

“The economy has been much less stable since the establishment of the Fed restored a government monopoly in currency issuance. The wholesale price index in 1913 [the year the Federal Reserve Board came into existence] was 87 percent of what it had been 73 years earlier… By contrast, in the 73 years since the founding of the Fed, the wholesale price index has risen to 825 percent of its 1913 level.”

“What we should have learned is that monetary policy is much more likely to be a cause than a cure of depressions, because it is much easier, by giving in to the clamour for cheap money, to cause those misdirections of production that make a later reaction inevitable, than to assist the economy in extricating itself from the consequences of overdeveloping in particular directions. The past instability of the market economy is the consequence of the exclusion of the most important regulator of the market mechanism, money, from itself being regulated by the market process.

“Just as the absence of competition has prevented the monopolist supplier of money from being subject to a salutary discipline, the power over money has also relieved governments of the necessity to keep their expenditure within their revenue…There can be little doubt that the spectacular increase in government expenditure over the last 30 years, with governments in some Western countries claiming up to half or more of the national income for collective purposes, was made possible by government control of the issue of money.”

Government control over the money supply increases centralization, which tends to reduce individual freedom. Asserting economic control over the individual also runs the risk of political control, since economic and political freedom are intertwined.

Money control won’t be given up easily by Progressive government people

One way to minimize this excessive control over the individual is to prohibit government from controlling the money supply. “One of the most effective measures for protecting the freedom of the individual might indeed be to have constitutions prohibiting all peacetime restrictions on transactions in any kind of money or the precious metals.”

Unfortunately, constitutions have a tendency to get twisted beyond recognition by the courts, at least that has been the American experience, so having a constitutional provision is not a perfect solution either. But it is better than nothing.

The long-run solution seems to be to phase out government money as private money takes its place. Once government money is abolished there will be less pressure to restore the government’s monopoly position.

Money control by business people would be self-regulated by producers and consumers

Initially, the introduction of private money would be slow. However, over time, the money that holds its value best would be preferred over money that depreciates in value — the exact reverse of Gresham’s Law, which holds that bad money drives good money out of circulation. Gresham’s Law only holds true when government sets fixed exchange rates between or among competing currencies.

Thus, money would act just like other goods and services. Consumers would shop for the brand that best serves their needs. They would prefer stable currencies to unstable ones, and the market would weed out the bad from the good.[2]

  1. Consumer Price Index (Estimate), Accessed 25 Mar. 2017.
  2. McGee, Robert W. “The Case for Privatizing Money.” The Asian Economic Review 30.2 (Aug. 1988): 258-273.

Profit option signals are generated by activity-based costing (ABC)

The hierarchy of factory operating expenses shows profit option signals

The hierarchy of factory operating expenses

Profit option signals from allocating costs to where value is added

The gross numbers on corporate financial statements… represent the aggregation of thousands of small stories about how the company designed, produced, and delivered its products, served customers, and developed and maintained brands.

  • Some activities, like drilling a hole or machining a surface, are performed on individual units.
  • Others – setups, material movements, and first part inspections – allow batches of units to be processed.
  • Still others – engineering product specifications, process engineering, product enhancements, and engineering change notices – provide the overall capability that enables the company to produce the product.
  • And plant management, building and grounds maintenance, and heating and lighting sustain the manufacturing facility.

…managers need to distinguish the expenses of direct labor, direct materials, and electricity, which are consumed at the unit level, from the expenses of resources used to process batches or to support a product or a facility. Batch- and product-level expenses can be controlled only by modifying batch- and product-level activities.

Profit option signals from unit, batch, product, and facility costs

The example of a large equipment manufacturer with a machining shop containing dozens of numerically controlled machine tools shows the important distinction in emphasis between traditional cost systems and ABC analysis.

A detailed ABC analysis revealed that more than 40% of the department’s support resources were not used to produce individual product units. The company developed five new drivers of overhead resources:

  1. setup time,
  2. production runs,
  3. materials movements,
  4. active parts numbers maintenance, and
  5. facility management.

The first three related to how many batches were produced, the fourth to the number of different types of products produced, and the fifth to the facility as a whole rather than to individual products.

For a simple drive shaft, for example, the traditional system had allocated $13.38 of factory overhead to every 100 units. For the 8,000 units actually produced, the allocated overhead costs were $1,070. In contrast, the ABC system signaled that production of the shaft consumed about $1,700 of unit, batch, and product-sustaining support resources.
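The drive-shaft comparison works out as follows. The per-unit arithmetic simply restates the numbers above; the $1,700 ABC figure is taken from the article as given:

```python
# Traditional allocation: $13.38 of factory overhead per 100 units.
overhead_per_100 = 13.38
units_produced = 8_000

traditional_allocation = overhead_per_100 * units_produced / 100
print(f"traditional overhead allocation: ${traditional_allocation:,.0f}")

# ABC analysis (from the article): actual unit-, batch-, and
# product-sustaining resources consumed were about $1,700.
abc_cost = 1_700
understatement = abc_cost - traditional_allocation
print(f"costs understated by roughly ${understatement:,.0f}")
```

The traditional system thus understated the shaft’s true resource consumption by roughly $630, more than a third of its actual cost.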

The heavy equipment manufacturer in our example recognized that its low-volume products were a drag on profits. To avoid outsourcing all of the low-volume products, the division opened a special low-value-added job shop.

It went from a single facility producing a broad mix of products to two focused facilities: one for high-volume products and the other for low-volume products.

Profit option signals guide repricing, resource saving, and resource management

Managers should take two types of actions after an ABC analysis.

First, they should attempt to reprice products: raise prices for products that make heavy demands on support resources and lower prices to more competitive levels for the high-volume products that had been subsidizing the others.

Second, and more important, managers should search for ways to reduce resource consumption. Reducing resource consumption gives managers an opportunity to boost profits.

…management can use the freed-up resources to increase output, which in turn generates more revenues. …management can eliminate or redeploy resources periodically to bring spending down to the new lower levels of resource consumption.

…management must take some action to capture the benefits from the signals ABC analysis sends.

  1. Cooper, Robin, and Robert S. Kaplan. “Profit Priorities from Activity-Based Costing.” Harvard Business Review 69.3 (1991): 130-135.

PID controller responds to error, to error footprint, and to projected change

Closed-loop response of process using SIMC tunings shows how PI controller responds and PID controller responds

The top graph shows the measured process variable (the process’s output); the bottom graph shows the controller output (the process’s input). The setpoint is changed at t=0, and the external process load is changed at t=10. The PID control’s D-action is on the process variable only, not on the setpoint. The PID control action is fast and accurate; the PI-only actions keep the valve movements smaller.[1]

PID controller responds to diverse needs

…proportional-integral-derivative (PID) is by far the dominant feedback control algorithm.[2]

PID controllers are found in large numbers in all industries. The PID controller is a key part of systems for motor control. They are found in systems as diverse as CD and DVD players, cruise control for cars, and atomic force microscopes. The PID controller is an important ingredient of distributed systems for process control.[3]

There are approximately three million regulatory controllers in the continuous process industries…

Based on a survey of… controllers in the refining, chemicals and pulp and paper industries… 97% of regulatory controllers utilize a PID feedback control algorithm.[2]

Many sophisticated control strategies, such as model predictive control, are also organized hierarchically. PID control is used at the lowest level; the multivariable controller gives the set points to the controllers at the lower level.

The PID controller can thus be said to be the “bread and butter” of control engineering.[3]

PID controller responds to error with proportional action

PID controllers are defined by the control algorithm, which generates an output based on the difference between setpoint and process variable (PV). That difference is called the error…

…the most basic controller would be a proportional controller. The error is multiplied by a proportional gain and that result is the new output.

When the error does not change, there is no change in output. This results in an offset for any load beyond the original load for which the controller was tuned. A home heating system might be set to control the temperature at 68˚F. During a cold night, the output when the error is zero might be 70%. But during a sunny afternoon that is not as cold, the output would still be 70% at zero error. But since not as much heating is required, the temperature would rise above 68˚F. This results in a permanent offset.
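The offset can be reproduced with a small simulation. The heater gain, ambient temperatures, and time constant below are all invented for illustration; only the 68˚F setpoint and 70% bias come from the example above:

```python
# Proportional-only control of a simple heating process:
# dT/dt = (T_ambient + K_h * output - T) / tau
K_h = 18.0 / 70.0                  # heater gain (deg/%), chosen so 70% output
                                   # holds 68 F when ambient is 50 F (cold night)
tau, dt = 10.0, 0.5                # process lag and simulation step (assumed)
setpoint, Kp = 68.0, 2.0
u_bias = 70.0                      # output that gave zero error on the cold night

T_ambient = 55.0                   # warmer, sunny afternoon
T = 68.0
for _ in range(400):
    error = setpoint - T
    output = u_bias + Kp * error   # proportional action only
    T += dt * (T_ambient + K_h * output - T) / tau

print(f"settled temperature: {T:.2f} F (offset {T - setpoint:+.2f} F)")
```

With less heating demand, the loop settles a few degrees above 68˚F and stays there: the permanent offset of proportional-only control.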

PID controller responds to error footprint with integral action

Integral action overcomes the offset by calculating the integral of the error, a measure of the error’s persistence over time.

This action drives the controller error to zero by continuing to adjust the controller output after the proportional action is complete. (In reality, these two actions are working in tandem.)

PID controller responds to projected change with derivative action

And finally, there is a derivative term that considers the rate of change of the error. It provides a “kick” to a process where the error is changing quickly…

Derivative action is sensitive to noise in the error, which magnifies the rate of change, even when the error isn’t really changing. For that reason, derivative action is rarely used on noisy processes and if it is needed, then filtering of the PV is recommended.

Since a setpoint change can look to the controller like an infinite rate of change, while processes usually change more slowly, many controllers have an option to disable derivative action on setpoint changes: instead of multiplying the rate of change of the error by the derivative term, they multiply the rate of change of the PV.

Derivative is not often required, but can be helpful in processes that can be modelled as multiple capacities or second order.[4]
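A minimal positional PID, with derivative applied to the PV as described above, might look like the sketch below. The process model and tuning values are assumed for illustration, not taken from the article:

```python
# Positional PID with derivative on the process variable (PV), driving a
# simple first-order heating process: dT/dt = (T_ambient + K_h*u - T)/tau.
Kp, Ki, Kd = 2.0, 0.2, 1.0         # assumed tuning
K_h, tau, dt = 0.4, 10.0, 0.5      # assumed heater gain, process lag, step
setpoint = 68.0

T, T_prev, integral = 60.0, 60.0, 0.0
T_ambient = 55.0
for _ in range(3000):
    error = setpoint - T
    integral += error * dt
    derivative = (T - T_prev) / dt          # rate of change of PV, not of error
    T_prev = T
    output = Kp * error + Ki * integral - Kd * derivative
    output = max(0.0, min(100.0, output))   # clamp to valve limits
    T += dt * (T_ambient + K_h * output - T) / tau

print(f"settled temperature: {T:.2f} F (error {setpoint - T:+.3f} F)")
```

The integral term drives the steady-state error to zero (unlike the proportional-only case), while taking the derivative of the PV rather than the error avoids the “kick” on setpoint changes.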

PID controller responds simply and intuitively

The PID controller is a simple implementation of feedback.

It has the ability to eliminate steady-state offsets through integral action, and it can anticipate the future through derivative action.[3]

  1. Skogestad, Sigurd, and Chriss Grimholt. “The SIMC method for smooth PID controller tuning.” PID Control in the Third Millennium. Springer London, 2012. 147-175.
  2. Desborough, Lane, and Randy Miller. “Increasing customer value of industrial control performance monitoring-Honeywell’s experience.” AIChE symposium series 326 (2002): 172-192.
  3. Åström, Karl Johan, and Tore Hägglund. Advanced PID control. ISA-The Instrumentation, Systems and Automation Society, 2006, p. 1.
  4. Heavner, Lou. “Control Engineering for Chemical Engineers.” Chemical Engineering 124.3 (Mar. 2017): 42-50.