## Process response depends on integration degree, delays & lags

Process response depends on process variable’s self-regulation

An indication of the ease with which a process may be controlled can be obtained by plotting the process reaction curve. This curve is constructed by first stabilizing the process temperature under manual control and then making a nominal change in heat input to the process, such as 10%. A temperature recorder can then be used to plot the temperature-versus-time curve of this change.[1]
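As a rough illustration, the reaction-curve experiment can be simulated numerically. The sketch below assumes a hypothetical first-order process with dead time; the gain, time constant, and dead-time values are illustrative, not taken from the text:

```python
# Sketch: process reaction curve for a hypothetical first-order-plus-dead-time
# process after a 10% step in heat input (all parameter values are illustrative).
import math

def reaction_curve(step=10.0, gain=2.0, tau=5.0, dead_time=1.0, t_end=30.0, dt=0.1):
    """Return (times, temperature rises) after a step change in heat input.

    Rise = gain * step * (1 - exp(-(t - dead_time)/tau)) once t > dead_time.
    """
    times, temps = [], []
    t = 0.0
    while t <= t_end:
        if t < dead_time:
            temps.append(0.0)  # no perceptible response yet (dead time)
        else:
            temps.append(gain * step * (1.0 - math.exp(-(t - dead_time) / tau)))
        times.append(t)
        t += dt
    return times, temps

times, temps = reaction_curve()
```

The response stays flat for the dead time, then rises exponentially toward a new steady state of gain × step (here, a 20-degree rise).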

Process response of self-regulating process variable vs. time:
vessel temperature after heating increase

Process response of integrating process variable vs. time:
vessel level after inflow increase [2]

Another phenomenon associated with a process or system is the steady-state transfer-function characteristic. Since many processes are nonlinear, equal increments of heat input do not necessarily produce equal increments in temperature rise. The characteristic transfer-function curve for a process is generated by plotting the stabilized temperature reached at each of a series of constant heat inputs. Each point on the curve thus represents the temperature under stabilized conditions, as opposed to the reaction curve, which represents the temperature under dynamic conditions. For most processes this will not be a straight-line, or linear, function.

Temperature gain vs. heat input of endothermic process:
heating a liquid till it boils and absorbs more heat

As the temperature increases, the slope of the tangent line to the curve tends to decrease, usually because losses through convection and radiation increase with temperature. The process gain at any temperature is the slope of the transfer function at that temperature. A steep slope (high ΔT/ΔH) is a high gain; a shallow slope (low ΔT/ΔH) is a low gain.
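As a sketch of this definition, the gain ΔT/ΔH can be estimated as a numerical slope. The square-root transfer function below is a hypothetical stand-in for a process whose gain falls as temperature rises:

```python
# Sketch: process gain as the local slope dT/dH of a steady-state transfer
# function. The square-root curve is a hypothetical stand-in for a process
# whose gain drops as losses grow with temperature.
import math

def steady_state_temp(heat):
    """Hypothetical stabilized temperature for a given heat input (%)."""
    return 40.0 * math.sqrt(heat)

def process_gain(heat, dh=0.01):
    """Numerical slope dT/dH at a given heat input (central difference)."""
    return (steady_state_temp(heat + dh) - steady_state_temp(heat - dh)) / (2 * dh)

low_heat_gain = process_gain(4.0)    # steep part of the curve: high gain
high_heat_gain = process_gain(64.0)  # flattened part of the curve: low gain
```

For this curve the analytic slope is 20/√H, so the gain at 4% input is four times the gain at 64% input.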

Temperature gain vs. heat input of exothermic process:
heating a mixture of reactants till their reaction gives off reaction heat;
melting a plastic till its flow gives off frictional heat

This curve follows the endothermic curve up to the temperature level D. At this point the process has the ability to begin generating some heat of its own. The slope of the curve from this point on increases rapidly and may even reverse if the process has the ability to generate more heat than it loses. This is a negative gain since the slope ΔT/ΔH is negative. This situation would actually require a negative heat input, or cooling action.

Process response depends on process variable’s dead time delays & capacity lags

Temperature response vs. time of single capacity:
vessel-temperature lag

Temperature response vs. time of single capacity with dead time:
vessel-temperature lag with hot-water piping delay

Temperature response vs. time of two capacities:
vessel-temperature lag and vessel-wall lag

Temperature response vs. time of three capacities:
vessel-temperature lag, vessel-wall lag, and thermowell-wall lag

Two characteristics of these curves affect the process controllability: (1) the time interval before the temperature reaches the maximum rate of change, A, and (2) the maximum rate of change (slope) of the temperature after the change in heat input has occurred, B. The process controllability decreases as the product of A and B increases. Such increases in the product AB appear as an increasingly pronounced S-shaped curve on the graph.

The time interval A is caused by dead time, which is defined as the time between changes in heat input and the measurement of a perceptible temperature increase. The dead time includes two components, (1) propagation delay (material flow velocity delay) and (2) exponential lag (process thermal time constants).

Temperature responses vs. time of temperatures I, II, III, and IV

The maximum rate of temperature rise is shown by the dashed lines which are tangent to the curves.

The tangents become progressively steeper from I to IV. The time interval before the temperature reaches the maximum rate of rise also becomes progressively greater from I to IV.

As the S curve becomes steeper, the process becomes increasingly difficult to control. As the product of the time interval A and the maximum rate B increases, the process controllability goes from easy (I) to very difficult (IV). Response curve IV, the most difficult process to control, has the most pronounced S shape.[1]
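A minimal sketch of extracting A and B from sampled response data (the S-shaped sample curve below is illustrative, not from the source):

```python
# Sketch: estimating the controllability indicators A (time until the maximum
# rate of temperature change) and B (that maximum rate) from sampled data.
import math

def controllability_product(times, temps):
    """Return (A, B, A*B): time of maximum slope, the maximum slope, product."""
    slopes = [(t2 - t1) / (x2 - x1)
              for (x1, t1), (x2, t2) in zip(zip(times, temps),
                                            zip(times[1:], temps[1:]))]
    b = max(slopes)                # B: maximum rate of temperature change
    a = times[slopes.index(b)]     # A: interval before that rate is reached
    return a, b, a * b

# An illustrative S-shaped response (logistic curve centered at t = 10):
times = [0.5 * i for i in range(41)]
temps = [100.0 / (1.0 + math.exp(-(t - 10.0))) for t in times]
A, B, product = controllability_product(times, temps)
# The larger the product A*B, the harder the process is to control.
```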

1. Stevenson, John. “Control Principles.” Process/industrial instruments and controls handbook, 5th ed., edited by Gregory K. McMillan, McGraw-Hill, 1999, pp. 2.4-2.30.
2. Shinskey, F. Greg. “Fundamentals of Process Dynamics and Control.” Perry’s chemical engineers’ handbook, 8th ed., edited by Don W. Green, McGraw-Hill, 2008, pp. 8-5 – 8-19.

## Habit automaticity takes very-many consistent repetitions

Example of increase in automaticity

Habit automaticity on desired new habits was measured daily

…participants were asked to choose a healthy eating, drinking or exercise behaviour that they would like to make into a habit. Participants were asked to try to carry out the behaviour every day for 84 days.

SRHI scores were the primary outcome measures… The behaviour of interest is followed by statements to which participants report their level of agreement; example items are ‘I do automatically’, ‘I do without thinking’ and ‘I would find hard not to do’. We created an automaticity subscale… which gave a total score range of 0–42.

Habit automaticity built fast at first, built slower later, and approached a maximum

…the relationship between repetition and habit strength follows an asymptotic curve in which automaticity increases steadily—but by a smaller amount with each repetition—until it reaches an asymptote (plateau).

SPSS Version 14 was used… to fit a curve for each individual’s data… using Mitscherlich’s law of diminishing returns, y = a − b·e^(−cx),
where
y is automaticity and
x is day of the study…
a represents the asymptote of the curve (the automaticity plateau score),
b is the difference between the asymptote and the modelled initial value of y (when x = 0) and
c is the rate constant that represents the rate at which the maximum is reached.
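A minimal sketch of the model, with illustrative parameter values (not the study’s fitted values):

```python
# Sketch: Mitscherlich's law of diminishing returns, y = a - b*exp(-c*x),
# with illustrative parameters (not fitted values from the study).
import math

def automaticity(day, a=40.0, b=35.0, c=0.05):
    """Modelled automaticity score on a given day.

    a: asymptote (plateau score); b: asymptote minus the day-0 value;
    c: rate constant governing how fast the plateau is approached.
    """
    return a - b * math.exp(-c * day)

day0 = automaticity(0)    # modelled initial score: a - b = 5.0
day84 = automaticity(84)  # close to, but still below, the plateau of 40
```

Each repetition adds less than the one before it: the gain from day 0 to day 10 is far larger than the gain from day 70 to day 80.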

• An asymptotic model proved to be a good fit for almost half (48%) of the participants who provided enough data for analysis.
• Those for whom the asymptotic model was a poor fit had typically carried out the behaviour fewer times during the study.
• Two other groups of participants had relatively high levels of performance; one for whom the model could not be fitted and one with a very high modelled asymptote. It is probable that these individuals were relatively slow in forming their habits and would have reached a plateau if the recording had continued for longer.

On average …the fit of the asymptotic curve was superior to the linear model. We are therefore reasonably confident that the asymptotic curve reflects a generalized habit formation process.

Habit automaticity plateaued only after very many repetitions

We were only able to find one statement in the literature discussing how long it takes for a habit… once it has been ‘performed frequently (at least twice a month) and extensively (at least 10 times)’…

Our study has shown that it is likely to take much longer than this for a repeated behaviour to reach its maximum level of automaticity.

Early repetitions result in larger increases in automaticity than those later in the habit formation process, and there is a point at which the behaviour cannot become more automatic even with further repetition.

The average modelled time to plateau in this sample was 66 days, but the range was from 18 to 254 days.
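The modelled time to plateau can be derived from the fitted parameters. Assuming (an assumption here; the paper’s exact criterion may differ) that “plateau” means first reaching 95% of the asymptote, solving a − b·e^(−cx) = 0.95a gives x = ln(b / (0.05a)) / c:

```python
# Sketch: modelled days to plateau, taken here (an assumption) as the day on
# which automaticity first reaches 95% of the asymptote a in y = a - b*exp(-c*x).
import math

def days_to_plateau(a, b, c, fraction=0.95):
    """Solve a - b*exp(-c*x) = fraction*a for x."""
    return math.log(b / ((1.0 - fraction) * a)) / c

# Illustrative parameters: a larger rate constant c shortens the time to plateau.
slow_habit = days_to_plateau(a=40.0, b=35.0, c=0.05)
fast_habit = days_to_plateau(a=40.0, b=35.0, c=0.15)
```

With these illustrative numbers the slow habit plateaus in roughly 57 days, inside the 18-to-254-day range reported above.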

Habit automaticity built better when repetitions were consistent

…even in this study where the participants were motivated to create habits, approximately half did not perform the behavior consistently enough to achieve habit status.

…missing one opportunity does not preclude habit formation, but missing a week’s worth of opportunities reduces the likelihood of future performance and hinders habit acquisition.

… individuals who performed the behaviour more consistently showed a change in automaticity scores which was modelled more closely by an asymptotic curve.

Habit automaticity required more repetitions for habits that were more complex

…it can take a large number of repetitions for an individual to reach their highest level of automaticity for some behaviours…

It is notable that the exercise group took one and a half times longer to reach their asymptote than the other two groups. Given that exercising can be considered more complex than eating or drinking, this supports the proposal that complexity of the behaviour impacts the development of automaticity.

…creating new habits will require self-control to be maintained for a significant period before the desired behaviours acquire the necessary automaticity to be performed without self-control.

…reaching a higher asymptote took longer.[1]

1. Lally, Phillippa, et al. “How are habits formed: Modelling habit formation in the real world.” European Journal of Social Psychology 40.6 (2010): 998-1009.

## Sticky prices are responses to customer friction

Sticky prices illustrated [1]

Sticky prices delay price jumps hoping customers will be happier

“We can’t change prices biannually, it is not the culture here.”

“We said we weren’t going to raise prices that year and I believe that once you say that, you should stick with it.”

…if the costs are stable, then doubling the frequency of price changes… invites customers to complain, to demand discounts and rebates, and to ask to renegotiate.

…price rigidity was perceived by the company’s customers to be a sign of “customer orientation” and therefore a good thing.

…many customers were more positively disposed to do business with companies who only changed their prices according to a predictable time schedule. Indeed, price rigidity was a source of pride within a company because it indicated that one’s relationship with customers was more important than the “bottom line.”

“We will take it in the pants rather than pass it on down to our customers.”

Sticky prices delay price drops that might jump up again

One member of the sales force aptly described cutting prices as “feeding the animal.” Such a decision sets up a dangerous cycle: cutting prices in order to get business this period leads to a response by a competitor with a still lower price. This lower price puts return pressure on the firm to lower its prices again.

…both the sales force and customers would sometimes argue against a price decrease because it would make a price increase in later years more expensive because of the need to convince customers that prices should go up again. Thus any price change that does not make sense for the customer can cause customer antagonism.

Sticky prices eventually jump, and effort with customers jumps up

…“every time you have one of those price changes you have to go in there and you are opening a Pandora’s box.”

“It is getting to be a running joke that every December and January I am coming in with some [price] change… They will say things like: ‘Where does that come from?… The direction is not consistent… You change discounts… dramatically, we don’t know if you are committed to us or not.’”

“Pricing season around here lasts longer than the NFL.”

“All of these costs depend on the size of the price change.”

During the pricing season we studied, a major customer called a senior vice president to negotiate a new discount level. The senior vice president and his staff flew to meet with the customer, which took two days. The team then returned to headquarters to gather additional data about the customer, similar customers, the firm’s competitors, and the effect of the customer’s purchases on the firm’s revenue. The pricing team recalculated the effect of their price changes on that customer and similar customers. They met, suggested additional analysis, met again, and decided on what they wanted to offer at the next round of meetings with the customer. Then they planned a presentation for the customer. The team then flew back with three corporate people, an area manager, and the account manager for another two days.

New large accounts require even more effort.

Although the company carries only about 8,000 products and it changes the list prices of almost all of them each year, the actual number of price changes it undertakes each year is many times higher because of the individually negotiated prices, discounts, and rebates. Therefore, the actual number of price changes undertaken is quite large, in the range of 10,000–54,000 each year.

Sticky prices speed up when overall prices move more, because people adapt

“There was… a period of some rapid inflation back in the Carter years where we would barely get a price sheet printed and you would have to start working on another one, every 6 months or so.”

“The [price] increases we experienced during that [inflationary] time were very much largely driven by cost and our average costs were going up and we were trying to recoup that… [During a] high-inflation period you could get away with the high price increases. I think there was expectations in the market place; our customers are saying ‘I am able to inflate my prices to the end user so I shouldn’t be surprised when my vendor raises their prices…’ The distributors could pass on their prices a lot easier than they can now.”[2]

1. Koning, JP. “Are prices getting less sticky?” 14 Oct. 2015, jpkoning.blogspot.com/search?q=are+prices+getting+less+sticky Accessed 13 May 2017.
2. Zbaracki, Mark J., et al. “Managerial and customer costs of price adjustment: direct evidence from industrial markets.” Review of Economics and Statistics 86.2 (2004): 514-533.

## PID controller responds to error, to error footprint, and to projected change

The top graph shows the measured process variable (the process’s output); the bottom graph shows the controller output (the process’s input). The setpoint is changed at t=0, and the external process load is changed at t=10. The PID control’s D-action is on the process variable only, not on the setpoint. The PID control action is fast and accurate; the PI-only actions keep the valve movements smaller.[1]

PID controller responds to diverse needs

…proportional-integral-derivative (PID) is by far the dominant feedback control algorithm.[2]

PID controllers are found in large numbers in all industries. The PID controller is a key part of systems for motor control. They are found in systems as diverse as CD and DVD players, cruise control for cars, and atomic force microscopes. The PID controller is an important ingredient of distributed systems for process control.[3]

There are approximately three million regulatory controllers in the continuous process industries…

Based on a survey of… controllers in the refining, chemicals and pulp and paper industries… 97% of regulatory controllers utilize a PID feedback control algorithm.[2]

Many sophisticated control strategies, such as model predictive control, are also organized hierarchically. PID control is used at the lowest level; the multivariable controller gives the set points to the controllers at the lower level.

The PID controller can thus be said to be the “bread and butter” of control engineering.[3]

PID controller responds to error with proportional action

PID controllers are defined by the control algorithm, which generates an output based on the difference between setpoint and process variable (PV). That difference is called the error…

…the most basic controller would be a proportional controller. The error is multiplied by a proportional gain and that result is the new output.

When the error does not change, there is no change in output. This results in an offset for any load other than the original load for which the controller was tuned. A home heating system might be set to control the temperature at 68˚F. During a cold night, the output when the error is zero might be 70%. During a milder, sunny afternoon, the output would still be 70% at zero error, but since less heating is required, the temperature would rise above 68˚F. This is a permanent offset.
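The offset can be reproduced with a toy proportional-only loop. Everything below (the room model, gain, and bias) is an illustrative assumption:

```python
# Sketch: proportional-only control of a toy heating model, showing the
# permanent offset when the load changes (all values are illustrative).
def run_p_only(load, kp=2.0, bias=70.0, setpoint=68.0, steps=500):
    """Settle a crude first-order room model under P-only control.

    output = bias + kp * error; the room warms with heater output and
    cools with the load. Returns the settled temperature.
    """
    temp = setpoint
    for _ in range(steps):
        error = setpoint - temp
        output = bias + kp * error       # P-only: output tracks error alone
        temp += 0.05 * (output - load)   # toy room dynamics
    return temp

cold_night = run_p_only(load=70.0)  # load matches the bias: settles at 68
mild_day = run_p_only(load=50.0)    # smaller load: settles above 68 (offset)
```

At equilibrium the output must equal the load, so the temperature settles at setpoint − (load − bias)/kp: zero offset only when the load happens to equal the bias.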

PID controller responds to error footprint with integral action

Integral action overcomes the offset by calculating the integral, or persistence, of the error.

This action drives the controller error to zero by continuing to adjust the controller output after the proportional action is complete. (In reality, these two actions are working in tandem.)

PID controller responds to projected change with derivative action

And finally, there is a derivative term that considers the rate of change of the error. It provides a “kick” to a process where the error is changing quickly…

Derivative action is sensitive to noise in the error, which magnifies the rate of change, even when the error isn’t really changing. For that reason, derivative action is rarely used on noisy processes and if it is needed, then filtering of the PV is recommended.

Since a setpoint change can look to the controller like an infinite rate of change, while processes usually change much more slowly, many controllers have an option to disable derivative action on setpoint changes: instead of the rate of change of the error, the rate of change of the PV is multiplied by the derivative term.

Derivative is not often required, but can be helpful in processes that can be modelled as multiple capacities or second order.[4]
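Putting the three actions together, here is a minimal discrete PID sketch with derivative taken on the PV (as described above) rather than on the error. The gains and the toy process are illustrative assumptions, not a recommended tuning:

```python
# Sketch: a minimal discrete PID step, with derivative action on the process
# variable (PV) so setpoint changes do not produce a derivative "kick".
# Gains and the toy plant are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_pv = None

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt  # error "footprint" (integral action)
        if self.prev_pv is None:
            dpv = 0.0
        else:
            dpv = (pv - self.prev_pv) / self.dt  # rate of change of the PV
        self.prev_pv = pv
        # Derivative on PV enters with a minus sign: a rising PV cuts output.
        return self.kp * error + self.ki * self.integral - self.kd * dpv

# Drive a toy first-order process to a setpoint of 1.0:
pid = PID(kp=2.0, ki=1.0, kd=0.5, dt=0.1)
pv = 0.0
for _ in range(300):
    out = pid.update(1.0, pv)
    pv += 0.1 * (out - pv)  # toy process dynamics
# Integral action removes the steady-state offset: pv ends near 1.0.
```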

PID controller responds simply and intuitively

The PID controller is a simple implementation of feedback.

It has the ability to eliminate steady-state offsets through integral action, and it can anticipate the future through derivative action.[3]

1. Skogestad, Sigurd, and Chriss Grimholt. “The SIMC method for smooth PID controller tuning.” PID Control in the Third Millennium. Springer London, 2012. 147-175.
2. Desborough, Lane, and Randy Miller. “Increasing customer value of industrial control performance monitoring-Honeywell’s experience.” AIChE symposium series 326 (2002): 172-192.
3. Åström, Karl Johan, and Tore Hägglund. Advanced PID control. ISA-The Instrumentation, Systems and Automation Society, 2006, p. 1.
4. Heavner, Lou. “Control Engineering for Chemical Engineers.” Chemical Engineering 124.3 (Mar. 2017): 42-50.

## Tax cuts work, stimulus fails, debtor stimulus fails big

Tax cuts provide GDP increases that are
rapid, large, and long-lived

• Hypothetical response if the average personal income tax rate were cut 1% and then immediately raised back up.
• Based on quarterly post-WWII US data.
• Two alternative factoring-out sequences were used to remove from the data the effects of simultaneous corporate tax cuts.
• Dashed lines are 95% confidence intervals.[1]

Capital gains tax cuts increase output quite a lot,
personal income tax cuts increase output just a little,
sales tax cuts reduce output

We find that the average values of the capital, labor, and consumption tax multipliers are 2.74, 1.39, and 0.62, respectively. That is, a one dollar decline in tax revenue from a cut in the tax rate on capital income stimulates output by approximately two dollars and seventy four cents on average.

The capital tax multiplier ranges from a low of 2.54 to a maximum value of 3.08. The range for the labor tax multiplier is 1.27 to 1.56. The consumption tax multiplier varies least… 0.60 to 0.65.[2]

Government stimulus spending reduces output

…the estimated multiplier for temporary defense spending is 0.4–0.5 contemporaneously and 0.6–0.7 over 2 years. If the change in defense spending is “permanent”… the multipliers are higher by 0.1–0.2.

These multipliers are all significantly less than 1… [3]

Debtor-government stimulus spending kills output

When debt levels are high, increases in government expenditures may act as a signal that fiscal tightening will be required in the near future. Moreover, as recent events in southern Europe and Ireland illustrate, these adjustments may need to be sudden and large. The anticipation of such adjustment could have a contractionary effect that would tend to offset whatever short-term expansionary impact government consumption may have.

Under these conditions, fiscal stimulus may therefore be counter-productive.

…we built a sample of country-episodes where the ratio of the total debt of the central government exceeded 60% of GDP. Our estimate for the impact multiplier is close to zero, and we estimate a long run multiplier of -3.[4]

1. Mertens, Karel, and Morten O. Ravn. “The dynamic effects of personal and corporate income tax changes in the United States.” The American Economic Review 103.4 (2013): 1212-1247.
2. Sims, Eric, and Jonathan Wolff. “The State-Dependent Effects of Tax Shocks.” (2016).
3. Barro, Robert J., and Charles J. Redlick. “Macroeconomic Effects From Government Purchases and Taxes.” The Quarterly Journal of Economics 126.1 (2011): 51-102.
4. Ilzetzki, Ethan, Enrique G. Mendoza, and Carlos A. Végh. “How big (small?) are fiscal multipliers?” Journal of Monetary Economics 60.2 (2013): 239-254.