## Process response depends on integration degree, delays & lags

Process response depends on process variable’s self-regulation

An indication of the ease with which a process may be controlled can be obtained by plotting the process reaction curve. This curve is constructed by first stabilizing the process temperature under manual control and then making a nominal change in heat input to the process, such as 10%. A temperature recorder can then be used to plot the temperature-versus-time curve of this change.[1]

Process response of self-regulating process variable vs. time:
vessel temperature after heating increase

Process response of integrating process variable vs. time:
vessel level after inflow increase [2]

Another phenomenon associated with a process or system is identified as the steady-state transfer-function characteristic. Since many processes are nonlinear, equal increments of heat input do not necessarily produce equal increments in temperature rise. The characteristic transfer-function curve for a process is generated by plotting temperature against heat input under constant heat input conditions. Each point on the curve represents the temperature under stabilized conditions, as opposed to the reaction curve, which represents the temperature under dynamic conditions. For most processes this will not be a straight-line, or linear, function.

Temperature gain vs. heat input of endothermic process:
heating a liquid till it boils and absorbs more heat

As the temperature increases, the slope of the tangent line to the curve tends to decrease, usually because losses through convection and radiation grow with temperature. The process gain at any temperature is the slope of the transfer function at that temperature. A steep slope (high ΔT/ΔH) is a high gain; a shallow slope (low ΔT/ΔH) is a low gain.
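The gain-as-slope idea can be sketched numerically. The transfer-function shape below is an illustrative assumption (a saturating exponential, standing in for a process whose losses grow with temperature), not a curve from the source:

```python
import numpy as np

# Hypothetical steady-state transfer function: stabilized temperature vs. heat
# input. Losses grow with temperature, so the curve flattens at high input.
H = np.linspace(10.0, 100.0, 10)          # heat input, % of full scale
T = 300.0 * (1.0 - np.exp(-H / 40.0))     # stabilized temperature, assumed shape

# Process gain at each operating point is the local slope dT/dH of the curve.
gain = np.gradient(T, H)

# A steep slope (high dT/dH) is a high gain; a flat slope is a low gain.
print(f"gain at low heat input:  {gain[0]:.2f} deg/%")
print(f"gain at high heat input: {gain[-1]:.2f} deg/%")
```

For this endothermic-style curve the gain falls monotonically with heat input, which is the flattening tangent the text describes.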

Temperature gain vs. heat input of exothermic process:
heating a mixture of reactants till their reaction gives off reaction heat;
melting a plastic till its flow gives off frictional heat

This curve follows the endothermic curve up to the temperature level D. At this point the process has the ability to begin generating some heat of its own. The slope of the curve from this point on increases rapidly and may even reverse if the process has the ability to generate more heat than it loses. This is a negative gain since the slope ΔT/ΔH is negative. This situation would actually require a negative heat input, or cooling action.

Process response depends on process variable’s dead time delays & capacity lags

Temperature response vs. time of single capacity:
vessel-temperature lag

Temperature response vs. time of single capacity with dead time:
vessel-temperature lag with hot-water piping delay

Temperature response vs. time of two capacities:
vessel-temperature lag and vessel-wall lag

Temperature response vs. time of three capacities:
vessel-temperature lag, vessel-wall lag, and thermowell-wall lag

Two characteristics of these curves affect process controllability: (1) the time interval before the temperature reaches its maximum rate of change, A, and (2) the slope of the maximum rate of change of the temperature after the change in heat input, B. Controllability decreases as the product of A and B increases. Such increases in the product AB appear as an increasingly pronounced S-shaped curve on the graph.

The time interval A is caused by dead time, defined as the time between a change in heat input and the measurement of a perceptible temperature increase. Dead time has two components: (1) propagation delay (material flow velocity delay) and (2) exponential lag (process thermal time constants).
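The A·B tangent measure can be sketched on a simulated response. The constants below (a 5 s transport delay feeding two thermal lags) are illustrative assumptions, not values from the source:

```python
import numpy as np

# Hypothetical process: dead time followed by two thermal capacities,
# driven by a unit step change in heat input at t = 0.
dead_time, tau1, tau2 = 5.0, 20.0, 10.0      # seconds, assumed values

t = np.linspace(0.0, 150.0, 1501)
td = np.clip(t - dead_time, 0.0, None)       # time elapsed after the dead time
# Step response of two first-order lags in series, shifted by the dead time:
T = 1.0 - (tau1 * np.exp(-td / tau1) - tau2 * np.exp(-td / tau2)) / (tau1 - tau2)

rate = np.gradient(T, t)
B = rate.max()                   # maximum rate of rise (slope of the tangent)
A = t[rate.argmax()]             # interval from input change to that maximum rate

# Controllability worsens as the product A * B grows (a more pronounced S).
print(f"A = {A:.1f} s, B = {B:.4f} 1/s, A*B = {A * B:.3f}")
```

Adding a third capacity, or lengthening the dead time, stretches A while the eventual maximum slope changes little, so the product A·B grows and the S shape deepens, matching curves I through IV.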

Temperature responses vs. time of temperatures I, II, III, and IV

The maximum rate of temperature rise is shown by the dashed lines which are tangent to the curves.

The tangents become progressively steeper from I to IV. The time interval before the temperature reaches the maximum rate of rise also becomes progressively greater from I to IV.

As the S curve becomes more pronounced, the process becomes harder to control. As the product of time interval A and maximum rate B increases, controllability goes from easy (I) to very difficult (IV). Response curve IV, the most difficult process to control, has the most pronounced S shape.[1]

1. Stevenson, John. “Control Principles.” Process/industrial instruments and controls handbook, 5th ed., edited by Gregory K. McMillan, McGraw-Hill, 1999, pp. 2.4-2.30.
2. Shinskey, F. Greg. “Fundamentals of Process Dynamics and Control.” Perry’s chemical engineers’ handbook, 8th ed., edited by Don W. Green, McGraw-Hill, 2008, pp. 8-5 – 8-19.

## Progressive skew underlies media stories and government actions

Public attention vs. media coverage of Vietnam war [1]

Progressive skew didn’t win up through the 1890s

…crisis alone need not spawn Bigger Government. It does so only under favorable ideological conditions, and such conditions did not exist in the 1890s. Acting with substantial autonomy, governments even in a representative democracy may… refuse to accept or exercise powers that many citizens would thrust upon them.

American governments in the twentieth century, impelled by a more “progressive” ideology, readily accepted—indeed eagerly sought—expanded powers.[2]

Progressive skew starts with journalists’ worldviews

“Now the thing that God puts in a man that makes him a creative person makes him very sensitive to social nuances and that sort of thing. And overwhelmingly—not by a simple majority, but overwhelmingly—people with those tendencies tend to be on the liberal side of the spectrum. People on the conservative side of the political spectrum end up as vice presidents at General Motors.”

Individuals with strong political views will accept lower pay to do the type of reporting they believe in. Professionalism and peer review increase autonomy and independence in many fields.

…85 percent of Columbia Graduate School of Journalism students identified themselves as liberal, versus 11 percent conservative…

The journalists who voted for a major party candidate in presidential elections between 1964 and 1976 overwhelmingly went for Democrats: Lyndon Johnson 94 percent, Hubert Humphrey 87 percent, and George McGovern and Jimmy Carter 81 percent each.[3]

Progressive skew spins stories that show government people as heroes

Although people commonly suppose that news organizations report just the facts, journalists typically tell stories about current events. A report on a house fire, an earthquake, a factory closing, or a battle is actually a story about the event. It is no coincidence that we call news reports “stories.”

During times of foreign crisis and the early stages of a war, there is likely to be near-unanimous support for the war effort among the denizens of official Washington. The crucial expansion of government power can occur without the news media’s presenting the case against that expansion (for want of a prominent source).

…reporters place great reliance on government officials as sources. Members of the opposing party typically provide the “other” point of view, which limits the range of coverage.

Government… serves as a personalized hero, offering new policies to solve society’s problems. Thus, for example, a fiscal stimulus package to revive economic activity provides a happy ending to a story about a recession.[4]

Progressive skew tilts coverage towards government, and all the more during “crises”

…in this article media storms are operationalized as instances of a strong increase (≥150%) in attention to an issue/event that lasts at least 1 week and that attains a high share of the total agenda (≥20%) during at least that week.
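That operationalization is concrete enough to sketch as a filter over a weekly attention series. The function name, thresholds as defaults, and the series are illustrative assumptions (a 150% increase means the new level is 2.5 times the old one):

```python
# Hypothetical weekly attention shares for one issue (fraction of total agenda).
def find_storm_weeks(shares, surge=1.5, floor=0.20):
    """Flag weeks qualifying as a media storm under the article's rule:
    attention rises by >= 150% over the prior week (i.e. reaches 2.5x the
    old level) and the issue holds >= 20% of the total agenda that week."""
    storms = []
    for week in range(1, len(shares)):
        prev, cur = shares[week - 1], shares[week]
        surged = prev > 0 and (cur - prev) / prev >= surge
        if surged and cur >= floor:
            storms.append(week)
    return storms

coverage = [0.05, 0.06, 0.05, 0.25, 0.30, 0.10]   # made-up series
print(find_storm_weeks(coverage))                  # prints [3]
```

Only week 3 qualifies: it jumps fivefold over the prior week and clears the 20%-of-agenda floor; week 4 stays high but no longer shows the required surge.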

New York Times front page story policy areas for 1996-2006:
Coverage is skewed, especially during media storms.[5]

Progressive skew of media “crises” is followed by disproportionally-large government actions

We first replicated the well-known and general linear effect of media attention on political attention: when media attention goes up, politics follows.

More importantly, we found that, once in media storm mode, media attention has a significantly stronger effect on congressional hearings than when not in storm mode. Our findings—which were the first results of an empirical, systematic examination of incoming information—support the notion that punctuated political attention is due to a nonlinear processing of incoming information.[6]

1. Neuman, W. Russell. “The threshold of public attention.” Public Opinion Quarterly 54.2 (1990): 159-176.
2. Higgs, Robert. Crisis and Leviathan: Critical episodes in the growth of American government. Oxford University Press, 1987, pp. 78-79.
3. Sutter, Daniel. “Can the media be so liberal? The economics of media bias.” Cato Journal 20.3 (2001): 431-431.
4. Sutter, Daniel. “News media incentives, coverage of government, and the growth of government.” The Independent Review 8.4 (2004): 549-567.
5. Boydstun, Amber E., Anne Hardy, and Stefaan Walgrave. “Two faces of media attention: Media storm versus non-storm coverage.” Political Communication 31.4 (2014): 509-531.
6. Walgrave, Stefaan, et al. “The nonlinear effect of information on political attention: media storms and US Congressional Hearings.” Political Communication (2017).

## Figurative language is processed faster, making the load lighter

Figurative language, and all language, is processed by embodied sensory-motor-emotion architectures.[1]

Figurative language accesses strong networks

The problem of how the brain copes with the fragmentary representations of information is central to our understanding of brain function. It is not enough for the brain to analyze the world into its component parts: the brain must bind together those parts that make whole entities and events, both for recognition and recall. Consciousness must necessarily be based on the mechanisms that perform the binding. The hypothesis suggested here is that the binding occurs in multiple regions that are linked together through activation zones; that these regions communicate through feedback pathways to earlier stages of cortical processing where the parts are represented; and that the neural correlates of consciousness should be sought in the phase-locked signals that are used to communicate between these activation zones.[2]

…information is encoded in an all-or-none manner into cognitive units and the strength of these units increases with practice and decays with delay. The essential process to memory performance is the retrieval operation. It is proposed that the cognitive units form an interconnected network and that retrieval is performed by spreading activation throughout the network. Level of activation in the network determines rate and probability of recall.[3]
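The retrieval-by-spreading-activation idea can be sketched in a few lines. The network, weights, and decay factor below are illustrative assumptions, not parameters from the source:

```python
# Minimal spreading-activation sketch: cognitive units form a network; a cue
# spreads activation outward, and activation level drives recall probability.
links = {
    "fire":  {"smoke": 0.8, "heat": 0.7},
    "smoke": {"chimney": 0.5},
    "heat":  {"summer": 0.4},
}

def spread(cue, steps=2, decay=0.6):
    activation = {cue: 1.0}
    frontier = {cue: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for nbr, weight in links.get(node, {}).items():
                boost = act * weight * decay   # activation weakens as it spreads
                nxt[nbr] = nxt.get(nbr, 0.0) + boost
                activation[nbr] = activation.get(nbr, 0.0) + boost
        frontier = nxt
    return activation

acts = spread("fire")
# Units directly and strongly linked to the cue end up most active.
print(sorted(acts.items(), key=lambda kv: -kv[1]))
```

Strongly linked neighbors of the cue come out more active than units two links away, which is the pattern the quoted model ties to rate and probability of recall.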

Figurative language conserves resources

Resources seem to be required only as attention, consciousness, decisions, and memory become involved; it is here that the well-known capacity limitations of the human system seem to be located rather than in the actual processing.[4]

…our minds tend to minimise processing effort by allocating attention and cognitive resources to selected inputs to cognitive processes which are potentially relevant at the time, and to process them in the most relevance-enhancing way.[5]

Figurative language reflects our senses and our movements

…a significant aspect of metaphoric language is motivated by embodied experience.[6]

According to theories of grounded cognition, cognitive processing is a product of our sensory and perceptual experiences… For example, during word recognition, sensory and perceptual systems may automatically become activated so that access to a concept’s meaning is influenced by our sensory knowledge of that concept—how it looks, feels, smells, sounds, and tastes.

Strongly perceptual concepts such as chair, music, or crimson can be represented quickly because most of their conceptual content is a relatively simple and discrete package of perceptual information, and hence is easy to simulate. Weakly perceptual concepts, on the other hand, tend to take longer to represent because they lack a neat package of perceptual information that can benefit from modality attention effects, and because much of their non-perceptual conceptual content involves pulling in other concepts as part of their broader situation (e.g., a tendency to do what? A republic of where?).[7]

How is prediction embodied? First, action control (and the motor system) is intimately concerned with prediction. That is, every action is accompanied by predicted changes in our proprioception and perception of the world so that the system can determine if the action was successfully completed. For example, in reaching for a glass of water, the system predicts how far the arm will have to reach, how wide the fingers need to open, and the feel of the cool, smooth glass.[8]

While the studies reported above support the grounded cognition view of word recognition, they are concerned with sensorimotor processing, only one aspect of sensory/perceptual experience. Other potential aspects of the sensory experience include sound, taste, and smell.

Figurative language also reflects our emotions

Similarly, reading a strong emotion word could produce perceptual simulations in the reader. For example, the emotion word love could lead to sensory simulations of sweating palms or racing heart that are experienced by a person actually in love.[9]

…emotional content… plays a crucial role in the processing and representation of abstract concepts: …abstract words are more emotionally valenced…, and this accounts for a… latency advantage…[10]

Figurative language often includes phrases we understand all at once

…lexical bundles… are stored and processed holistically. …regular multiword sequences leave memory traces in the brain.[11]

To hold all the aces,
to speak one’s mind,
to break the ice,
to lay the cards on the table,
to pull s.o.’s leg,
to give a hand, to stab s.o.’s back,
to miss the boat,
to pull strings,
to be on cloud nine,
to change one’s mind,
to lose one’s train of thought,
to hit the sack,
to kick the bucket,
to come out of the blue,
to break s.o.’s heart,
to spill the beans,
to have one’s feet on the ground,
to turn over a new leaf,
to be the icing on the cake,
to keep s.o. at arm’s length,
to be the last straw (that broke the camel’s back),
to cost an arm and a leg,
to go over the line,
to fill the bill,
to chew the fat,
to add fuel to the fire,
to get out of the frying pan into the fire,
to be in the same boat.[12]

Figurative language helps writers connect with readers

Consider the opening paragraphs of the following article from the Good Times, a Santa Cruz, California news and entertainment weekly (Nov 4–10, 2004, p.8). The article is titled “David vs. Goliath: Round One,” and describes the University of California, Santa Cruz’s controversial plan to double in physical size and increase enrollment by over 6000 students. Read through the following text and pick out those words and phrases that appear to express figurative meaning.

Hidden in the shadows of a massive election year, tucked under the sheets of a war gone awry and a highway scuffle, another battle has been brewing.

When UC Santa Cruz released the first draft on its 15-year Long Range Development Plan (LRDP) last week, it signaled an ever-fattening girth up on the hill. While some businesses clapped their hands with glee, many locals went scrambling for belt-cinchers.

The LRDP calls for 21,000 students by the year 2020 – an increase of 6,000 over today’s enrollment … . The new enrollment estimate may have startled some residents, but as a whole it merely represents a new stage in a decades-long battle that has been fought between the city and the City on the Hill. While some students are a boon to local businesses and city coffers, many residents complain students are overrunning the town—clogging the streets, jacking up rents and turning neighborhoods and the downtown into their own party playground … .

“The bottom line is that the university can do what it wants to,” explains Emily Reilly, Santa Cruz City Council member and head of a committee developed to open up dialogue between “the campus and the city.”[13]

Life’s full of action; figurative language is full of action. We’re made for this.

Grasping an explanation, giving an example, posing a threat – language is full of actions and objects, and the ties between language and motion are under continuous investigation. Generally, embodiment links the individual sensorimotor experiences with higher cognitive functions such as language processing and comprehension.[14]

It is physically impossible to do metaphorical actions such as push the argument, chew on the idea, or spit out the truth. But these metaphorical phrases are sensible because people ordinarily conceive of many abstract concepts in embodied, metaphorical terms. Engaging in, or imagining doing, a body action, such as chewing, before reading a metaphorical phrase, such as chew on the idea, facilitates construal of the abstract concept as a physical entity, which speeds up comprehension of metaphorical action phrases.[15]

1. Meteyard, Lotte, et al. “Coming of age: A review of embodiment and the neuroscience of semantics.” Cortex 48.7 (2012): 788-804.
2. Damasio, Antonio R. “The Brain Binds Entities and Events by Multiregional Activation from Convergence Zones.” Neural Computation 1.1 (1989): 123-132.
3. Anderson, John R. “A spreading activation theory of memory.” Journal of verbal learning and verbal behavior 22.3 (1983): 261-295.
4. van Dijk, Teun A., and Walter Kintsch. “Toward a model of text comprehension and production.” Psychological review 85.5 (1978): 362-394.
5. Moreno, Rosa E. Vega. Creativity and convention: The pragmatics of everyday figurative speech. John Benjamins Publishing, 2007, p. 229.
6. Gibbs, Raymond W., Paula Lenz Costa Lima, and Edson Francozo. “Metaphor is grounded in embodied experience.” Journal of pragmatics 36.7 (2004): 1189-1210.
7. Connell, Louise, and Dermot Lynott. “Strength of perceptual experience predicts word processing performance better than concreteness or imageability.” Cognition 125.3 (2012): 452-465.
8. Glenberg, Arthur M. “Few believe the world is flat: How embodiment is changing the scientific understanding of cognition.” Canadian Journal of Experimental Psychology / Revue canadienne de psychologie expérimentale 69.2 (2015): 165-171.
9. Juhasz, Barbara J., et al. “Tangible words are recognized faster: The grounding of meaning in sensory and perceptual systems.” The Quarterly Journal of Experimental Psychology 64.9 (2011): 1683-1691.
10. Kousta, Stavroula-Thaleia, et al. “The representation of abstract words: why emotion matters.” Journal of Experimental Psychology: General 140.1 (2011): 14-34.
11. Tremblay, Antoine, et al. “Processing advantages of lexical bundles: Evidence from self‐paced reading and sentence recall tasks.” Language Learning 61.2 (2011): 569-613.
12. Moreno, Rosa E. Vega. Creativity and convention: The pragmatics of everyday figurative speech. John Benjamins Publishing, 2007, p. 144.
13. Gibbs, Raymond W., and H. Colston. “Figurative language.” Handbook of psycholinguistics, 2nd ed., Elsevier, 2006, pp. 835-862.
14. Jirak, Doreen, et al. “Grasping language–a short story on embodiment.” Consciousness and cognition 19.3 (2010): 711-720.
15. Wilson, Nicole L., and Raymond W. Gibbs. “Real and imagined body movement primes metaphor comprehension.” Cognitive science 31.4 (2007): 721-731.

## Reading skill requires well-trained multilevel networks

Parsing four clauses, and forming connections

Working memory keeps new information active for one to two seconds while it carries out the appropriate processes.

Reading skill requires well-trained networks for recognizing words

The most fundamental requirement for fluent reading comprehension is rapid and automatic word recognition… Amazing as it may seem, fluent readers can actually focus on a word and recognise it in less than a tenth of a second… Thus, four to five words per second even allows good readers time for other processing operations. Both rapid processing and automaticity in word recognition (for a large number of words) typically require thousands of hours of practice in reading.

Reading skill requires well-trained networks for parsing syntax

In addition to word recognition, a fluent reader is able to take in and store words together so that basic grammatical information can be extracted… to support clause-level meaning. Syntactic parsing helps to disambiguate the meanings of words that have multiple meanings out of context (e.g. bank, cut, drop).

Reading skill requires well-trained networks for assembling clauses

A third basic process that starts up automatically as we begin any reading task is the process of combining word meanings and structural information into basic clause-level meaning units (semantic proposition formation). Words that are recognised and kept active for one to two seconds, along with grammatical cueing, give the fluent reader time to integrate information in a way that makes sense in relation to what has been read before. As meaning elements are introduced and then connected, they become more active in memory and become central ideas if they are repeated or reactivated multiple times. Each semantic proposition reflects the key elements of the input (word and structure) and also highlights linkages across important units (in this case, verbs), where relevant. Semantic propositions are formed in this way and a propositional network of text meaning is created.

Reading skill requires forming networks connecting text

As clause-level meaning units are formed (drawing on information from syntactic parsing and semantic proposition formation), they are added to a growing network of ideas from the text. The new clauses may be hooked into the network in a number of ways: through the repetition of an idea, event, object or character; by reference to the same thing, but in different words; and through simple inferences that create a way to link a new meaning unit to the appropriate places in the network… As the reader continues processing text information, and new meaning units are added, those ideas that are used repeatedly and that form usable linkages to other information begin to be viewed as the main ideas of the text… they become, and remain, more active in the network. Ideas that do not play any further roles in connecting new information…, or that do not support connecting inferences, lose their activity quickly and fade from the network. In this way, less important ideas tend to get pruned from the network, and only the more useful and important ideas remain active.
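The decay-and-pruning dynamic described above can be sketched directly: every idea decays each processing cycle, reused ideas get re-boosted, and anything that falls below a floor fades from the network. The clause sets and constants are illustrative assumptions:

```python
# Sketch of network pruning during reading: repeated or reconnected ideas stay
# active; one-off ideas decay and fade. All numbers are illustrative.
DECAY, BOOST, FLOOR = 0.5, 1.0, 0.3

def read_text(clauses):
    activation = {}
    for clause in clauses:                  # one clause ~ one processing cycle
        for idea in activation:
            activation[idea] *= DECAY       # everything decays each cycle...
        for idea in clause:
            activation[idea] = activation.get(idea, 0.0) + BOOST  # ...unless reused
    return {idea: a for idea, a in activation.items() if a >= FLOOR}

clauses = [{"storm", "ship"}, {"storm", "crew"}, {"storm", "ship"},
           {"harbor", "ship"}]
network = read_text(clauses)
# "storm" and "ship" recur, so they stay central; one-off ideas are pruned.
print(sorted(network, key=network.get, reverse=True))
```

After four clauses, "crew" (mentioned once, early) has decayed below the floor and is pruned, while the repeated ideas remain the most active units, mirroring how main ideas emerge in the text model.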

Reading skill requires forming networks summarizing ideas

As the reader continues to build an understanding of the text, the set of main ideas that the reader forms is the text model of comprehension. The text model amounts to an internal summary of main ideas… Background knowledge… plays a supporting role and helps the reader anticipate the discourse organisation of the text…

Reading skill requires forming networks modeling narratives

At the same time…, the reader begins to project a likely direction that the reading will take. This reader interpretation (the situation model of reader interpretation) is built on and around the emerging text model. The ability of fluent readers to integrate text and background information appropriately and efficiently is the hallmark of expert reading in a topical domain (e.g. history, biology, psychology).

…we know that an executive control processor (or monitor) represents the way that we focus selective attention while comprehending, assess our understanding of a text and evaluate our success. Our evaluation of how well we comprehend the text is dependent on an executive control processor.

Reading skill compacts multilevel information into working memory

…the many processes described here all occur in working memory, and they happen very quickly… Roughly, in each and every two seconds of reading, fluent readers:

1. focus on and access eight to ten word meanings
2. parse a clause for information and form a meaning unit
3. figure out how to connect a new meaning unit into the growing text model
4. check interpretation of the information according to their purposes, feelings, attitudes and background expectations, as needed
5. monitor their comprehension, make appropriate inferences as needed, shift strategies and repair misunderstanding, as needed
6. resolve ambiguities, address difficulties and critique text information, as needed [1]

1. Grabe, William Peter, and Fredricka L. Stoller. Teaching and researching reading. 2nd ed., Routledge, 2011, pp. 13-23.

## Industrial basic research brings practical problems to strong networks

Industrial basic research and industrial total R&D was performed in a few sectors by a few companies in the US in 1984

[1, 2]

Industrial basic research plugs industry networks into universities

Most basic research in the United States is conducted within the university community, but in order to “plug in” to these research centers and to exploit the knowledge that is generated there, a firm must have some in-house capability. The most effective way to remain effectively plugged in to the scientific network is to be a participant in the research process.

When basic research in industry is isolated from the rest of the firm, whether organizationally or geographically, it is likely to become sterile and unproductive.

The history of basic research in industry suggests that it is likely to be most effective when it is highly interactive with the work, or the concerns, of applied scientists and engineers. This is because the high technology industries are continually throwing up problems, difficulties and anomalous observations that are most unlikely to occur outside of a high technology context.

High technology industries provide a unique vantage point for the conduct of basic research, but in order for scientists to exploit the potential of the industrial environment it is necessary to create opportunities and incentives for interaction with other components of the industrial world. …the performance of basic research may be thought of as a ticket of admission to an information network.

Industrial basic research often is the unplanned byproduct of paying talented people to work on great practical problems

…the history of basic research in… industry suggests that a very large part of this research has been unintentional.

…if… Sadi Carnot… had been asked… what he thought he was doing, his answer would have been that he was trying to improve the efficiency of steam engines. As a byproduct of that particular practical interest, he created the modern science of thermodynamics.

If Pasteur had been asked what he thought he was doing back around 1870, he would have replied that he was trying to solve some very practical problems connected with fermentation and putrefaction in the French wine industry. He solved those practical problems – but along the way he invented the modern science of bacteriology.

Industrial basic research at Bell Labs, for example, started with practical problems and ended up producing scientific advances

Back at the end of the 1920s when transatlantic radiotelephone service was first established, the service was poor because there was lots of static. Bell Labs asked a young man, Karl Jansky, to determine the source of the noise so that it could be reduced or eliminated. He was given a rotatable antenna to work with. Jansky published a paper in 1932 in which he reported three sources of noise: local thunderstorms, more distant thunderstorms, and a third source, which he identified as “a steady hiss static, the origin of which is not known”. It was this “star noise”, as he labelled it, which marked the birth of radio astronomy…

…Bell Labs decided to support basic research in astrophysics because of its relationship to the whole field of problems and possibilities in microwave transmission, and especially the use of communication satellites for such purposes. It turned out that, at very high frequencies, rain and other atmospheric conditions became major sources of interference in transmission. This source of signal loss was a continuing concern in the development of satellite communications. It was out of such practical concerns that Bell Labs decided to employ Arno Penzias and Robert Wilson. Penzias and Wilson… first observed the cosmic background radiation, which is now taken as confirmation of the “big bang” theory of the formation of the universe, while they were attempting to identify and measure the various sources of noise in their antenna and in the atmosphere. Although Penzias and Wilson did not know it at the time, the character of the background radiation that they discovered was just what had been postulated earlier by cosmologists favoring the “big bang” theory. Penzias and Wilson appropriately shared a Nobel Prize for this finding.

Industrial basic research at Bell Labs also started with practical problems and ended up producing practical advances

…basic research can provide valuable guidance to the directions in which there is a high probability of payoffs to more applied research. In this sense, William Shockley’s education in solid state physics during the 1930s may have been critical to the decision at Bell Labs to look for a substitute for the vacuum tube in the realm of semiconductor materials – a search that led directly to the invention of the transistor.[2]

1. National Science Foundation. National Patterns of Science and Technology Resources 1986. NSF 86-309. 1986, pp. 59, 56.
2. Rosenberg, Nathan. “Why Do Firms Do Basic Research (with Their Own Money)?” Research Policy 19.2: 165-174.

## IT-enabled process improvements increase productivity 7 ways

Cost breakdown of initial projects to implement new information technology in large manufacturing firms. The IT investment is just a part, and the total project is just a part, of developing IT-enabled process improvements.

IT-enabled process improvements show that IT is a general-purpose technology

…criteria for a general-purpose technology…

• wide scope for improvement and elaboration
• applicability across a broad range of uses
• potential for use in a wide variety of products and processes
• strong complementarities with existing or potential technologies.

…David… described the invention of the dynamo and its effect on the organization of the factory. …decades passed before factories reorganized themselves internally and made truly significant productivity gains possible.

Bresnahan and Trajtenberg… developed a model of the use of semiconductors as a general-purpose technology, characterized by “pervasiveness, inherent potential for technical improvements, and ‘innovational complementarities’”… On one level, computing invention-possibility can make existing processes run faster.

…a more exciting use of computing would be to push out the frontier. Computing can change the way business is done. …we believe that there are decades’ worth of potential innovations to be made by combining inventions that we already have in creative ways.

IT-enabled process improvements use hard-to-value capabilities to generate, in part, hard-to-value results

…the literature agrees on one basic point: the size of the total stock of intangible capital in the United States is very large—as much as several trillion dollars. Often this capital does not show up in balance sheets or economic figures, either in government accounts or as an item in firm-level balance sheets.

If we used consumer surplus data to examine the effects of technological innovation over the decades, we would find hundreds of billions, perhaps trillions of dollars of unmeasured benefits in the economy.

Although decreasing communications costs have been affecting incentives for innovation for centuries, free and perfect copies that are easy to distribute were never possible until recently. But the Internet, so far, has not killed innovation. Rather, it has created an entire generation of individual innovators. If history is any guide, the Internet will encourage vast amounts of innovation.

IT-enabled process improvements led by the United States are increasing productivity

…IT is playing an important role in the US productivity resurgence since 1995, and… something unique is occurring in the United States.

The further productivity acceleration since 2001 in the absence of substantial investments in IT remains a subject of debate in the literature. …our hypothesis is that firms benefited from the organizational capital that they built at the end of the 1990s. That is, there may be a lag of approximately 3 or 4 years before IT-enabled process improvements appear in the productivity statistics.

Major empirical and case studies from the period 1995–2008 point to business-process reorganization as a major factor in explaining productivity differences across plants or firms.

IT-enabled process improvements have come from seven practices

…the firms that simultaneously invested in IT and in the practices did disproportionately better than firms that did only one or the other. In other words, the practices are complementary to IT investment.

1. Move from analog to digital processes
Moving an increasing number of processes into the paperless, digital realm…
2. Open information access
Digital organizations… encourage the use of dispersed internal and external information sources.
3. Empower the employees
Digital organizations decentralize authority—pushing decision rights to those with access to information.
4. Use performance-based incentives
Meritocratic pay structures, incentive pay for individuals and groups, and stock options are common at digital organizations.
5. Invest in corporate culture
Part of making productive use of IT is to define and promote a cohesive set of high-level goals and norms that pervade the company.
6. Recruit the right people
The fact that technology gives employees more information and authority implies that such employees need to be more capable…
7. Invest in human capital
…digital organizations provide more training… Many of the changes… call for increased levels of thinking and ingenuity on the part of employees.[1]

1. Brynjolfsson, Erik, and Adam Saunders. Wired for innovation: how information technology is reshaping the economy. MIT Press, 2009.

## Learn easier by planning better, and thinking harder

Think about the same problem repeatedly and you learn less.
Think about different problems in-between and you learn easier.

To learn easier, think harder.

Learn easier by knowing your capabilities better

One reason to make things difficult while studying is that making things too easy leads to overconfidence, which in turn leads students to stop studying too soon. Students should actively avoid overconfidence, especially students who have a pattern of doing worse on exams than they expected:

1. Test yourself.
2. Consider what could go wrong on a test.
3. Think about what you don’t know.

Ironically, students also tend to be underconfident in their ability to learn and improve, and so if you are a student who is discouraged by how difficult the material is, you might benefit if you:

1. Remember if you are prone to underestimating your capacity for learning.

Learn easier by planning better

There are also ways to overcome another huge problem for studiers, the planning fallacy:

1. Break the task down into elements and consider how long each subtask will take.
2. Consciously estimate that everything will take twice as long as you think it will take.

Procrastination is a huge hurdle to effective studying. Advice that one should avoid procrastination is easy to find (e.g., Benjamin Franklin: “Don’t put off until tomorrow what you can do today”), but advice on how to do so is difficult to come by. Research suggests that there are ways of decreasing procrastination:

1. Increase expectancy of success.
2. Set appropriate and achievable subgoals.
3. Form predictable work habits that essentially make the decision that it is time to work for you.

Learn easier by learning to think harder

With respect to how to study, our most general advice is this:

1. Struggle while thinking.
Easy studying is often ineffective.
2. Do not try to take shortcuts on the path to knowledge.
3. Make it as easy as possible to think hard.
Avoid pitfalls such as trying to study in a situation that leads to too much distraction.

We have already alluded to multiple productive ways to make things difficult.

1. Summarize notes during a lecture.
Don’t transcribe notes during a lecture.
2. Ask yourself questions while studying.
3. Simulate test conditions by quizzing yourself and see if you really know the answers.
Don’t go over the answers and decide that you know them—which is easy when they are right in front of you.
4. Space repeated study sessions apart in time to allow forgetting.
5. Return to restudy information that seemed well-learned at one point but might have been forgotten.

These strategies have dual benefits: They enhance learning, and they make self-monitoring more accurate.

Learn easier by learning longer

Studying more is not effective unless one is smart about how to study. We have tried to explain how students can become smarter studiers. Making bad choices about how to study can be akin to pedaling a stationary bike: You put in effort but you go nowhere. Making bad choices about what and when to study can be like riding in the wrong direction (what) or starting a race at the wrong time (when). Our goal in this chapter is to point studiers in the right direction and give them a faster bike.

There is one last piece of advice, and it is the most obvious of all: The more time you spend riding, the farther you get—and the same is true of studying:

1. Learn to study efficiently.
2. Study a lot.

Distance = rate × time, and learning = efficiency × time.

If you end up accomplishing your goals and have free time afterward:

1. Study some more.

Learn easier

learning = efficiency × time

1. Test yourself.
2. Consider what could go wrong on a test.
3. Think about what you don’t know.
4. Remember if you are prone to underestimating your capacity for learning.
5. Break the task down into elements and consider how long each subtask will take.
6. Consciously estimate that everything will take twice as long as you think it will take.
7. Increase expectancy of success.
8. Set appropriate and achievable subgoals.
9. Form predictable work habits that essentially make the decision that it is time to work for you.
10. Struggle while thinking.
Easy studying is often ineffective.
11. Do not try to take shortcuts on the path to knowledge.
12. Make it as easy as possible to think hard.
Avoid pitfalls such as trying to study in a situation that leads to too much distraction.
13. Summarize notes during a lecture.
Don’t transcribe notes during a lecture.
14. Ask yourself questions while studying.
15. Simulate test conditions by quizzing yourself and see if you really know the answers.
Don’t go over the answers and decide that you know them—which is easy when they are right in front of you.
16. Space repeated study sessions apart in time to allow forgetting.
17. Return to restudy information that seemed well-learned at one point but might have been forgotten.
18. Learn to study efficiently.
19. Study a lot.
20. Study some more.[1]

1. Kornell, Nate, and Bridgid Finn. “Self-regulated learning: An overview of theory and data.” The Oxford Handbook of Metamemory, edited by John Dunlosky and Sarah (Uma) K. Tauber, Oxford University Press, 2016, pp. 325-340.

## Stories help us heal when our thinking deepens over time

[1]

Stories help us heal when they add understanding

One of the exciting aspects of the Linguistic Inquiry and Word Count program was that we were able to identify word categories that reflected the degree to which people were actively thinking. Two of the cognitive dimensions included insight or self-reflection words (such as think, realize, believe) and another made up of causal words (such as because, effect, rationale).

The people whose health improved the most started out using fairly low rates of cognitive words but increased in their use over the four days of writing. It wasn’t the level of cognitive words that was important but the increase from the first to last day.

In some ways, use of insight and causal words was necessary for people to construct a coherent story of their trauma. On the first writing session, people would often spill out their experience in a disorganized way. However, as they wrote about it day after day, they began to make sense of it. This greater understanding was partially reflected in the ways they used cognitive words. These findings suggested that having a coherent story to explain a painful experience was not necessarily as useful as constructing a coherent story.
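The measure described above (the rate of cognitive words per writing session, and its increase across sessions) can be sketched in a few lines. The word lists below are tiny invented stand-ins for LIWC's dictionaries, and the two sample sessions are likewise invented:

```python
# Toy cognitive-word rate counter, loosely in the spirit of LIWC.
# INSIGHT and CAUSAL are illustrative stand-ins, not LIWC's dictionaries.
INSIGHT = {"think", "realize", "believe", "understand", "know"}
CAUSAL = {"because", "effect", "cause", "rationale", "hence"}

def cognitive_rate(text):
    """Fraction of words in the text that fall in a cognitive category."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    hits = sum(w in INSIGHT or w in CAUSAL for w in words)
    return hits / len(words) if words else 0.0

day1 = "It happened and I could not stop it happening again and again."
day4 = "I realize it hurt because I believe I understand what it meant."

# what mattered was not the level but the increase from first to last day
increase = cognitive_rate(day4) - cognitive_rate(day1)
```

A writer whose later sessions score higher than the first would show a positive increase, the pattern associated with improved health.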

We can get stuck, and while we’re stuck it’s no longer true that stories help us heal

This helped to explain a personal observation that had bothered me for years. When the first writing studies were published, my work was often featured in the media. At cocktail parties or informal gatherings, I sometimes found myself to be a trauma magnet. People who knew about my research would gravitate to me in order to tell me all about their horrific life experiences. Many of them also were in very poor physical health. At first, I thought that their talking about their stories would be good for them. However, I’d see the same people at another gathering months later and they would often tell me exactly the same stories and their health would be unchanged.

The word count research revealed the problem. The people telling their traumatic stories were essentially telling the same stories over and over. There was no change to the stories, no growth, no increase in understanding. Repeating the same story in the same way is not unlike ruminative thinking—a classic symptom of depression.

Stories help us heal when they add perspective

There is an important lesson here. If haunted by an emotional upheaval in your life, try writing about it or sharing the experience with others.

However, if you catch yourself telling exactly the same story over and over in order to get past your distress, rethink your strategy. Try writing or talking about your trauma in a completely different way. How would a more detached narrator describe what happened? What other ways of explaining the event might exist?

If you’re successful, research studies suggest that you will sleep better, experience better physical health, and notice yourself feeling happier and less overcome by your upheaval.

Thanks to the Linguistic Inquiry and Word Count program, we found that three aspects of emotional writing predicted improvements in people’s physical and mental health: accentuating the positive parts of an upheaval, acknowledging the negative parts, and constructing a story over the days of writing.

Stories help us heal when we switch between perspectives, and we include first-person

The more people changed in the ways they used function words from writing to writing, the more their health later improved. As we started to focus on different classes of function words, one particular group of culprits stood out as more important than the others: personal pronouns. More specifically, the more people changed in their use of first-person singular pronouns (e.g., I, me, my) compared with other pronouns (e.g., we, you, she, they), the better their health later became. The effects were large and held up for study after study.

The writings of those whose health improved showed a high rate of the use of I-words on one occasion and then high rates of the use of other pronouns on the next occasion, and then switching back and forth in subsequent writings.

In other words, healthy people say something about their own thoughts and feelings in one instance and then explore what is happening with other people before writing about themselves again.
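A comparable sketch of the pronoun measure: compute each session's rate of first-person singular pronouns, then total how much that rate moves from session to session. The word list and the sample sessions are invented illustrations, not Pennebaker's materials:

```python
# Toy measure of perspective switching between writing sessions.
# I_WORDS and the sample sessions are invented illustrations.
I_WORDS = {"i", "me", "my", "mine", "myself"}

def i_rate(text):
    """Fraction of words that are first-person singular pronouns."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    return sum(w in I_WORDS for w in words) / len(words)

sessions = [
    "I felt my world collapse and I blamed myself.",
    "She never said what they all thought about him.",
    "I see now what I want my life to become.",
]
rates = [i_rate(s) for s in sessions]

# healthy writers alternated between high and low I-word rates;
# this totals the session-to-session movement in that rate
switching = sum(abs(r2 - r1) for r1, r2 in zip(rates, rates[1:]))
```

A writer who alternates between self-focused and other-focused sessions scores high on this switching measure; one who repeats the same perspective scores near zero.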

The multiple perspectives in stories help us heal, in a way, like therapy helps us heal

This perspective switching is actually quite common in psychotherapy.

If a man visits his therapist and begins repeatedly complaining about his wife’s behavior, what she says, how aloof she is, and so forth, the therapist will likely stop the client after several minutes and say, “You’ve been talking about your wife at length but you haven’t said anything about yourself. How do you feel when this happens?”

Similarly, if another client—a woman in this case—with marital problems sees her therapist and spends most of her time talking about her own thoughts, feelings, and behaviors without ever talking about her spouse, the therapist will probably redirect the conversation in a similar way by asking, “You’ve told me a lot about your own feelings when this happens—how do you think your husband feels about this?”

Perhaps like good therapy, healthy writing may involve looking at a problem from multiple perspectives.[2]

1. Kapur, Shekhar. “We are the stories we tell ourselves.” TED.com, Mar. 2010, www.ted.com/talks/shekhar_kapur_we_are_the_stories_we_tell_ourselves/transcript. Accessed 14 June 2017.
2. Pennebaker, James W. The Secret Life of Pronouns: What Our Words Say About Us. Bloomsbury Press, 2011, Scribd pp. 23-27.

## PID controller responds to error, to error footprint, and to projected change

The top graph shows the measured process variable (the process’s output); the bottom graph shows the controller output (the process’s input). The setpoint is changed at t=0, and the external process load is changed at t=10. The PID control’s D-action is on the process variable only, not on the setpoint. The PID control action is fast and accurate; the PI-only actions keep the valve movements smaller.[1]

PID controller responds to diverse needs

…proportional-integral-derivative (PID) is by far the dominant feedback control algorithm.[2]

PID controllers are found in large numbers in all industries. The PID controller is a key part of systems for motor control. They are found in systems as diverse as CD and DVD players, cruise control for cars, and atomic force microscopes. The PID controller is an important ingredient of distributed systems for process control.[3]

There are approximately three million regulatory controllers in the continuous process industries…

Based on a survey of… controllers in the refining, chemicals and pulp and paper industries… 97% of regulatory controllers utilize a PID feedback control algorithm.[2]

Many sophisticated control strategies, such as model predictive control, are also organized hierarchically. PID control is used at the lowest level; the multivariable controller gives the set points to the controllers at the lower level.

The PID controller can thus be said to be the “bread and butter” of control engineering.[3]

PID controller responds to error with proportional action

PID controllers are defined by the control algorithm, which generates an output based on the difference between setpoint and process variable (PV). That difference is called the error…

…the most basic controller would be a proportional controller. The error is multiplied by a proportional gain and that result is the new output.

When the error does not change, there is no change in output. This results in an offset for any load other than the one for which the controller was tuned. A home heating system might be set to control the temperature at 68°F. During a cold night, the output at zero error might be 70%. During a milder, sunny afternoon, the output would still be 70% at zero error; since less heating is required, the temperature would rise above 68°F. The result is a permanent offset.
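The permanent offset can be reproduced in a small simulation. The sketch below is an illustration under invented assumptions, not an example from the source: a hypothetical first-order heated vessel with made-up parameters, under a clamped proportional-only controller, settles well below its setpoint.

```python
# Hypothetical first-order heated vessel under proportional-only control.
# All process parameters and gains here are illustrative assumptions.

def simulate(controller, setpoint=68.0, ambient=40.0, gain=0.4, tau=20.0,
             dt=0.1, steps=20_000):
    """Integrate dT/dt = (ambient - T + gain * u) / tau, u from the controller."""
    T = ambient
    for _ in range(steps):
        u = controller(setpoint - T, dt)
        T += (ambient - T + gain * u) * dt / tau
    return T

def p_controller(kp):
    def control(error, dt):
        return max(0.0, min(100.0, kp * error))  # output clamped to 0-100%
    return control

final = simulate(p_controller(kp=10.0))
offset = 68.0 - final  # settles near 5.6 degrees: the error never reaches zero
```

At steady state the heat balance requires a nonzero output, and a proportional controller can only produce a nonzero output from a nonzero error, so the offset persists.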

PID controller responds to error footprint with integral action

Integral action overcomes the offset by accumulating the integral of the error, a measure of its persistence over time.

This action drives the controller error to zero by continuing to adjust the controller output after the proportional action is complete. (In reality, these two actions are working in tandem.)
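A sketch of how integral action removes the offset, again on a hypothetical first-order vessel model with invented parameters rather than an example from the source:

```python
# Hypothetical first-order heated vessel, now under PI control.
# All process parameters and gains are illustrative assumptions.

def simulate(controller, setpoint=68.0, ambient=40.0, gain=0.4, tau=20.0,
             dt=0.1, steps=40_000):
    """Integrate dT/dt = (ambient - T + gain * u) / tau, u from the controller."""
    T = ambient
    for _ in range(steps):
        u = controller(setpoint - T, dt)
        T += (ambient - T + gain * u) * dt / tau
    return T

def pi_controller(kp, ki):
    integral = 0.0
    def control(error, dt):
        nonlocal integral
        # accumulate the "error footprint"; the clamp is a crude anti-windup guard
        integral = max(0.0, min(100.0, integral + ki * error * dt))
        return max(0.0, min(100.0, kp * error + integral))
    return control

final = simulate(pi_controller(kp=10.0, ki=1.0))
offset = 68.0 - final  # integral action drives the offset toward zero
```

Because the integral term keeps adjusting the output as long as any error persists, it can hold the output at the level the load requires even after the error has reached zero.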

PID controller responds to projected change with derivative action

And finally, there is a derivative term that considers the rate of change of the error. It provides a “kick” to a process where the error is changing quickly…

Derivative action is sensitive to noise in the error, which inflates the apparent rate of change even when the error isn’t really changing. For that reason, derivative action is rarely used on noisy processes; if it is needed, filtering of the PV is recommended.

Since a setpoint change can look to the controller like an infinite rate of change, while processes usually change more slowly, many controllers offer an option to disable derivative action on setpoint changes: instead of applying the derivative term to the rate of change of the error, they apply it to the rate of change of the PV.

Derivative is not often required, but can be helpful in processes that can be modelled as multiple capacities or second order.[4]
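The “derivative kick” on setpoint changes, and why differentiating the PV instead of the error avoids it, can be shown in a single step. All numbers below are arbitrary illustrations:

```python
# One-step comparison of derivative-on-error vs derivative-on-PV at the
# instant of a setpoint step; all numbers are arbitrary illustrations.
dt = 0.1
kd = 2.0

prev_error, error = 0.0, 10.0  # the error jumps the moment the setpoint steps
prev_pv, pv = 50.0, 50.0       # the measured PV has not moved yet

d_on_error = kd * (error - prev_error) / dt  # a large spurious "kick"
# error = SP - PV, so with a constant SP, d(error)/dt = -d(PV)/dt:
d_on_pv = -kd * (pv - prev_pv) / dt          # no kick: the PV is unchanged
```

Between setpoint changes the two formulations behave identically, which is why derivative-on-PV is a common default.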

PID controller responds simply and intuitively

The PID controller is a simple implementation of feedback.

It has the ability to eliminate steady-state offsets through integral action, and it can anticipate the future through derivative action.[3]

1. Skogestad, Sigurd, and Chriss Grimholt. “The SIMC method for smooth PID controller tuning.” PID Control in the Third Millennium. Springer London, 2012. 147-175.
2. Desborough, Lane, and Randy Miller. “Increasing customer value of industrial control performance monitoring-Honeywell’s experience.” AIChE symposium series 326 (2002): 172-192.
3. Åström, Karl Johan, and Tore Hägglund. Advanced PID control. ISA-The Instrumentation, Systems and Automation Society, 2006, p. 1.
4. Heavner, Lou. “Control Engineering for Chemical Engineers.” Chemical Engineering 124.3 (Mar. 2017): 42-50.

## Calculating impact of change for simple models is intuitive

Calculating impact of change of a car [1]

Calculating impact of change is the second of calculus’s two principal operations

…the two principal symbols that are used in calculating… are:

1. d which merely means “a little bit of.”
Thus dx means a little bit of x.
2. ∫ which is merely a long S, and may be called (if you like) “the sum of.”
Thus ∫ dx means the sum of all the little bits of x… Ordinary mathematicians call this symbol “the integral of.” Now any fool can see that if x is considered as made up of a lot of little bits, each of which is called dx, if you add them all up together you get the sum of all the dx‘s, (which is the same thing as the whole of x). The word “integral” simply means “the whole.” When you see an expression that begins with this terrifying symbol, you will henceforth know that it is put there merely to give you instructions that you are now to perform the operation (if you can) of totalling up all the little bits that are indicated by the symbols that follow. That’s all.

Calculating impact of change means calculating a curve’s footprint

Like every other mathematical operation, the process of differentiation may be reversed; thus, if differentiating y = x⁴ gives us dy/dx = 4x³, then beginning with dy/dx = 4x³ and reversing the process yields y = x⁴. But here comes in a curious point. We should get dy/dx = 4x³ if we had begun with any of the following: x⁴, or x⁴ + a, or x⁴ + c, or x⁴ with any added constant. So it is clear that in working backwards from dy/dx to y, one must make provision for the possibility of there being an added constant, the value of which will be undetermined until ascertained in some other way.

One use of the integral calculus is to enable us to ascertain the values of areas bounded by curves.

Let AB… be a curve, the equation to which is known. That is, y in this curve is some known function of x. Think of a piece of the curve from the point P to the point Q.

Let a perpendicular PM be dropped from P, and another QN from the point Q. Then call OM = x₁ and ON = x₂, and the ordinates PM = y₁ and QN = y₂. We have thus marked out the area PQNM that lies beneath the piece PQ. The problem is, how can we calculate the value of this area?

Calculating impact of change one strip at a time

The secret of solving this problem is to conceive the area as being divided up into a lot of narrow strips, each of them being of the width dx. The smaller we take dx, the more of them there will be between x₁ and x₂. Now, the whole area is clearly equal to the sum of the areas of all such strips. Our business will then be to discover an expression for the area of any one narrow strip, and to integrate it so as to add together all the strips.

Now think of any one of the strips. It will be like this: being bounded between two vertical sides, with a flat bottom dx, and with a slightly curved sloping top.

Suppose we take its average height as being y; then, as its width is dx, its area will be y dx. And seeing that we may take the width as narrow as we please, if we only take it narrow enough its average height will be the same as the height at the middle of it. Now let us call the unknown value of the whole area S, meaning surface. The area of one strip will be simply a bit of the whole area, and may therefore be called dS. So we may write

area of 1 strip = dS = y ∙ dx.

If then we add up all the strips, we get

total area S = ∫ dS = ∫ y dx.

Calculating impact of change from start to finish

…how do you find an integral between limits, when you have got these instructions?

First, find the general integral thus:

∫ y dx,

and, as y = b + ax² is the equation to the curve…,

∫ (b + ax²) dx

is the general integral which we must find.

Doing the integration in question by the rule…, we get

bx + (a/3)x³ + C;

and this will be the whole area from 0 up to any value of x that we may assign.

Therefore, the larger area up to the superior limit x₂ will be

bx₂ + (a/3)x₂³ + C;

and the smaller area up to the inferior limit x₁ will be

bx₁ + (a/3)x₁³ + C.

Now, subtract the smaller from the larger, and we get for the area S the value,

area S = b(x₂ – x₁) + (a/3)(x₂³ – x₁³).

This is the answer we wanted.

All integration between limits requires the difference between two values to be thus found. Also note that, in making the subtraction, the added constant C has disappeared.
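The strip method and the closed-form answer can be checked against each other numerically. The sketch below uses arbitrary values for a, b, and the limits; it sums the strip areas y · dx with y taken at the middle of each strip, as described above. The added constant C never appears, since it cancels in the subtraction.

```python
# Midpoint strip sum for y = b + a*x^2 versus the closed-form area
# b(x2 - x1) + (a/3)(x2^3 - x1^3); a, b and the limits are arbitrary.
a, b = 3.0, 2.0
x1, x2 = 1.0, 4.0
n = 10_000                 # number of strips
dx = (x2 - x1) / n

# sum of the strip areas y*dx, with y taken at the middle of each strip
total = sum((b + a * (x1 + (i + 0.5) * dx) ** 2) * dx for i in range(n))

closed_form = b * (x2 - x1) + (a / 3) * (x2 ** 3 - x1 ** 3)  # = 69.0 here
```

Narrowing the strips (increasing n) drives the sum ever closer to the closed-form value, which is exactly the limiting process the integral sign denotes.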

Calculating as simple as possible and no simpler

“Calculus made Easy” shows how… easy most of the operations of the calculus really are. The aim of this book is to enable beginners to learn its language, to acquire familiarity with its endearing simplicities, and to grasp its powerful methods of solving problems…[2]

It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.[3]

1. Keisler, H. Jerome. Elementary calculus: An infinitesimal approach. Courier Corporation, 2012, p. xi.
2. Thompson, Silvanus Phillips. Calculus made easy. 2nd ed., enlarged, MacMillan, 1914.
3. Einstein, Albert. “On the method of theoretical physics.” Philosophy of Science 1.2 (1934): 163-169.