January 27, 2005
Complex Reality Part 1
There are thoughts that have been bugging me for several days now, about the complexity of the reality around us. This complexity is so pervasive that I don't know where to begin, actually. Yesterday I began to write a post, but it was horrible.
Reality can be described as a number of complex systems that interact with each other. Systems are objects: mechanical devices, computers, airplanes. But living cells and higher beings are systems too, as are groups of people. Corporations and markets are systems as well, together with the atmosphere and the land.
The degree of complexity of these systems varies enormously: it can go from trivial to inextricable. But there are aspects common to all systems: they are constituted of components (which can be sub-systems in their turn) that interact with each other, and often also communicate with each other.
It's the patterns of interaction and communication that vary wildly, in quality and quantity. A chemical plant (a fairly complex system) contains hundreds of components - thermometers, pressure sensors, flow and level sensors, heaters/coolers, pumps, valves, pressure regulators, reactors, distillation columns, flow regulators - just to cite the main ones. Communication between all these components passes through electrical signals (or, more recently, optic fibers) going from the sensors to processors, which first convert the instrument signal to a numerical value and display it to the human operators, then compare this value with a set point and/or with measurements from other instruments, and finally, if needed, send signals to actuators that open or close valves, or give more or less power to heaters, all in order to keep the process variables within the design limits - all this in the interest of quality and safety. This is a regulation mechanism. In many systems, like jetliners, certain operations cannot be performed if certain conditions are not met, because it would jeopardize safety. For example, commercial planes cannot deploy flaps at cruise speed, because it would destabilize the plane.
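The regulation mechanism just described - read a sensor, compare with a set point, drive an actuator - can be sketched in a few lines of code. This is a toy sketch, not real control software: the names (read_temperature, set_heater_power), the gain and the one-number "furnace" are all invented for illustration.

```python
def regulate(read_sensor, drive_actuator, set_point, gain=0.5):
    """One cycle of a simple proportional regulation loop."""
    value = read_sensor()          # raw measurement -> numerical value
    error = set_point - value      # compare with the set point
    drive_actuator(gain * error)   # correct in proportion to the error
    return value, error

# Toy usage: a furnace modelled as a single number.
temperature = 20.0

def read_temperature():
    return temperature

def set_heater_power(power):
    global temperature
    temperature += max(power, 0.0)  # heating only, no active cooling

for _ in range(50):
    regulate(read_temperature, set_heater_power, set_point=80.0)

print(round(temperature, 1))  # the furnace settles at the 80.0 set point
```

Real controllers add integral and derivative terms, but the skeleton - measure, compare, correct - is the same.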
But an Airbus A380 is a trivial system compared to even a simple animal, like a fly. Its components are cells, and structures smaller than cells. There are billions of them, and they communicate with each other all the time, through hundreds of chemical messengers and receptors. All these mechanisms are only partly understood. But probably the most complex system in existence is the human brain, with its staggering number of neurons and ever-astounding capabilities. We actually know very little of how a human brain really works.
Systems can be described with mathematical models (models are not necessarily mathematical; they can also be pictorial... you get the idea, I hope), but these models quickly tend to become difficult and cumbersome. Sometimes we don't know how to write the pertinent equations, and even when we can, there may not be enough computational power available to solve them. So, when facing the task of describing a system, scientists and engineers begin with a simple model, as simple as possible, taking into account only the major factors and features of the system. Then the results of this model are compared with experimental observations, and the model is modified to take into account minor factors and subtler features. The process is repeated until the predictions of the model become sufficiently accurate. What "sufficiently accurate" means varies greatly from case to case - and with the resources you can devote to the modelling phase. It can go from a rough approximation to numbers exact to the sixth decimal place. If the structure of the system is not well known, or it is impossible to measure the input and output variables properly, the resulting model will be poor.
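The "start simple, refine until accurate enough" loop can be made concrete with a small sketch. Everything here is hypothetical: the "real system" is a made-up function standing in for an experiment, and the candidate models add one term per round, just as the paragraph above describes.

```python
def observe(x):
    """Stand-in for an experiment on the real system (invented)."""
    return 2.0 * x + 0.3 * x * x

# Candidate models, from crudest to most refined.
models = [
    lambda x: 2.0 * x,                 # major factor only
    lambda x: 2.0 * x + 0.3 * x * x,   # plus a subtler feature
]

tolerance = 0.5                        # our "sufficiently accurate"
xs = [0.5 * i for i in range(10)]      # points where we can measure

for round_number, model in enumerate(models, start=1):
    worst = max(abs(model(x) - observe(x)) for x in xs)
    if worst <= tolerance:             # good enough: stop refining
        break

print(round_number, worst)  # the second, refined model meets the tolerance
```

In practice the comparison is against noisy measurements rather than a known function, but the iteration - model, compare, refine, repeat - is the same.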
As a general rule, the more complex a system is, the more difficult it will be to develop a sufficiently accurate mathematical model of it. We want a model in order to design systems that can successfully perform a certain task, but also to improve the performance of existing ones and to predict how certain systems will react to perturbations and changes in their operational environment. Mathematical modelling can also uncover similarities and analogies between apparently different systems, and that is of extreme importance.
Some systems reach a steady state (that's what chemical engineers strive for when designing their plants), while others tend to oscillate around an equilibrium position. It's an uncomfortable situation, but it happens. Some ecosystems follow this trend, and I strongly suspect that free markets fall in the same category too: phases of boom and bust are an inherent feature of them.
The extreme case is a chaotic system, one whose reactions to a perturbation or to environmental changes we are unable to predict with any useful accuracy (this is a pragmatic definition of a chaotic system). Steven DenBeste, who is a systems engineer himself, has written quite a few times about complex and chaotic systems.
Regulation mechanisms often manifest as feedback loops, which can be negative or positive (feedforward loops exist, too), and the feedback amplitude differs from system to system and from cycle to cycle. In some cases, it is even non-linear. An example of negative feedback is a "smart" temperature controller that reduces the heating power when a furnace is near its set temperature. On the other hand, there are level controllers that increase the flow out of a tank when the liquid level increases: positive feedback. Feedback loops can be interlocked, meaning that one influences another.
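The difference between negative and positive feedback can be shown in the crudest possible form: the same loop, differing only in the sign of the correction. The "system" here is just one state variable, and all the numbers are illustrative.

```python
def run(state, set_point, sign, steps=30, gain=0.3):
    """Drive one state variable with feedback of the given sign."""
    for _ in range(steps):
        error = set_point - state
        state += sign * gain * error   # sign=+1: negative feedback
    return state                       # sign=-1: positive feedback

negative = run(10.0, 50.0, sign=+1)    # converges toward the set point
positive = run(10.0, 50.0, sign=-1)    # runs away from it

print(round(negative, 2), positive)
```

With sign=+1 each cycle shrinks the error (the controller opposes the deviation); with sign=-1 each cycle amplifies it, and the state diverges - which is why positive feedback loops, left unregulated, end in saturation or disaster.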
Systems also show latency, that is, a time delay between the perturbation and the reaction. Latency can be tricky, because while we wait for a perturbation to cause an effect, another change with a shorter latency may occur, and thus the response we observe is the superimposition of two different perturbations.
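Latency can be sketched as a delay line: each perturbation takes effect only after its own latency has elapsed, so a later but faster perturbation can show up first, and the observed response superimposes the two. The events and magnitudes below are invented for illustration.

```python
def response_at(t, perturbations):
    """Sum the effects whose latency has elapsed by time t.

    perturbations: list of (time_applied, latency, magnitude).
    """
    return sum(magnitude
               for start, latency, magnitude in perturbations
               if t >= start + latency)

# A slow perturbation at t=0 (latency 5) and a fast one at t=2
# (latency 1): the later, faster one is felt first, at t=3.
events = [(0, 5, +1.0), (2, 1, -0.4)]

print([response_at(t, events) for t in range(7)])
```

An observer watching only the output at t=3 would naturally blame the first perturbation for a dip that the second one actually caused - exactly the trap described above.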
The last feature is hysteresis. This is a bit more difficult to explain: we apply a perturbation to a system, and we see that the system responds in a certain way and within a certain time. Then we remove the perturbation, and we notice that the system returns to its initial state following a different path, more slowly or faster than in the first case. Certain perturbations may even cause permanent changes. This is hysteresis.
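A toy illustration of hysteresis, assuming a thermostat with a dead band (a deliberately simplified stand-in for the general phenomenon): whether the heater is on at 20 degrees depends on the path taken to get there - rising or falling - not on the temperature alone.

```python
class Thermostat:
    """Heater control with a dead band between `low` and `high`."""

    def __init__(self, low=18.0, high=22.0):
        self.low, self.high = low, high
        self.heating = False

    def update(self, temperature):
        if temperature <= self.low:
            self.heating = True       # switch on only below the band
        elif temperature >= self.high:
            self.heating = False      # switch off only above the band
        return self.heating           # inside the band: keep last state

rising = Thermostat()
on_the_way_up = [rising.update(x) for x in (17, 19, 20)]

falling = Thermostat()
on_the_way_down = [falling.update(x) for x in (23, 21, 20)]

# Same final input (20 degrees), different state: path dependence.
print(on_the_way_up[-1], on_the_way_down[-1])
```

The state at a given input depends on the history of inputs, which is the defining trait of hysteresis; the dead band is there precisely so the heater doesn't chatter on and off around a single threshold.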
But what are the practical, everyday, implications of all this, you may ask?
Well, my time is over for now. Soon I will draw conclusions from this exposition of complex systems.