Inherently uncertain forecasts

Despite supercomputers: why nature's behavior is so hard to predict

Forecasts are difficult, especially when they concern the future. Technology all too often tempts us to forget this simple piece of wisdom. Ever newer supercomputer systems compute the state of the world to the meter and to the degree, and yet nothing is known precisely; one has to settle for "could". The discrepancy is particularly striking in the current climate debate: while some researchers predict warming for individual German regions to within 0.5 degrees, others do not manage to pin down the weather a couple of days ahead.

The US mathematician James McWilliams explains why this is a fundamental problem of computer simulations of the real world in a recent publication of the US National Academy of Sciences. McWilliams refers specifically to atmosphere and ocean models (Atmospheric and Oceanic Simulation, AOS), but also draws interesting general conclusions.

Such AOS models are used primarily in weather and climate research, but also for special problems, for instance in connection with turbulence, clouds or gravity waves. Their advantage is that they can handle phenomena that are not yet accessible to purely mathematical treatment. What they have in common, however, is that they rarely reproduce reality exactly, even when they have been specifically tuned to particular natural conditions beforehand. As a result, forecasts investigating mankind's influence on the global climate, for example, all develop plausibly in a particular direction, yet differ quantitatively by several tens of percent even in important parameters.

According to McWilliams, this is an inability to reproduce nature that is inherent in computer simulation and cannot be reduced. One of its causes is that the influencing factors operate on vastly different temporal and spatial scales. Currents in the atmosphere and ocean unfold on time scales from a season to a millennium and over distances of thousands of kilometers, while fluids mix and dissipate on scales of millimeters and milliseconds. Even with an optimistic view of supercomputer technology, it seems impossible that all these factors can ever be taken into account.

This leads to two important limitations: AOS models become particularly inaccurate when they are pushed to their temporal and spatial resolution limits. And they contain artificial simplifications, parameters chosen and fixed when the model is built that cannot be derived from first principles.

The degree of uncertainty has remained astonishingly constant since the first simulations at the end of the 1970s

With problems as complex as climate simulation, one also refrains from including all theoretically possible couplings between individual parameters. If a certain quantity changes, the quantities that depend on it change as well: if winters become warmer, for example, less heating is needed and carbon dioxide emissions fall. It is hardly possible to follow all such couplings through to the bitter end in a computer simulation, especially since their influence on the result is often rather small, although one never knows that for certain in advance. A crude toy calculation of such a coupling is sketched below.
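To make the idea concrete, here is a deliberately simple toy calculation (not from McWilliams' paper, with made-up numbers): a warming-heating-emissions feedback is switched on or off, and comparing the two runs shows how dropping a coupling shifts the end result, usually only a little, but by an amount one does not know beforehand.

```python
# Toy sketch of a neglected coupling: warmer winters mean less heating,
# hence lower CO2 emissions, which in turn damps the warming.
# All numbers are invented and serve only to illustrate the principle.

def run(years=50, feedback=True):
    temp_anomaly, co2 = 0.0, 1.0                 # arbitrary units
    for _ in range(years):
        # warmer winters reduce heating demand -- only if the coupling is kept
        heating = 1.0 - (0.1 * temp_anomaly if feedback else 0.0)
        co2 += 0.02 * heating                    # emissions raise the CO2 level
        temp_anomaly = 0.5 * (co2 - 1.0)         # CO2 level drives the anomaly
    return temp_anomaly

print(f"with the coupling:    {run(feedback=True):.3f}")
print(f"without the coupling: {run(feedback=False):.3f}")
```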

What makes AOS simulations even more difficult is the so-called butterfly effect: in a dynamical model, even the smallest quantities, below the limit of measurement accuracy, can amplify over time to the point where they influence the behavior of the whole system.
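How drastic this effect is can be seen with a toy system that has nothing to do with full AOS models but shows the same mechanism: the classic Lorenz equations. In the following sketch, purely for illustration, two runs start from states that differ by one part in ten billion, far below any measurement accuracy, and nevertheless drift apart completely after a short simulated time.

```python
# Minimal illustration of the butterfly effect with the Lorenz-63 system.
# Two runs start from initial states differing by 1e-10, yet their
# trajectories separate completely after a while.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One classical Runge-Kutta step of size dt."""
    k1 = lorenz(state)
    k2 = lorenz([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = lorenz([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = lorenz([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

a = [1.0, 1.0, 1.0]
b = [1.0, 1.0, 1.0 + 1e-10]   # perturbation far below measurement accuracy
dt = 0.01
for step in range(1, 5001):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 1000 == 0:
        dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * dt:5.1f}  separation = {dist:.3e}")
```

At first the separation stays microscopic, then it grows until the two runs are as different as two arbitrarily chosen states of the system.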

These problems, which McWilliams demonstrates in his paper both mathematically and with examples, cannot be solved in principle. The IPCC report (cf. Climate protection is feasible) shows this quite clearly: it states that a doubled carbon dioxide concentration in the atmosphere leads to an increase in mean surface temperature of about three degrees, with an uncertainty of plus or minus 50 percent, that is, somewhere between 1.5 and 4.5 degrees. This degree of uncertainty has remained remarkably constant since the first simulations at the end of the 1970s, even though the supercomputers of that era can hardly keep up with today's PCs.

So it is not a problem of raw computing power. If one knows these limits, one can assess the models better and build in a few safeguards. It has proved practical, for instance, to compute whole ensembles of simulations: that way one can better determine the degree of inaccuracy of a specific model. But even knowing this, one cannot simply treat the "middle" model as the most accurate; it is equally possible that a model at the edge of the ensemble reproduces reality best.
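What such an ensemble looks like in miniature can be sketched with the same kind of toy model; the setup below is hypothetical and only meant to show the principle: twenty runs with slightly perturbed starting values, whose spread is then read as a rough measure of the forecast uncertainty.

```python
# Sketch of a simple ensemble experiment (hypothetical setup, not
# McWilliams' actual procedure): many runs of the toy Lorenz-63 system,
# each started from a slightly perturbed initial state.
import random

def integrate(state, dt=0.002, steps=10000):
    """Integrate Lorenz-63 with plain Euler steps (good enough for a demo)."""
    x, y, z = state
    for _ in range(steps):
        dx = 10.0 * (y - x)
        dy = x * (28.0 - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x  # the final x component serves as a stand-in "forecast"

random.seed(0)
forecasts = [integrate([1.0 + random.uniform(-1e-3, 1e-3), 1.0, 1.0])
             for _ in range(20)]

mean = sum(forecasts) / len(forecasts)
spread = (sum((f - mean) ** 2 for f in forecasts) / len(forecasts)) ** 0.5
print(f"ensemble mean = {mean:+.2f}, spread = {spread:.2f}")
print(f"range = [{min(forecasts):+.2f}, {max(forecasts):+.2f}]")
# Note: the member closest to reality need not be the one nearest the mean.
```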

McWilliams therefore demands that in such simulation studies not only the initial conditions be varied systematically (as is already common today), but also the way the parameters enter the model, even though this places far greater demands on the construction of such a suite of models.
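The difference between the two approaches can again be sketched at the toy level. This is a strong simplification of what McWilliams actually proposes, since his point also concerns structural modelling choices; here, as a stand-in, only a single fixed parameter of the toy model is varied in addition to the starting values.

```python
# Contrast two hypothetical ensembles of the toy Lorenz-63 model:
# one varies only the initial state, the other additionally varies a
# parameter (rho) that would normally be fixed when the model is built.
import random

def integrate(state, rho, dt=0.002, steps=10000):
    """Euler integration of Lorenz-63 with an adjustable parameter rho."""
    x, y, z = state
    for _ in range(steps):
        dx = 10.0 * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x

random.seed(1)
initial_only, parameter_too = [], []
for _ in range(20):
    state = [1.0 + random.uniform(-1e-3, 1e-3), 1.0, 1.0]
    initial_only.append(integrate(state, rho=28.0))           # vary start only
    parameter_too.append(integrate(state,                      # vary start and rho
                                   rho=28.0 * random.uniform(0.95, 1.05)))

def spread(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

print(f"spread, initial conditions only:  {spread(initial_only):.2f}")
print(f"spread, initial conditions + rho: {spread(parameter_too):.2f}")
```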
