
## Severe Uncertainty

On this page I discuss some aspects of the modeling of severe uncertainty in the context of decision-making. This discussion is related to my Info-Gap campaign.

Inn keeper: "He also has with him a monkey with the rarest talent ever seen among monkeys or imagined among men, because if he's asked something, he pays attention to what he's asked, then jumps onto his master's shoulders and goes up to his ear and tells him the answer to the question, and then Master Pedro says what it is; he has more to say about past things than about future ones, and even though he isn't right all the time, he is not wrong most of the time, so he makes us think he has the devil in his body."

...

Don Quixote asked: "Señor Soothsayer, can your grace tell me ... what will become of us? ..."

Señor Soothsayer: "Señor, this animal does not respond or give information about things to come; about past things he knows a little, and about present ones, a little more."

"By God," said Sancho, "I wouldn't pay anything to have somebody tell me what's already happened to me! Who knows that better than me? And it would be foolish to pay anybody to tell me what I already know ..."

Cervantes, M., Don Quixote, 2003, HarperCollins, NY.
Part II, Chapter 25, p. 624.

## Overview

Recall that classical decision theory distinguishes between three levels of certainty/uncertainty pertaining to a state-of-affairs, namely

• Certainty

• Risk

• Uncertainty

The "Risk" category refers to situations where the uncertainty can be quantified by standard probabilistic constructs such as probability distributions.

In contrast, the "Uncertainty" category refers to situations where our knowledge about the parameter under consideration is so meagre that the uncertainty cannot be quantified even by means of an "objective" probability distribution.

The point is then that "Uncertainty" eludes "measuring". It is simply impossible to provide a means by which we could "measure" the level, or degree, of "Uncertainty" so as to indicate how great or daunting it is. To make up for this difficulty, a tradition has developed whereby the level, or degree, of "Uncertainty" is captured descriptively, that is, informally, through the use of "labels" such as these:

• Strict uncertainty
• Severe uncertainty
• Extreme uncertainty
• Deep uncertainty
• Substantial uncertainty
• Essential uncertainty
• Hard uncertainty
• High uncertainty
• True uncertainty
• Fundamental uncertainty
• Wild uncertainty
• Profound uncertainty
• Knightian uncertainty
• True Knightian uncertainty

The trouble is, however, that all too often, these terms are used as no more than buzzwords with a web of empty rhetoric spun around them. So, to guard against this, it is important to be clear on their meaning in the context of the problem under consideration.

It is heartening, therefore, that this message comes through loud and clear in the following assessment:

... Another concern of the committee regarding the content of this chapter involves the use of the concept of "robustness." The committee finds that this term is insufficiently defined. A plausible argument can be made that there is no meaningful distinction from usual optimality analysis and that the concept discussed in this report is a matter of a poorly defined utility function. If indeed there is a real technical distinction to be made, the authors should consider expanding and supporting the discussion of this concept. Furthermore, the committee suggests that the authors address the concept of adaptive management in conjunction with discussions of robustness and in particular address how different sources of uncertainty affect different kinds of decisions. Finally, the committee would appreciate a further elucidation of what the author considers to constitute "deep uncertainty" (page 34 and other locations). The committee understands that there is overlap between this concept and the others defined in this section (e.g., "robust"), but nevertheless finds that it is not entirely clear when the author considers the situation inappropriate for use of conventional methods for characterizing uncertainty.

Review of the U.S. Climate Change Science Program's Synthesis and Assessment Product 5.2
"Best Practice Approaches for Characterizing, Communicating, and Incorporating
Scientific Uncertainty in Climate Decision Making"
pp. 17-18, 2007
http://www.nap.edu/catalog/11873.html

So, ... if you are here for buzzwords, then you are in the wrong place, mate!

There are, of course, also the "legal" aspects of this terminology. For instance,

" ... The Norwest court noted that uncertainty is also required under the Code Sec. 174 regulations and under the process of experimentation test of Code Sec. 41. But since the economic risk test uses the term "substantial uncertainty," whereas Code Secs. 174 and 41 require only "uncertainty," the court found that the economic risk test required internal-use software to take a "further step" and to have a higher threshold of technology advancement than in other fields ..."
Rashkin, Michael
Practical Guide to Research and Development Tax Incentives: Federal, State, and Foreign
ISBN 0808014323, 9780808014324
405 pages

In any case, in this discussion I prefer to use the term "Severe Uncertainty".

I understand "Severe Uncertainty" to connote a state-of-affairs where uncertainty obtains with regard to the true value of a parameter of interest.  That is, the true value of this parameter is unknown and the estimate we have of this true (correct) value is:

• A wild guess.
• A poor indication of the true (correct) value of the parameter of interest.
• Likely to be substantially wrong.

According to some of my colleagues, under true severe uncertainty the estimate of the true value of the parameter of interest could be based even on

• Intuition
• Gut feeling
• Rumours

Clearly then, decision-making under severe uncertainty has to deal with situations where the estimate we have is based on the flimsiest grounds, sometimes ... rumours.

Needless to say, this is a formidable challenge, especially if we are expected to provide robust decisions.

## Example

The following example illustrates a typical situation of severe uncertainty.

We want to deliver a personal note to a young kangaroo, known to his friends as Jack. Unfortunately, we do not know Jack's exact whereabouts. All we know is that he was last seen in a huge game reserve somewhere in Australia.

In case you have not heard of Jack, he is the one on the left in the picture shown on the right hand side of the page. We suspect that this picture was taken about a year ago, but we are not sure. His passport photo, taken 3 years ago, is shown on the left.

We do not know where these pictures were taken.

Actually, some even argue that these are not Jack's pictures, but never mind.

In short, we do not know Jack's exact location. All we have is a very poor estimate of the location, a kind of wild guess. This estimate is likely to be substantially wrong.

## Knightian Uncertainty

This type of uncertainty is named after the economist Frank Hyneman Knight (1885-1972), who was one of the founders of the so-called "Chicago school of economics" and who is credited with the distinction between "risk" and "uncertainty":
To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term "risk" to designate the former and the term "uncertainty" for the latter. The word "risk" is ordinarily used in a loose way to refer to any sort of uncertainty viewed from the standpoint of the unfavorable contingency, and the term "uncertainty" similarly with reference to the favorable outcome; we speak of the "risk" of a loss, the "uncertainty" of a gain. But if our reasoning so far is at all correct, there is a fatal ambiguity in these terms, which must be gotten rid of, and the use of the term "risk" in connection with the measurable uncertainties or probabilities of insurance gives some justification for specializing the terms as just indicated. We can also employ the terms "objective" and "subjective" probability to designate the risk and uncertainty respectively, as these expressions are already in general use with a signification akin to that proposed.

The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of past experience), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique. The best example of uncertainty is in connection with the exercise of judgment or the formation of those opinions as to the future course of events, which opinions (and not scientific knowledge) actually guide most of our conduct.
Knight (1921, III.VIII.1-2)

Personally, I prefer the following characterization of uncertainty.  It is taken from a paper by the famous British economist John Maynard Keynes (1883 - 1946), whose "Keynesian economics" had a major impact on modern economic and political theory.

By "uncertain" knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed.
John Maynard Keynes (1937, pp. 213-214)
The General Theory of Employment
Quarterly Journal of Economics, 209-223, 1937.

## Decision-making under severe uncertainty

Given the enormity of the challenge presented by severe uncertainty, the question of course is: how do we approach the task of solving problems that are subject to these conditions?

The point here is that in the world of science the ruling convention is that the results generated by a model can be only as good as the estimates on which they are based. That is, the picture is this:

 unreliable input ----> Model ----> unreliable output

But considering the poor quality of the estimates we have under conditions of severe uncertainty, models based solely on these estimates are unlikely to generate reliable results.
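The point can be illustrated with a toy numeric experiment. Everything here (the quadratic loss, the numbers, the randomly perturbed estimate) is my own hypothetical choice, not anything from the literature: a decision that is "optimal" for a point estimate inherits whatever error the estimate carries.

```python
import numpy as np

# Hypothetical toy model of the picture above: the decision x that is
# "optimal" for the estimate u_est is simply x = u_est, and the actual
# shortfall is the squared distance from the true value.
rng = np.random.default_rng(seed=1)
u_true = 7.0                          # the unknown true value of the parameter
losses = []
for spread in (0.1, 1.0, 10.0):       # how wrong the estimate may be
    u_est = u_true + rng.uniform(-spread, spread)
    x = u_est                         # decision optimized against the estimate
    losses.append((x - u_true) ** 2)  # actual loss under the true value
print(losses)                         # the loss scales with the estimate's error
```

Garbage in, garbage out: nothing in the model itself repairs the quality of the estimate it is fed.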

So, it is important to be clear at the outset whether the methodology that one intends to employ for this purpose indeed addresses the severity of the uncertainty under consideration, and whether the strategy it sets out is indeed capable of providing a sensible solution to the problem in question.

In particular, it is important to determine, at the outset, whether a given methodology is based on a voodoo theory, that is, a theory based on the following relaxation of the above picture:

 unreliable input ----> Voodoo Model ----> reliable output

But how do you do this?

I shall address these issues ... in due course. For the time being, have a look at my other related campaigns.

One point that I do want to address now is the following.

## Info-Gap decision theory

It would seem that the temptation to tackle severe (Knightian) uncertainty by simply fixing on the estimate of the parameter of interest must be considerable. For, how else would you explain the rationale behind such a simplistic approach to the daunting challenges posed by severe uncertainty? Yet, this is precisely the approach to severe uncertainty that is advocated by Info-Gap decision theory.

Namely, Info-Gap's prescription for the treatment of severe uncertainty is to conduct the analysis exclusively in the neighborhood of a (very) rough estimate of the parameter of interest whose true value is subject to severe uncertainty.
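For concreteness, here is a minimal numeric sketch of that prescription, as I understand it from Ben-Haim's (2006) robustness model: the robustness α̂ is the largest radius α around the estimate û such that the performance requirement r_c is met everywhere in the region U(α, û). The scalar parameter, the interval uncertainty model and the toy reward function below are my own hypothetical choices, used only for illustration.

```python
import numpy as np

# Sketch of Info-Gap's robustness model:
#   alpha_hat = max { alpha >= 0 : min over u in U(alpha, u_hat) of R(u) >= r_c }
# Here U(alpha, u_hat) is taken to be the interval [u_hat - alpha, u_hat + alpha].

def robustness(R, u_hat, r_c, alphas, n_grid=201):
    """Largest alpha such that R(u) >= r_c for ALL u within alpha of u_hat."""
    best = 0.0
    for alpha in alphas:
        u = np.linspace(u_hat - alpha, u_hat + alpha, n_grid)
        if R(u).min() >= r_c:
            best = alpha      # requirement holds everywhere in U(alpha, u_hat)
        else:
            break             # worst case violated; stop enlarging the region
    return best

# Toy performance function: reward decays with distance from u = 10.
R = lambda u: 100.0 - 5.0 * np.abs(u - 10.0)
alpha_hat = robustness(R, u_hat=10.0, r_c=80.0, alphas=np.arange(0.0, 20.0, 0.5))
print(alpha_hat)  # -> 4.0
```

Note that the entire computation takes place inside the interval [û - α̂, û + α̂]; values of u outside this neighborhood of the estimate never enter the analysis, which is precisely the point of the criticism.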

Of course, you would argue that this amounts to ignoring the severity of the uncertainty altogether, which is tantamount to practicing voodoo decision-making, and I'll ... agree with you. Yet, you would be surprised how much mileage can be made out of this misguided, simplistic approach!

I shall discuss this issue in due course. For the time being consider the following Tale of the Treasure Hunt that I use extensively in my lectures to explain Info-Gap's woefully flawed treatment of Severe Uncertainty:

 The island represents the region of uncertainty under consideration (the region where the treasure is located). The tiny black dot represents the estimate of the parameter of interest (estimate of the location of the treasure). The large white circle represents the region of uncertainty affecting the analysis. The small white square represents the true (unknown) value of the parameter of interest (true location of the treasure) that is subject to severe uncertainty.

Following Info-Gap's prescription, we conduct the robustness analysis in the vicinity of Brisbane (QLD), whereas for all we know the true location of the treasure may be somewhere in the middle of the Simpson Desert (AUS) or perhaps in downtown Melbourne (VIC). Perhaps.

In short, in the language of the Land of No Worries, Info-Gap's recipe for the treatment of severe uncertainty is as follows:

Decision-Making Under Severe Uncertainty a la Info-Gap Decision Theory:

| Fundamental Difficulty | Robust Solution |
|---|---|
| The estimate we have is a wild guess, a poor indication of the true value of the parameter of interest, and is likely to be substantially wrong. | No worries, mate! Conduct the analysis in the immediate neighborhood of the poor estimate! |

As I explain to my colleagues and students, this prescription for the management of severe uncertainty is what makes Info-Gap decision theory a voodoo decision theory par excellence.

The question is: how far can one go (wrong) in this direction?

Apparently, the opportunities here are unbounded!

For example, consider this table, which is constructed from the information provided in Figure 1 of the paper "Accounting for uncertainty in marine reserve design" (Halpern et al. 2006, Ecology Letters, 9: 2–11).

| Model Approach | Uncertainty Assumptions |
|---|---|
| Traditional Statistics | normal distribution |
| Traditional Modeling | other distributions |
| Probability Theory | probability density |
| Bayesian Statistics | prior distribution |
| Probability Bounds | bounded prob. density |
| Interval Analysis | upper & lower limits |
| Info-gap Modeling | unbounded |

(In the original figure the approaches are arranged from left to right in order of increasing uncertainty.)

This is reproduced in a recent (2010) PhD dissertation entitled "The EU as an Actor in International Environmental Negotiations: The Role of the Mixity Principle in Fishery Agreements" (see Figure 1, page 124).

The "unbounded" feature is a reflection of the fact that, according to the info-gap literature (Ben-Haim 2001, 2006), the most commonly encountered info-gap models are unbounded.

One can argue that this is magic rather than science! How can a local analysis in the neighborhood of a poor estimate be considered a reliable approach to decision-making under severe uncertainty in cases where the uncertainty space is unbounded?
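To put a number on this objection, here is a small hypothetical sketch of my own: however large the robustness radius α may be, it explores a vanishing fraction of an uncertainty region whose half-width M grows without bound.

```python
# Hypothetical sketch: the fraction of an uncertainty region of half-width M
# (centered on the estimate) that a robustness radius alpha actually covers.
# For ANY finite alpha, the fraction tends to zero as M grows without bound,
# so the analysis remains local no matter what.

def covered_fraction(alpha, M):
    """Share of the interval [u_hat - M, u_hat + M] explored by radius alpha."""
    return min(alpha, M) / M

alpha = 5.0
for M in (10.0, 1e3, 1e6):
    print(M, covered_fraction(alpha, M))   # the covered fraction shrinks toward 0
```

In other words, on an unbounded uncertainty space the "robust" region around the estimate is, in relative terms, no region at all.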

And on what grounds can one claim (e.g. Halpern et al. 2006, p. 6) that such a local analysis " ... maximizes the reliability of an adequate outcome..." ?

As I indicated above, I don't regard this as magic. I regard it as voodoo decision-making.

## Modern Alchemy, Freudian Slips, Quick-Fixes and Suchlike

If you are taking it for granted that the quest for a magic formula capable of transforming severe lack of knowledge / information into substantial knowledge was abandoned with the Enlightenment, I have news for you!

Apparently, against all scientific odds, Info-Gap scholars have been successful in imputing likelihood and chance to results generated by a non-probabilistic model that is completely devoid of any notion of likelihood!

Recall that Info-Gap decision theory prides itself on being non-probabilistic and likelihood-free. Yet, Info-gap scholars -- the Father of Info-Gap included -- now claim that Info-Gap's robustness model is capable of identifying decisions that are most likely to satisfy a given performance requirement.

Consider for instance the following quotes (emphasis is mine):

Information-gap (henceforth termed 'info-gap') theory was invented to assist decision-making when there are substantial knowledge gaps and when probabilistic models of uncertainty are unreliable (Ben-Haim 2006). In general terms, info-gap theory seeks decisions that are most likely to achieve a minimally acceptable (satisfactory) outcome in the face of uncertainty, termed robust satisficing. It provides a platform for comprehensive sensitivity analysis relevant to a decision.

Burgman, Wintle, Thompson, Moilanen, Runge, and Ben-Haim (2008, p. 8).
Reconciling uncertain costs and benefits in Bayes nets for invasive species management
ACERA Endorsed Core Material: Final Report, Project 0601 - 0611, July 2008.

However, if they are uncertain about this model and wish to minimize the chance of unacceptably large costs, they can calculate the robust–optimal number of surveys with eqn 5.

Tracy M. Rout, Colin J. Thompson, and Michael A. McCarthy (2009, p. 785)
Robust decisions for declaring eradication of invasive species
Journal of Applied Ecology 46, 782–786.

This is a major scientific breakthrough!

For, until now we have been warned repeatedly by Info-Gap scholars that no likelihood must be attributed to results generated by Info-Gap decision models. Indeed, we have been advised that this would be deceptive and even dangerous (emphasis is mine):

However, unlike in a probabilistic analysis, r has no connotation of likelihood. We have no rigorous basis for evaluating how likely failure may be; we simply lack the information, and to make a judgment would be deceptive and could be dangerous. There may definitely be a likelihood of failure associated with any given radial tolerance. However, the available information does not allow one to assess this likelihood with any reasonable accuracy.

Ben-Haim (1994, p. 152)
Convex models of uncertainty: applications and implications
Erkenntnis, 4, 139-156.

This point is also made crystal clear in the second edition of the Info-Gap book (emphasis is mine):

In info-gap set models of uncertainty we concentrate on cluster-thinking rather than on recurrence or likelihood. Given a particular quantum of information, we ask: what is the cloud of possibilities consistent with this information? How does this cloud shrink, expand and shift as our information changes? What is the gap between what is known and what could be known? We have no recurrence information, and we can make no heuristic or lexical judgments of likelihood.

Ben-Haim (2006, p. 18)
Info-Gap Decision Theory: Decisions Under Severe Uncertainty

So the question is: have Info-gap scholars managed to accomplish a major feat in the area of decision-making under severe uncertainty?

Of course, the answer is that these new claims (Burgman et al. 2008, Rout et al. 2009) are not due to a breakthrough in decision-making under severe uncertainty, but rather to a serious misrepresentation of Info-Gap's robustness model, culminating in a thoroughly incorrect representation of the results.

My view on this episode -- based as it is on numerous discussions with Info-Gap scholars over the past five years -- is that these new claims are simply -- but not surprisingly -- ... Freudian slips.

The point is that -- see my FAQs about Info-Gap -- without imputing some sort of "likelihood" to Info-Gap's decision model, Info-Gap decision theory is, and cannot escape being, a voodoo decision theory.

So, all that these Freudian slips manage to do is to extend the already existing error -- an alternative that some Info-Gap scholars seem to prefer to an admission of a mistake.

It is interesting to note, though, that some Info-Gap scholars have taken note of my criticism of Info-Gap's robustness analysis, and now introduce an assumption that explicitly imputes "likelihood" to Info-Gap's uncertainty model. For instance, consider this (emphasis is mine):

An assumption remains that values of u become increasingly unlikely as they diverge from û.

Hall, J. and Harvey, H. (2009, p. 2)
Decision making under severe uncertainty for flood risk management: a case study of info-gap robustness analysis.
Eighth International Conference on Hydroinformatics
(January 12-16, 2009, Concepcion, Chile)
(PDF file)

Although this attempt at a quick-fix fails to fix the problem (see FAQ # 78), it does attest to a recognition that without such an assumption, conducting an analysis of the kind prescribed by Info-Gap's robustness model is utterly senseless.

One can only wonder then: how long will it take other Info-Gap scholars such as Burgman et al. (2008) and Rout et al. (2009) to reach this unavoidable conclusion?

Only time will tell (March 21, 2009).

Postscript:

The good news is that I am extremely pleased that, apparently for the first time, an official Government commissioned report takes notice of my criticism of Info-Gap decision theory.

This is long overdue!

Sadly, it is not an Aussie report!

As they say,

You can't be a prophet in your own land!

What a pity!

What a waste!

I hope that AU government agencies that sponsor info-gap projects will soon follow suit and re-examine this voodoo decision theory.

This is long overdue.

More important: I hope that senior academics promoting this theory in Australia and elsewhere will reconsider their position, especially insofar as supervising PhD students on this subject.

This is long overdue!

In any case, the following paragraph is a quote from page 75 of the 2009 Department for Environment Food and Rural Affairs (DEFRA) report:

More recently, Info-Gap approaches that purport to be non-probabilistic in nature developed by Ben-Haim (2006) have been applied to flood risk management by Hall and Harvey (2009). Sniedovich (2007) is critical of such approaches as they adopt a single description of the future and assume alternative futures become increasingly unlikely as they diverge from this initial description. The method therefore assumes that the most likely future system state is known a priori. Given that the system state is subject to severe uncertainty, an approach that relies on this assumption as its basis appears paradoxical, and this is strongly questioned by Sniedovich (2007).

Mervyn Bramley, Ben Gouldby, Anthony Hurford, Jaap-Jeroen Flikweert
Marta Roca Collell, Paul Sayers, Jonathan Simm, Michael Wallis
Delivering Benefits Through Evidence
PAMS (Performance-based Asset Management System)
Phase 2 Outcome Summary Report (PDF File)
Project: SC040018/R1
Environment Agency -- December 2009
Department for Environment Food and Rural Affairs
UK

The diplomatic language of the report cannot veil the obvious fact: Info-Gap decision theory is a voodoo theory!

But, .... if the flaws are so obvious, how is it that senior academics continue to promote this paradoxical, voodoo theory?

Hence, ... my campaign is still on!

## The Black Swan

Only time will tell what impact (if any) Nassim Taleb's recent popular and controversial book The Black Swan: The Impact of the Highly Improbable will have on the field of decision-making under severe uncertainty.

I, for one, hope that the issues raised in this book and in its predecessor, Fooled by Randomness: The Hidden Role of Chance in the Markets and in Life, will be instrumental in helping decision-makers to identify voodoo decision theories -- such as Info-Gap decision theory -- that promise robust decisions under severe uncertainty.

I fear though -- in view of my experience of the past 40 years -- that the huge success of The Black Swan will inspire a new wave of voodoo decision theories, purportedly capable of ... "domesticating" black swans and preempting the discovery of ... purple swans!

We shall have to wait and see.

For those who have "been in hiding" I should note that Taleb has become quite a celebrity. According to the Prudent Investor Newsletters (Tuesday, June 3, 2008):

• Mr. Taleb charges about \$60,000 per speaking engagement and does about 30 presentations a year "to bankers, economists, traders, even to Nasa, the US Fire Administration and the Department of Homeland Security," according to Timesonline's Bryan Appleyard.

• He recently got \$4 million as an advance payment for his next much-awaited book.

• Earned \$35-\$40 million on a huge Black Swan event -- the biggest stockmarket crash in modern history -- Black Monday, October 19, 1987.

So, if you haven’t heard him in person you can easily find on the WWW numerous videos of his interviews.

Here is a link to a very short (2:45 min) clip, recorded by Taleb himself, apparently at Heathrow Airport, of 10 tips on how to deal with Black Swans, and life in general.

1. Scepticism is effortful and costly. It is better to be sceptical about matters of large consequences, and be imperfect, foolish and human in the small and the aesthetic.

2. Go to parties. You can't even start to know what you may find on the envelope of serendipity. If you suffer from agoraphobia, send colleagues.

3. It's not a good idea to take a forecast from someone wearing a tie. If possible, tease people who take themselves and their knowledge too seriously.

4. Wear your best for your execution and stand dignified. Your last recourse against randomness is how you act -- if you can't control outcomes, you can control the elegance of your behaviour. You will always have the last word.

5. Don't disturb complicated systems that have been around for a very long time. We don't understand their logic. Don't pollute the planet. Leave it the way we found it, regardless of scientific 'evidence'.

6. Learn to fail with pride -- and do so fast and cleanly. Maximise trial and error -- by mastering the error part.

7. Avoid losers. If you hear someone use the words 'impossible', 'never', 'too difficult' too often, drop him or her from your social network. Never take 'no' for an answer (conversely, take most 'yeses' as 'most probably').

8. Don't read newspapers for the news (just for the gossip and, of course, profiles of authors). The best filter to know if the news matters is if you hear it in cafes, restaurants ... or (again) parties.

9. Hard work will get you a professorship or a BMW. You need both work and luck for a Booker, a Nobel or a private jet.

10. Answer e-mails from junior people before more senior ones. Junior people have further to go and tend to remember who slighted them.

It is interesting to juxtapose Prof. Taleb's thesis in The Black Swan -- that severe uncertainty makes (reliable) prediction in the socio/economic/political spheres impossible -- with the polar position taken by his colleague, Prof. Bruce Bueno de Mesquita, who actually specializes in predicting the future.

One thing is for sure: sooner or later info-gap scholars will find a simple, reliable recipe for handling Black Swans!

Stay tuned!

And what do you know? See Review 17.

It was bound to happen!

Not only professionals specializing in "decision under uncertainty", but also the proverbial "man in the street", take it for granted that the ability to accurately predict future events is one of the most formidable challenges facing humankind -- especially for persons in authority, persons responsible for the management of business or economic organizations, etc.

A notable exception to this rule is the "New Nostradamus": Prof. Bruce Bueno de Mesquita, a political science professor at New York University and Senior Fellow at the Hoover Institution, who, according to Good Magazine, specializes in predicting future events -- at least in the area of international conflicts.

The claim is that this distinguished political scientist can actually predict the outcome of any international conflict!

To do this, Prof. Bueno de Mesquita does not use a Crystal Ball, but a thoroughly scientific method which, he claims, is based on a branch of applied mathematics called Game Theory.

" ... Bruce Bueno de Mesquita is a political scientist, professor at New York University, and senior fellow at the Hoover Institution. He specializes in international relations, foreign policy, and nation building. He is also one of the authors of the selectorate theory.

He has founded a company, Mesquita & Roundell, that specializes in making political and foreign-policy forecasts using a computer model based on game theory and rational choice theory. He is also the director of New York University's Alexander Hamilton Center for Political Economy.

He was featured as the primary subject in the documentary on the History Channel in December 2008. The show, titled Next Nostradamus, details how the scientist is using computer algorithms to predict future world events ..."

Here is an interview with Prof. Bueno de Mesquita (with Riz Khan - The art and science of prediction - 09 Jan 08):

And here is a 20-minute lecture on the ... future of Iran (TED, February 2009):

Apparently, all you need to accomplish this is a computer, expert-knowledge on Iran, and game theory!

Some of the predictions attributed to Prof. Bueno de Mesquita are:

1. The second Palestinian Intifada and the death of the Mideast peace process, two years before this came to pass.

2. The succession of the Russian leader Leonid Brezhnev by Yuri Andropov, who at the time was not even considered a contender.

3. The voting out of office of Daniel Ortega and the Sandinistas in Nicaragua, two years before this happened.

4. The harsh crack down on dissidents by China's hardliners four months before the Tiananmen Square incident.

5. France's hair's-breadth passage of the European Union's Maastricht Treaty.

6. The exact implementation of the 1998 Good Friday Agreement between Britain and the IRA.

7. China's reclaiming of Hong Kong and the exact manner the handover would take place, 12 years before it happened.

Impressive, isn't it!

As might be expected, these and similar claims by Prof. Bueno de Mesquita have sparked a vigorous debate, not only in the professional journals but also on the WWW. Interested readers can consult this material to see for themselves whether Bueno de Mesquita's claims attest to a major scientific breakthrough or ... voodoo mathematics. In addition, you may want to have a look at a short video clip by Matt Brawn, which he compiled in response to a short note entitled This man can actually predict the future!.

Of particular interest is, of course, the "success" rate of Prof. Bueno de Mesquita's predictions: over 90% — yes, over ninety percent!

Here is Trevor Black's common sense reaction to this claim:

I am a little skeptical about anyone who claims to have a 90% success rate. I just don't buy it. Especially when they say that they can explain away a lot of the other 10%.

If you come to me and tell me you have a model that gets it right 60% or 70% of the time, I may listen. Skeptically, but I will listen. 90% and I start to smell something.

All I wish to add here is that Prof. Bueno de Mesquita makes his predictions under conditions of "severe uncertainty", which of course render them hugely vulnerable to what Prof. Nassim Taleb dubs the Black Swan phenomenon.

Hence, the very proposition that such predictions can be made at all, let alone be reliable, is diametrically opposed to Nassim Taleb's position. For his thesis is that Black Swans are totally outside the purview of mathematical treatment, especially by models based on expected utility theory and rational choice theory. Interestingly, though, this is precisely the stuff that Prof. Bueno de Mesquita's method is made of: expected utility theory and rational choice theory! Even more interesting is the fact that Nassim Taleb and Bueno de Mesquita are faculty members of the same academic institution, namely New York University. So, all that's left to say is: Go figure!

As indicated above, the debate over Bueno de Mesquita's theories is not new. It has been ongoing, in the relevant academic literature, at least since the publication of his book The War Trap (1981).

For an idea of the kind of criticism sparked by his work, take a look at the quotes I provide from articles that are critical of Bueno de Mesquita's theories.

Of course, there are other New Nostradamuses around.

According to the Associated Press, the latest (2009, Mar 4, 4:39 AM EST) news from Russia about the future of the USA is that

" ... President Barack Obama will order martial law this year, the U.S. will split into six rump-states before 2011, and Russia and China will become the backbones of a new world order ..."

Apparently this prediction was made by Igor Panarin, Dean of the Russian Foreign Ministry's diplomatic academy and a regular on Russia's state-controlled TV channels (see the full AP news report).

Regarding the future of Russia,

"You don't sound too hopeful".
"Hopeful? Please, I am Russian. I live in a land of mad hopes, long queues, lies and humiliations. They say about Russia we never had a happy present, only a cruel past and a quite amazing future ..."
Malcolm Bradbury, To the Hermitage (2000, p. 347)

We should therefore be reminded of J K Galbraith's (1908-2006) poignant observation:

There are two classes of forecasters: those who don't know and those who don't know they don't know.

And in the same vein,

The future is just what we invent in the present to put an order over the past.
Malcolm Bradbury, Doctor Criminale (1992, p. 328)

So, we shall have to wait and see.

And how about this more recent piece by Heath Gilmore and Brian Robins in the Sydney Morning Herald (March 27, 2009):

"... COUPLES wondering if the love will last could find out if theirs is a match made in heaven by subjecting themselves to a mathematical test.

A professor at Oxford University and his team have perfected a model whereby they can calculate whether the relationship will succeed.

In a study of 700 couples, Professor James Murray, a maths expert, predicted the divorce rate with 94 per cent accuracy.

His calculations were based on 15-minute conversations between couples who were asked to sit opposite each other in a room on their own and talk about a contentious issue, such as money, sex or relations with their in-laws.

Professor Murray and his colleagues recorded the conversations and awarded each husband and wife positive or negative points depending on what was said. ..."

Such interviews should perhaps be made mandatory for all couples registering their marriage.

More details on the mathematics of marriage can be found in The Mathematics of Marriage: Dynamic Nonlinear Models by J.M. Gottman, J.D. Murray, C. Swanson, R. Tyson, and K.R. Swanson (MIT Press, Cambridge, MA, 2002.)

On a more positive note, though, here is an online Oracle from Melbourne (Australia: the land of the real Black Swan!).

You may wish to consult this friendly 24/7 facility about important "Yes/No" questions that you no doubt have about the future.

More on this and related topics can be found in the pages of the Worst-Case Analysis / Maximin Campaign, Severe Uncertainty, and the Info-Gap Campaign.

## Recent Articles, Working Papers, Notes

Also, see my complete list of articles
• Sniedovich, M. (2012) Fooled by local robustness, Risk Analysis, in press.

• Sniedovich, M. (2012) Black swans, new Nostradamuses, voodoo decision theories and the science of decision-making in the face of severe uncertainty, International Transactions in Operational Research, in press.

• Sniedovich, M. (2011) A classic decision theoretic perspective on worst-case analysis, Applications of Mathematics, 56(5), 499-509.

• Sniedovich, M. (2011) Dynamic programming: introductory concepts, in Wiley Encyclopedia of Operations Research and Management Science (EORMS), Wiley.

• Caserta, M., Voss, S., Sniedovich, M. (2011) Applying the corridor method to a blocks relocation problem, OR Spectrum, 33(4), 815-929.

• Sniedovich, M. (2011) Dynamic Programming: Foundations and Principles, Second Edition, Taylor & Francis.

• Sniedovich, M. (2010) A bird's view of Info-Gap decision theory, Journal of Risk Finance, 11(3), 268-283.

• Sniedovich M. (2009) Modeling of robustness against severe uncertainty, pp. 33-42, Proceedings of the 10th International Symposium on Operational Research, SOR'09, Nova Gorica, Slovenia, September 23-25, 2009.

• Sniedovich M. (2009) A Critique of Info-Gap Robustness Model. In: Martorell et al. (eds), Safety, Reliability and Risk Analysis: Theory, Methods and Applications, pp. 2071-2079, Taylor and Francis Group, London.
• Sniedovich M. (2009) A Classical Decision Theoretic Perspective on Worst-Case Analysis, Working Paper No. MS-03-09, Department of Mathematics and Statistics, The University of Melbourne. (PDF File)

• Caserta, M., Voss, S., Sniedovich, M. (2008) The corridor method - A general solution concept with application to the blocks relocation problem. In: A. Bruzzone, F. Longo, Y. Merkuriev, G. Mirabelli and M.A. Piera (eds.), 11th International Workshop on Harbour, Maritime and Multimodal Logistics Modeling and Simulation, DIPTEM, Genova, 89-94.

• Sniedovich, M. (2008) FAQS about Info-Gap Decision Theory, Working Paper No. MS-12-08, Department of Mathematics and Statistics, The University of Melbourne. (PDF File)

• Sniedovich, M. (2008) A Call for the Reassessment of the Use and Promotion of Info-Gap Decision Theory in Australia (PDF File)

• Sniedovich, M. (2008) Info-Gap decision theory and the small applied world of environmental decision-making, Working Paper No. MS-11-08
This is a response to comments made by Mark Burgman on my criticism of Info-Gap (PDF file)

• Sniedovich, M. (2008) A call for the reassessment of Info-Gap decision theory, Decision Point, 24, 10.

• Sniedovich, M. (2008) From Shakespeare to Wald: modeling worst-case analysis in the face of severe uncertainty, Decision Point, 22, 8-9.

• Sniedovich, M. (2008) Wald's Maximin model: a treasure in disguise!, Journal of Risk Finance, 9(3), 287-291.

• Sniedovich, M. (2008) Anatomy of a Misguided Maximin formulation of Info-Gap's Robustness Model (PDF File)
In this paper I explain, again, the misconceptions that Info-Gap proponents seem to have regarding the relationship between Info-Gap's robustness model and Wald's Maximin model.

• Sniedovich. M. (2008) The Mighty Maximin! (PDF File)
This paper is dedicated to the modeling aspects of Maximin and robust optimization.

• Sniedovich, M. (2007) The art and science of modeling decision-making under severe uncertainty, Decision Making in Manufacturing and Services, 1-2, 111-136. (PDF File)

• Sniedovich, M. (2007) Crystal-Clear Answers to Two FAQs about Info-Gap (PDF File)
In this paper I examine the two fundamental flaws in Info-Gap decision theory, and the flawed attempts to shrug off my criticism of Info-Gap decision theory.

• My reply (PDF File) to Ben-Haim's response to one of my papers. (April 22, 2007)

This is an exciting development!

• Ben-Haim's response confirms my assessment of Info-Gap. It is clear that Info-Gap is fundamentally flawed and therefore unsuitable for decision-making under severe uncertainty.

• Ben-Haim is not familiar with the fundamental concept of a point estimate. He does not realize that a function can be a point estimate of another function.

So when you read my papers, make sure that you do not misinterpret the notion of a point estimate. The phrase "A is a point estimate of B" simply means that A is an element of the same topological space that B belongs to. Thus, if B is, say, a probability density function and A is a point estimate of B, then A is a probability density function belonging to the same (assumed) set (family) of probability density functions.

Ben-Haim mistakenly assumes that a point estimate is a point in a Euclidean space, and therefore that a point estimate cannot be, say, a function. This is incredible!
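To make the intended meaning concrete, here is a small illustration in symbols (the notation is mine, chosen purely for illustration):

```latex
% "A is a point estimate of B" means: A and B are elements of the
% same space -- which need not be a Euclidean space. For instance,
% let \mathcal{F} be an (assumed) family of probability density
% functions containing the true, unknown density f. Then a point
% estimate of f is simply some member \tilde{f} of the same family:
\[
   f \in \mathcal{F}, \qquad \tilde{f} \in \mathcal{F},
   \qquad \tilde{f} \ \text{is a point estimate of} \ f .
\]
% Here both f and \tilde{f} are functions, not points in \mathbb{R}^n.
```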

• A formal proof that Info-Gap is Wald's Maximin Principle in disguise. (December 31, 2006)
This is a very short article entitled Eureka! Info-Gap is Worst Case (maximin) in Disguise! (PDF File)
It shows that Info-Gap is not a new theory but rather a simple instance of Wald's famous Maximin Principle dating back to 1945, which in turn goes back to von Neumann's work on Maximin problems in the context of Game Theory (1928).
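For readers who prefer not to open the PDF, the gist of the argument can be sketched schematically as follows (the symbols here are illustrative and not necessarily those used in the article):

```latex
% Wald's Maximin model (1945): choose the decision d whose
% worst-case payoff over the states s is best:
\[
   \max_{d \in D} \ \min_{s \in S(d)} \ f(d,s)
\]
% Info-Gap's robustness model: for a decision q, find the largest
% horizon of uncertainty \alpha such that the reward R(q,u) meets
% the critical level r_c for every u in the uncertainty set
% U(\alpha,\tilde{u}) around the point estimate \tilde{u}:
\[
   \hat{\alpha}(q) \;=\; \max\Bigl\{\, \alpha \ge 0 \;:\;
      r_c \,\le\, \min_{u \in U(\alpha,\tilde{u})} R(q,u) \,\Bigr\}
\]
% Reading the inner min as a worst case over u, the robustness model
% maximizes over \alpha subject to an acceptable worst case -- which
% is why it is an instance of a Maximin model.
```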

• A proof that Info-Gap's uncertainty model is fundamentally flawed. (December 31, 2006)
This is a very short article entitled The Fundamental Flaw in Info-Gap's Uncertainty Model (PDF File).
It shows that because Info-Gap deploys a single point estimate under severe uncertainty, there is no reason to believe that the solutions it generates are likely to be robust.

• A math-free explanation of the flaw in Info-Gap. ( December 31, 2006)
This is a very short article entitled The GAP in Info-Gap (PDF File).
It is a math-free version of the paper above. Read it if you are allergic to math.

• A long essay entitled What's Wrong with Info-Gap? An Operations Research Perspective (PDF File) (December 31, 2006).
This is a paper that I presented at the ASOR Recent Advances in Operations Research (PDF File) mini-conference (December 1, 2006, Melbourne, Australia).

## Recent Lectures, Seminars, Presentations

If your organization is promoting Info-Gap, I suggest that you invite me for a seminar at your place. I promise to deliver a lively, informative, entertaining and convincing presentation explaining why it is not a good idea to use — let alone promote — Info-Gap as a decision-making tool.

Here is a list of relevant lectures/seminars on this topic that I have given over the last two years.

Disclaimer: This page, its contents and style, are the responsibility of the author (Moshe Sniedovich) and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.