The Spin Stops Here!
Decision-Making Under Severe Uncertainty  

Responsible decision-making in the face of severe uncertainty


What prompted me to create this page was the observation that there are gross misconceptions out there as to what constitutes "Responsible" decision-making under conditions of severe uncertainty.

The specific trigger for this project was this beauty:

Making Responsible Decisions (When it Seems that You Can't)
Engineering Design and Strategic Planning Under Severe Uncertainty

What happens when the uncertainties facing a decision maker are so severe that the assumptions in conventional methods based on probabilistic decision analysis are untenable? Jim Hall and Yakov Ben-Haim describe how the challenges of really severe uncertainties in domains as diverse as climate change, protection against terrorism and financial markets are stimulating the development of quantified theories of robust decision making.

Hall and Ben-Haim, 2007, p. 1

This is the opening paragraph of a paper that has been posted on the FloodRiskNet (UK) web site since November 2007. I submitted critical comments on this paper twice, but never received any response other than an "out of office" AutoReply (Feb 8, 2008).

So, no big deal.

I'll comment on this paper -- and related issues -- here. This discussion is intimately related to my campaign to contain the spread of

Voodoo Decision Theory

in Australia.

This is, and will continue to be, "work in progress". For the time being all I do is post here the comments I sent Jim Hall in Feb 2008.

Info-Gap decision theory

The point of Hall and Ben-Haim's (2007) paper is to advocate the use of Info-Gap decision theory for responsible decision-making under severe uncertainty.

It is important therefore to point out, especially for the benefit of readers who are not familiar with Info-Gap Decision Theory, that this theory turns a blind eye to the universal

GIGO Axiom

Garbage In --- Garbage Out

As a consequence it does not follow the well-known maxim:

GIGO Corollary

The results of an analysis are only as good as the estimates on which they are based.

Hence, the Info-Gap rhetoric would have us believe that, apparently by some miracle, an analysis conducted in the immediate neighborhood of a wild guess can generate results that are ... meaningful / worthwhile / useful / reliable, etc.!!!

But the most astounding thing of all is that, in their paper, Hall and Ben-Haim (2007) do not provide even a single reference to the very relevant and thriving field of Robust Optimization.

How about this!!!

See my Voodoo Decision-Making Campaign for details. I also discuss this issue in my contribution to

WIKIPEDIA article on Info-Gap Decision Theory

and in

The Mighty Maximin!

where I show, among other things, that the Info-Gap robustness model is in fact a ... simple Maximin model (circa 1940) in disguise.

In short, not only do Hall and Ben-Haim (2007) unwittingly re-invent a very famous 1940 wheel, they do not use it properly.

The comment

Here is the comment I sent Jim in February 2008.

The title of the article suggests that the methodology that it promotes, namely Info-Gap, is almost too good to be true.

After all, we are led to believe that there is a new methodology out there that is capable of generating robust decisions in situations where "... the uncertainties facing a decision maker are so severe that the assumptions in conventional methods based on probabilistic decision analysis are untenable ..."

With this as the appetizer, what should we expect for the main course? Would it be a description of a breakthrough in decision-making under severe uncertainty or ... a recipe for voodoo decision making?

Let's see.

Classical decision theory offers the Maximin paradigm as a "natural" framework for dealing with robust decision-making under severe uncertainty. And as we know only too well, the price tag for the ultimate robustness provided by this paradigm is significant: the worst case philosophy underpinning Maximin can result in extremely conservative decisions. This is the familiar consequence of over-protection.
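In symbols (my notation, not the article's): given a set D of decisions and, for each decision d, a set S(d) of admissible states of nature, Wald's Maximin rule selects

$$ d^{*} \in \arg\max_{d \in D}\ \min_{s \in S(d)} f(d,s) $$

where f(d,s) denotes the payoff generated by decision d should the true state turn out to be s. Each decision is judged by its worst-case payoff, and a decision whose worst case is best is selected.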

So the question arises: how is it that Maximin, the stalwart of robust decision-making, is not mentioned, let alone discussed, in the article? Indeed, how is it that this paradigm is not mentioned in the two editions of the Info-Gap book?

Readers interested in this aspect of Info-Gap are welcome to visit my website to read more about this intriguing phenomenon. Here it suffices to point out that Info-Gap's generic model is -- surprise, surprise -- a simple Maximin model (the formal proof is two lines long).
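For the record, here is a sketch of that proof, in generic notation (mine, not Info-Gap's). Info-Gap's robustness model

$$ \hat{\alpha}(q,r_{c}) \;=\; \max\Bigl\{\alpha \ge 0 \;:\; r_{c} \le \min_{u \in U(\alpha,\hat{u})} R(q,u)\Bigr\} $$

where q denotes the decision, R(q,u) its performance under parameter value u, r_c the performance requirement, and U(α,û) the region of uncertainty of size α centered at the estimate û, is term by term the Maximin model (in its mathematical programming format)

$$ \hat{\alpha}(q,r_{c}) \;=\; \max_{\alpha \ge 0}\Bigl\{\alpha \;:\; R(q,u) \ge r_{c},\ \forall u \in U(\alpha,\hat{u})\Bigr\} $$

in which α plays the role of the decision variable, u the state variable, and the worst-case analysis is expressed by the "for all" clause.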

This is the good news.

The bad news is that Info-Gap's generic model conducts its analysis in the immediate neighborhood of the nominal value of the parameter of interest; hence the worst-case analysis a la Maximin that Info-Gap conducts is local in nature. In other words, Info-Gap deploys a definition of robustness that makes no attempt to explore the entire region of uncertainty: it focuses on the nominal value of the parameter of interest and its immediate neighborhood.

Since under severe uncertainty the nominal value is a poor indication of the true value of the parameter of interest, and is likely to be substantially wrong, it follows that robustness a la Info-Gap does not actually deal with severe uncertainty: it simply ignores it.

This fundamental flaw can be vividly illustrated by inspection: the results generated by the generic Info-Gap model are invariant to the size of the actual region of uncertainty. If you increase the size of the actual region of uncertainty, say ten-fold, the analysis and the results are not sensitive to this change. And if you discover that the region of uncertainty should in fact be increased 100-fold, or 2034578-fold, this still has no impact whatsoever on the analysis and the results generated by the generic Info-Gap model.
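This invariance is easy to demonstrate with a toy model (my own illustration, not taken from the paper): let the performance be R(q,u) = qu with requirement R(q,u) ≥ r_c, nominal estimate û, and regions of uncertainty U(α,û) = [û-α, û+α]. The robustness computation never consults the actual region of uncertainty, so enlarging that region has no effect whatsoever:

```python
# Toy illustration (my own, not from the paper): info-gap robustness of a
# decision q with performance R(q, u) = q*u, requirement R(q, u) >= r_c,
# and regions of uncertainty U(alpha, u_hat) = [u_hat - alpha, u_hat + alpha].

def info_gap_robustness(q, u_hat, r_c, alpha_step=1e-4, alpha_max=10.0):
    """Largest alpha such that R(q, u) >= r_c for ALL u in U(alpha, u_hat)."""
    alpha, robustness = 0.0, 0.0
    while alpha <= alpha_max:
        worst_case = q * (u_hat - alpha)  # worst u on the interval (for q > 0)
        if worst_case < r_c:
            break
        robustness = alpha
        alpha += alpha_step
    return robustness

# The robustness of q is determined solely by the nominal estimate u_hat
# and the performance requirement r_c ...
print(round(info_gap_robustness(q=10.0, u_hat=1.0, r_c=5.0), 3))  # 0.5

# ... and is utterly insensitive to the size of the actual region of
# uncertainty: no argument of the function refers to it, so declaring that
# region to be [-5, 5], [-50, 50] or [-500000, 500000] changes nothing.
```

Note that the function has no parameter through which the true region of uncertainty could even be supplied: this is the "inspection" referred to above.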

In short, the "breakthrough" reported on in the article is not a breakthrough at all. Rather, the article promotes a methodology that is based on a simple Maximin model whose effective region of uncertainty is concentrated around a nominal value of the parameter of interest. Since under severe uncertainty this nominal value is a "wild guess", we have no choice but assume that the results generated by this methodology are also wild guesses.

Now back to the title of the article.

Given that Info-Gap's generic model is a simple Maximin model and given the local nature of the uncertainty analysis, it would be more appropriate to change "Responsible" to "Irresponsible".

A full paper inspired by this article will be completed soon.

The saga continues ...

Almost a year later, we find the paper entitled


by Jim Hall and Hamish Harvey, in the program of the Eighth International Conference on Hydroinformatics (January 12-16, 2009, Concepción, Chile).

The abstract reads as follows:

Flood risk analysis is subject to uncertainties, often severe, which have the potential to undermine engineering decisions. This is particularly true in strategic planning, which requires appraisal over long periods of time. Traditional economic appraisal techniques largely ignore this uncertainty, preferring to use a precise measure of performance, which affords the possibility of unambiguously ranking options in order of preference. In this paper we describe an experimental application of information-gap theory, or info-gap for short to a flood risk management decision. Info-gap is a quantified non-probabilistic theory of robustness. It provides a means of examining the sensitivity of a decision to uncertainty. Rather than simply presenting a range of possible values of performance, info-gap explores how this range grows as uncertainty increases. This allows considerably greater opportunity for insight into the behaviour of our model of option performance. The information generated may be of use in improving the model, refining the options, or justifying the selection of one option over the others in the absence of an unambiguous rank order. Secondly, we demonstrate the possibility of exploring the value of waiting until improved knowledge becomes available by constructing options that explicitly model this possibility.

Interestingly, but not surprisingly, the full paper follows the usual script. In particular, it makes no mention whatsoever of the fact that the most famous indeed classic non-probabilistic approach to decision-making under severe uncertainty is Wald's Maximin paradigm (circa 1940); and it gives not the slightest hint that Info-Gap's robustness model is in fact a simple Maximin model (see FAQs about info-gap decision theory).

This is really an amazing story!

Question: how long can scholars in the field keep their heads buried in the sand?

Longer than you might think, mate!

Of course, the trouble is that in academia voodoo theories can be propagated and perpetuated via research grants. In practice this involves the supervision of graduate students — particularly PhD students — and Post-Docs. So, it may well be that the current crop of PhD students and Post-Docs working on Info-Gap decision theory will sustain its growth for a while.

For examples of PhDs of this type see

both offered by the School of Civil Engineering and Geoscience, Newcastle University, UK.

Note that the completion of a PhD takes at least three years. So if these two projects are taken up by students, we should expect a stream of Info-Gap publications on these topics for at least the next ... five years (see What's next?).

My view on this is rather cynical.

I believe that the pressure on academics, especially when it comes to procuring research grants, is such that they are driven to portray the methods that they propose/develop as new and revolutionary. For think about it: what is the likelihood of your obtaining a grant if you explain in your grant application clearly, in plain words, that your research will be based on an old, mainstream theory?

In any case, we shall have to wait and see. Only time will tell (see What's next?).

But ... there is also some good news!

It appears that Hall and Harvey (2009) have taken note of my criticism of Info-Gap's robustness model. So much so that they have deemed it necessary to incorporate in their paper the following qualification regarding Info-Gap's robustness model:

An assumption remains that values of u become increasingly unlikely as they diverge from û.

This is a very significant development!

And yet, although this assumption explicitly introduces a new dimension to the thinking underlying Info-Gap's robustness model, it is not nearly strong enough to correct the fundamental flaws in that thinking, hence in the robustness model itself (see FAQ-76).

Worse, this assumption sharply contradicts Ben-Haim's many statements categorically banning any talk of likelihood in the context of Info-Gap's uncertainty and robustness models. For instance (emphasis is mine):

In info-gap set models of uncertainty we concentrate on cluster-thinking rather than on recurrence or likelihood. Given a particular quantum of information, we ask: what is the cloud of possibilities consistent with this information? How does this cloud shrink, expand and shift as our information changes? What is the gap between what is known and what could be known. We have no recurrence information, and we can make no heuristic or lexical judgments of likelihood.

Ben-Haim (2006, p. 18)

More on this can be found in FAQ-76.

What's next?

The latest news from the EPSRC (Engineering and Physical Sciences Research Council, UK) is that a £1,345,580.00 EPSRC Platform Grant entitled

Earth Systems Engineering: Sustainable systems engineering for adapting to global change

was awarded to a team led by Prof. Jim Hall.

According to the Civil Engineering and Geosciences News (26.9.08) from Newcastle University (UK), one of the six themes on the agenda is

  • Uncertainty Analysis
    (lead: Jim Hall) will advance robust decision analysis under severe uncertainty and will develop customised Bayesian methods for model calibration and emulation.

The grant covers the period 1 December 2008 — 30 November 2013, so it looks like the Info-Gap saga will continue for at least 5 more years!

As expected, such grants will keep info-gap decision theory alive for a while. The latest product is another peer-reviewed article by Daniel Hine and Jim W. Hall (2010). See my review report on this gem.


It will be interesting to see how Hall and Ben-Haim reconcile the fundamental difference between their views on embedding a "likelihood" structure in Info-Gap's uncertainty model. Recall (see The saga continues ...) that whereas Ben-Haim categorically prohibits any attribution of likelihood to Info-Gap's uncertainty model, Hall now (January 2009) adds an assumption that embeds a very strong (monotonic) likelihood structure in the model.

Stay tuned!

Good news!

I am extremely pleased that, apparently for the first time, an official Government commissioned report takes notice of my criticism of Info-Gap decision theory.

Sadly, it is not an Australian report!

As they say, you can't be a prophet in your own land!

What a pity! What a waste!

I do hope, though, that AU government agencies that sponsor info-gap projects will follow suit soon. It is long overdue.

In any case, the following paragraph is a quote from page 75 of the DEFRA report:

More recently, Info-Gap approaches that purport to be non-probabilistic in nature developed by Ben-Haim (2006) have been applied to flood risk management by Hall and Harvey (2009). Sniedovich (2007) is critical of such approaches as they adopt a single description of the future and assume alternative futures become increasingly unlikely as they diverge from this initial description. The method therefore assumes that the most likely future system state is known a priori. Given that the system state is subject to severe uncertainty, an approach that relies on this assumption as its basis appears paradoxical, and this is strongly questioned by Sniedovich (2007).

Mervyn Bramley, Ben Gouldby, Anthony Hurford, Jaap-Jeroen Flikweert
Marta Roca Collell, Paul Sayers, Jonathan Simm, Michael Wallis
Delivering Benefits Through Evidence
PAMS (Performance-based Asset Management System)
Phase 2 Outcome Summary Report (PDF File)
Project: SC040018/R1
Environment Agency -- December 2009
Department for Environment Food and Rural Affairs

The diplomatic language of the report cannot hide the obvious conclusions!

Needless to say, Ben-Haim will adamantly disagree with Hall and Harvey's (2009) position regarding the need for a likelihood structure to justify the validity of the local nature of Info-Gap's robustness model.

The point is that whereas Hall and Harvey (2009) reached the (right) conclusion and decided to spell out clearly the logic behind info-gap's local approach to robustness, Ben-Haim insists on hiding this from the public!

What a mess!

Modern Alchemy, Freudian Slips, Quick-Fixes and Suchlike

If you have been taking it for granted that the quest for a magic formula capable of transforming a severe lack of knowledge/information into substantial knowledge was abandoned with the Enlightenment, I have news for you!

Apparently, against all scientific odds, Info-Gap scholars were successful in imputing likelihood to results generated by a non-probabilistic model that is completely devoid of any notion of likelihood!

Recall that Info-Gap decision theory prides itself on being non-probabilistic and likelihood-free. Yet, Info-gap scholars -- the Father of Info-Gap included -- now claim that Info-Gap's robustness model is capable of identifying decisions that are most likely to satisfy a given performance requirement.

Consider for instance the following quote from ACERA Endorsed Core Material (emphasis is mine):

Information-gap (henceforth termed 'info-gap') theory was invented to assist decision-making when there are substantial knowledge gaps and when probabilistic models of uncertainty are unreliable (Ben-Haim 2006). In general terms, info-gap theory seeks decisions that are most likely to achieve a minimally acceptable (satisfactory) outcome in the face of uncertainty, termed robust satisficing. It provides a platform for comprehensive sensitivity analysis relevant to a decision.

Burgman, Wintle, Thompson, Moilanen, Runge, and Ben-Haim (2008, p. 8).
Reconciling uncertain costs and benefits in Bayes nets for invasive species management
ACERA Endorsed Core Material: Final Report, Project 0601 - 0611.
(PDF file, Downloaded on March 21, 2009)

This is a major scientific breakthrough.

For, until now we have been warned repeatedly by Info-Gap scholars that no likelihood must be attributed to results generated by Info-Gap decision models. Indeed, we have been advised that this would be deceptive and even dangerous (emphasis is mine):

However, unlike in a probabilistic analysis, r has no connotation of likelihood. We have no rigorous basis for evaluating how likely failure may be; we simply lack the information, and to make a judgment would be deceptive and could be dangerous. There may definitely be a likelihood of failure associated with any given radial tolerance. However, the available information does not allow one to assess this likelihood with any reasonable accuracy.

Ben-Haim (1994, p. 152)
Convex models of uncertainty: applications and implications
Erkenntnis, 4, 139-156.

This point is also made crystal clear in the second edition of the Info-Gap book (emphasis is mine):

In info-gap set models of uncertainty we concentrate on cluster-thinking rather than on recurrence or likelihood. Given a particular quantum of information, we ask: what is the cloud of possibilities consistent with this information? How does this cloud shrink, expand and shift as our information changes? What is the gap between what is known and what could be known. We have no recurrence information, and we can make no heuristic or lexical judgments of likelihood.

Ben-Haim (2006, p. 18)
Info-Gap Decision Theory: Decisions Under Severe uncertainty
Academic Press.

So the question is: have Info-gap scholars managed to accomplish a major feat in the area of decision-making under severe uncertainty?

Of course, the answer is that this new claim (Burgman et al. 2008) is due not to a breakthrough in decision-making under severe uncertainty but, rather, to an unfortunate, blatant error of judgment.

My view on this episode -- based as it is on numerous discussions with Info-Gap scholars over the past five years -- is that this new claim is simply -- but not surprisingly -- ... a Freudian slip.

The point is that -- see my FAQs about Info-Gap -- without imputing some sort of "likelihood" to Info-Gap's decision model, Info-Gap decision theory is, and cannot escape being, a voodoo decision theory.

So, all that this Freudian slip manages to do is extend the already existing error -- an alternative that some Info-Gap scholars seem to prefer to an admission of a mistake.

One can only wonder, then: how long will it take other Info-Gap scholars, such as Burgman et al. (2008), to reach this unavoidable conclusion?

Only time will tell (March 21, 2009).

The Black Swan


Only time will tell what impact (if any) Nassim Taleb's recent popular and controversial book The Black Swan: The Impact of the Highly Improbable will have on the field of decision-making under severe uncertainty.

I, for one, hope that the issues raised in this book and in its predecessor, Fooled by Randomness: The Hidden Role of Chance in the Markets and in Life, will be instrumental in helping decision-makers to identify voodoo decision theories -- such as Info-Gap decision theory -- that promise robust decisions under severe uncertainty.

I fear though -- in view of my experience of the past 40 years -- that the huge success of The Black Swan will inspire a new wave of voodoo decision theories, purportedly capable of ... "domesticating" black swans and preempting the discovery of ... purple swans!

We shall have to wait and see.

For those who have "been in hiding" I should note that Taleb has become quite a celebrity. According to the Prudent Investor Newsletters (Tuesday, June 3, 2008):

  • Mr. Taleb charges about $60,000 per speaking engagement and does about 30 presentations a year "to bankers, economists, traders, even to Nasa, the US Fire Administration and the Department of Homeland Security," according to Timesonline's Bryan Appleyard.

  • He recently got $4million as advance payment for his next much awaited book.

  • Earned $35-$40 MILLION on a huge Black Swan event -- the biggest stock market crash in modern history, Black Monday, October 19, 1987.

So, if you haven’t heard him in person you can easily find on the WWW numerous videos of his interviews.

Here is a link to a very short (2:45 min) clip, recorded by Taleb himself, apparently at Heathrow Airport, of 10 tips on how to deal with Black Swans, and life in general.

  1. Scepticism is effortful and costly. It is better to be sceptical about matters of large consequences, and be imperfect, foolish and human in the small and the aesthetic.

  2. Go to parties. You can't even start to know what you may find on the envelope of serendipity. If you suffer from agoraphobia, send colleagues.

  3. It's not a good idea to take a forecast from someone wearing a tie. If possible, tease people who take themselves and their knowledge too seriously.

  4. Wear your best for your execution and stand dignified. Your last recourse against randomness is how you act -- if you can't control outcomes, you can control the elegance of your behaviour. You will always have the last word.

  5. Don't disturb complicated systems that have been around for a very long time. We don't understand their logic. Don't pollute the planet. Leave it the way we found it, regardless of scientific 'evidence'.

  6. Learn to fail with pride -- and do so fast and cleanly. Maximise trial and error -- by mastering the error part.

  7. Avoid losers. If you hear someone use the words 'impossible', 'never', 'too difficult' too often, drop him or her from your social network. Never take 'no' for an answer (conversely, take most 'yeses' as 'most probably').

  8. Don't read newspapers for the news (just for the gossip and, of course, profiles of authors). The best filter to know if the news matters is if you hear it in cafes, restaurants ... or (again) parties.

  9. Hard work will get you a professorship or a BMW. You need both work and luck for a Booker, a Nobel or a private jet.

  10. Answer e-mails from junior people before more senior ones. Junior people have further to go and tend to remember who slighted them.

It is interesting to juxtapose Prof. Taleb’s thesis in The Black Swan that severe uncertainty makes (reliable) prediction in the Socio/economic/political spheres impossible, with the polar position taken by his colleague, Prof. Bruce Bueno de Mesquita, who actually specializes in predicting the future.

One thing is for sure: sooner or later info-gap scholars will find a simple, reliable recipe for handling Black Swans!

Stay tuned!

And what do you know?! See Review 17.

It was bound to happen!!

New Nostradamuses

Not only professionals specializing in "decision under uncertainty", but also the proverbial "man in the street", take it for granted that accurately predicting future events is one of the most formidable challenges facing humankind -- especially for persons in authority, persons responsible for the management of business or economic organizations, etc.

A notable exception to this rule is the "New Nostradamus": Prof. Bruce Bueno de Mesquita, a political science professor at New York University and Senior Fellow at the Hoover Institution, who, according to Good Magazine, specializes in predicting future events -- at least in the area of international conflicts.

The claim is that this distinguished political scientist can actually predict the outcome of any international conflict!

To do this Prof. Bueno de Mesquita does not use a crystal ball, but a thoroughly scientific method which, he claims, is based on a branch of applied mathematics called Game Theory.

According to,

" ... Bruce Bueno de Mesquita is a political scientist, professor at New York University, and senior fellow at the Hoover Institution. He specializes in international relations, foreign policy, and nation building. He is also one of the authors of the selectorate theory.

He has founded a company, Mesquita & Roundell, that specializes in making political and foreign-policy forecasts using a computer model based on game theory and rational choice theory. He is also the director of New York University's Alexander Hamilton Center for Political Economy.

He was featured as the primary subject in the documentary on the History Channel in December 2008. The show, titled Next Nostradamus, details how the scientist is using computer algorithms to predict future world events ..."

Here is an interview with Prof. Bueno de Mesquita (with Riz Khan - The art and science of prediction - 09 Jan 08):

And here is a 20-minute lecture on the ... future of Iran (TED, February 2009):

Apparently, all you need to accomplish this is a computer, expert-knowledge on Iran, and game theory!

Some of the predictions attributed to Prof. Bueno de Mesquita are:

  1. The second Palestinian Intifada and the death of the Mideast peace process, two years before this came to pass.

  2. The succession of the Soviet leader Leonid Brezhnev by Yuri Andropov, who at the time was not even considered a contender.

  3. The voting out of office of Daniel Ortega and the Sandinistas in Nicaragua, two years before this happened.

  4. The harsh crackdown on dissidents by China's hardliners four months before the Tiananmen Square incident.

  5. France's hair's-breadth passage of the European Union's Maastricht Treaty.

  6. The exact implementation of the 1998 Good Friday Agreement between Britain and the IRA.

  7. China's reclaiming of Hong Kong and the exact manner the handover would take place, 12 years before it happened.

Impressive, isn't it!

As might be expected, these and similar claims by Prof. Bueno de Mesquita have sparked a vigorous debate, not only in the professional journals but also on the WWW. Interested readers can consult this material to see for themselves whether these claims attest to a major scientific breakthrough or ... voodoo mathematics.

In addition to consulting this material, you may want to have a look at a short video clip by Matt Brawn, which he compiled in response to a short note entitled This man can actually predict the future!.

Of particular interest is, of course, the "success" rate of Prof. Bueno de Mesquita's predictions: over 90% -- yes, over ninety percent!

Here is Trevor Black's common sense reaction to this claim:

I am a little skeptical about anyone who claims to have a 90% success rate. I just don't buy it. Especially when they say that they can explain away a lot of the other 10%.

If you come to me and tell me you have a model that gets it right 60% or 70% of the time, I may listen. Skeptically, but I will listen. 90% and I start to smell something.

All I wish to add here is that Prof. Bueno de Mesquita makes his predictions under conditions of "severe uncertainty", which of course renders them hugely vulnerable to what Prof. Nassim Taleb dubs the Black Swan phenomenon.

Hence, the very proposition that such predictions can be made at all, let alone be reliable, is diametrically opposed to Nassim Taleb's categorical rejection of any such position. For his thesis is that Black Swans are totally outside the purview of mathematical treatment, especially by models that are based on expected utility theory and rational choice theory.

Interestingly, though, this is precisely the stuff that Prof. Bueno de Mesquita's method is made of: expected utility theory and rational choice theory!

Even more interesting is the fact that Nassim Taleb and Bueno de Mesquita are staff members of the same academic institution, namely New York University. So, all that's left to say is: Go figure!

As indicated above, the debate over Bueno de Mesquita's theories is not new. It has been ongoing, in the relevant academic literature, at least since the publication of his book The War Trap (1981).

For an idea of the kind of criticism sparked by his work, take a look at the quotes I provide from articles that are critical of Bueno de Mesquita theories.

Of course, there are other New Nostradamuses around.

According to the Associated Press, the latest (2009, Mar 4, 4:39 AM EST) news from Russia about the future of the USA is that

" ... President Barack Obama will order martial law this year, the U.S. will split into six rump-states before 2011, and Russia and China will become the backbones of a new world order ..."

Apparently this prediction was made by Igor Panarin, Dean of the Russian Foreign Ministry diplomatic academy and a regular on Russia's state-controlled TV channels (see the full AP news report).

Regarding the future of Russia,

"You don't sound too hopeful".
"Hopeful? Please, I am Russian. I live in a land of mad hopes, long queues, lies and humiliations. They say about Russia we never had a happy present, only a cruel past and a quite amazing future ..."
Malcolm Bradbury
To the Hermitage (2000, p. 347)

We should therefore be reminded of J K Galbraith's (1908-2006) poignant observation:

There are two classes of forecasters: those who don't know and those who don't know they don't know.

And in the same vein,

The future is just what we invent in the present to put an order over the past.

Malcolm Bradbury
Doctor Criminale (1992, p. 328)

So, we shall have to wait and see.

And how about this more recent piece by Heath Gilmore and Brian Robins in the Sydney Morning Herald (March 27, 2009):

"... COUPLES wondering if the love will last could find out if theirs is a match made in heaven by subjecting themselves to a mathematical test.

A professor at Oxford University and his team have perfected a model whereby they can calculate whether the relationship will succeed.

In a study of 700 couples, Professor James Murray, a maths expert, predicted the divorce rate with 94 per cent accuracy.

His calculations were based on 15-minute conversations between couples who were asked to sit opposite each other in a room on their own and talk about a contentious issue, such as money, sex or relations with their in-laws.

Professor Murray and his colleagues recorded the conversations and awarded each husband and wife positive or negative points depending on what was said. ..."

Such interviews should perhaps be made mandatory for all couples registering their marriage.

More details on the mathematics of marriage can be found in The Mathematics of Marriage: Dynamic Nonlinear Models by J.M. Gottman, J.D. Murray, C. Swanson, R. Tyson, and K.R. Swanson (MIT Press, Cambridge, MA, 2002.)

On a more positive note, though, here is an online Oracle from Melbourne (Australia: the land of the real Black Swan!).

You may wish to consult this friendly 24/7 facility about important "Yes/No" questions that you no doubt have about the future.


More on this and related topics can be found in the pages of the Worst-Case Analysis / Maximin Campaign, Severe Uncertainty, and the Info-Gap Campaign.

Recent Articles, Working Papers, Notes

Also, see my complete list of articles
  • Sniedovich, M. (2012) Fooled by local robustness, Risk Analysis, Early View.

  • Sniedovich, M. (2012) Black swans, new Nostradamuses, voodoo decision theories and the science of decision-making in the face of severe uncertainty, International Transactions in Operational Research, 19(1-2), 253-281 (Available free of charge)

  • Sniedovich, M. (2011) A classic decision theoretic perspective on worst-case analysis, Applications of Mathematics, 56(5), 499-509.

  • Sniedovich, M. (2011) Dynamic programming: introductory concepts, in Wiley Encyclopedia of Operations Research and Management Science (EORMS), Wiley.

  • Caserta, M., Voss, S., Sniedovich, M. (2011) Applying the corridor method to a blocks relocation problem, OR Spectrum, 33(4), 815-929, 2011.

  • Sniedovich, M. (2011) Dynamic Programming: Foundations and Principles, Second Edition, Taylor & Francis.

  • Sniedovich, M. (2010) A bird's view of Info-Gap decision theory, Journal of Risk Finance, 11(3), 268-283.

  • Sniedovich M. (2009) Modeling of robustness against severe uncertainty, pp. 33-42, Proceedings of the 10th International Symposium on Operational Research, SOR'09, Nova Gorica, Slovenia, September 23-25, 2009.

  • Sniedovich M. (2009) A Critique of Info-Gap Robustness Model. In: Martorell et al. (eds), Safety, Reliability and Risk Analysis: Theory, Methods and Applications, pp. 2071-2079, Taylor and Francis Group, London.
  • Sniedovich M. (2009) A Classical Decision Theoretic Perspective on Worst-Case Analysis, Working Paper No. MS-03-09, Department of Mathematics and Statistics, The University of Melbourne. (PDF File)

  • Caserta, M., Voss, S., Sniedovich, M. (2008) The corridor method - A general solution concept with application to the blocks relocation problem. In: A. Bruzzone, F. Longo, Y. Merkuriev, G. Mirabelli and M.A. Piera (eds.), 11th International Workshop on Harbour, Maritime and Multimodal Logistics Modeling and Simulation, DIPTEM, Genova, 89-94.

  • Sniedovich, M. (2008) FAQs about Info-Gap Decision Theory, Working Paper No. MS-12-08, Department of Mathematics and Statistics, The University of Melbourne. (PDF File)

  • Sniedovich, M. (2008) A Call for the Reassessment of the Use and Promotion of Info-Gap Decision Theory in Australia (PDF File)

  • Sniedovich, M. (2008) Info-Gap decision theory and the small applied world of environmental decision-making, Working Paper No. MS-11-08
    This is a response to comments made by Mark Burgman on my criticism of Info-Gap (PDF file )

  • Sniedovich, M. (2008) A call for the reassessment of Info-Gap decision theory, Decision Point, 24, 10.

  • Sniedovich, M. (2008) From Shakespeare to Wald: modeling worst-case analysis in the face of severe uncertainty, Decision Point, 22, 8-9.

  • Sniedovich, M. (2008) Wald's Maximin model: a treasure in disguise!, Journal of Risk Finance, 9(3), 287-291.

  • Sniedovich, M. (2008) Anatomy of a Misguided Maximin formulation of Info-Gap's Robustness Model (PDF File)
    In this paper I explain, again, the misconceptions that Info-Gap proponents seem to have regarding the relationship between Info-Gap's robustness model and Wald's Maximin model.

  • Sniedovich. M. (2008) The Mighty Maximin! (PDF File)
    This paper is dedicated to the modeling aspects of Maximin and robust optimization.

  • Sniedovich, M. (2007) The art and science of modeling decision-making under severe uncertainty, Decision Making in Manufacturing and Services, 1-2, 111-136. (PDF File)

  • Sniedovich, M. (2007) Crystal-Clear Answers to Two FAQs about Info-Gap (PDF File)
    In this paper I examine the two fundamental flaws in Info-Gap decision theory, and the flawed attempts to shrug off my criticism of Info-Gap decision theory.

  • My reply (PDF File) to Ben-Haim's response to one of my papers. (April 22, 2007)

    This is an exciting development!

    • Ben-Haim's response confirms my assessment of Info-Gap. It is clear that Info-Gap is fundamentally flawed and therefore unsuitable for decision-making under severe uncertainty.

    • Ben-Haim is not familiar with the fundamental concept of a point estimate. He does not realize that a function can be a point estimate of another function.

      So when you read my papers, make sure that you do not misinterpret the notion of a point estimate. The phrase "A is a point estimate of B" simply means that A is an element of the same topological space that B belongs to. Thus, if B is, say, a probability density function and A is a point estimate of B, then A is a probability density function belonging to the same (assumed) set (family) of probability density functions.

      Ben-Haim mistakenly assumes that a point estimate is a point in a Euclidean space and therefore that a point estimate cannot be, say, a function. This is incredible!

  • A formal proof that Info-Gap is Wald's Maximin Principle in disguise. (December 31, 2006)
    This is a very short article entitled Eureka! Info-Gap is Worst Case (maximin) in Disguise! (PDF File)
    It shows that Info-Gap is not a new theory but rather a simple instance of Wald's famous Maximin Principle dating back to 1945, which in turn goes back to von Neumann's work on Maximin problems in the context of Game Theory (1928).

  • A proof that Info-Gap's uncertainty model is fundamentally flawed. (December 31, 2006)
    This is a very short article entitled The Fundamental Flaw in Info-Gap's Uncertainty Model (PDF File).
    It shows that because Info-Gap deploys a single point estimate under severe uncertainty, there is no reason to believe that the solutions it generates are likely to be robust.

  • A math-free explanation of the flaw in Info-Gap. (December 31, 2006)
    This is a very short article entitled The GAP in Info-Gap (PDF File).
    It is a math-free version of the paper above. Read it if you are allergic to math.

  • A long essay entitled What's Wrong with Info-Gap? An Operations Research Perspective (PDF File) (December 31, 2006).
    This is a paper that I presented at the ASOR Recent Advances in Operations Research (PDF File) mini-conference (December 1, 2006, Melbourne, Australia).

Recent Lectures, Seminars, Presentations

If your organization is promoting Info-Gap, I suggest that you invite me for a seminar at your place. I promise to deliver a lively, informative, entertaining and convincing presentation explaining why it is not a good idea to use — let alone promote — Info-Gap as a decision-making tool.

Here is a list of relevant lectures/seminars on this topic that I gave in the last two years.

Disclaimer: This page, its contents and style, are the responsibility of the author (Moshe Sniedovich) and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.
