Models and Decisions Conference 2013


"Models and Decisions" will take place from 10 to 12 April, 2013. There will be an evening lecture by Julian Nida-Rümelin and a dinner reception on the first day of the conference and a joint dinner (Dutch treat) on the second day. Please find the program below and various maps on the page "Practical Info".

All invited plenary talks are assigned 45+30 minutes (talk/Q&A); for contributed talks, please plan on 30+15 minutes (talk/Q&A). Each conference day starts with a plenary talk and continues with parallel sessions, shown below in two columns (rooms V002 and VU107). The registration desk is open on all days and is located in the cafeteria near room V002.


Program

Download the program as a PDF file here.

Wednesday, 10 April
08:45 Registration and Coffee Reception (Cafeteria near Room V002)
09:15 Stephan Hartmann: Welcome
Invited Session 1 – Chair: Stephan Hartmann (Room V002)
09:30 Luc Bovens: Evaluating Risky Prospects: The Distribution View
Watch the video abstract on "MCMP at First Sight" here!
10:45 Coffee Break
Contributed Session 1 – Chair: Roland Poellinger (Room V002) | Contributed Session 2 – Chair: Martin Rechenauer (Room VU107)
11:00 Jason Konek: Accuracy Without Luck (Graduate Paper Award) | Raphael van Riel: Truth According to a Model
11:45 Bengt Hansson: General Probabilistic Updating | Wolfgang Pietsch: Big Data: Is More Different?
12:30 Lunch Break
Contributed Session 3 – Chair: Ulrike Hahn (Room V002) | Contributed Session 4 – Chair: Bengt Hansson (Room VU107)
14:00 Wes Anderson: Population Viability Analysis with Explicit Causal Models? | Franz Dietrich: Reasons for Choice and the Problem of Individuating the Alternatives
14:45 Kevin Korb: A Bayesian Approach to Evaluating Computer Simulations | Dominik Klein and Eric Pacuit: Expressive Voting: Modeling a Voter's Decision to Vote
15:30 Roland Poellinger: Unboxing the Concepts in Newcomb’s Paradox: Causation, Prediction, Decision in Causal Knowledge Patterns | Peter Stone and Koji Kagotani: Optimal Committee Performance: Size versus Diversity
16:15 Coffee Break
Invited Session 2 – Chair: Roland Poellinger (Room V002)
16:45 Michael Strevens: Idealization, Prediction, Difference-Making
Watch the video abstract on "MCMP at First Sight" here!
18:00 Break
Evening Lecture at Historisches Kolleg (Kaulbachstr. 15) – Introduction: Kärin Nickelsen
19:00 Evening Lecture by Julian Nida-Rümelin: Cooperation and (Structural) Rationality
20:00 Dinner Reception
 
Thursday, 11 April
09:00 Coffee Reception
Invited Session 3 – Chair: Mark Colyvan (Room V002)
09:30 Ulrike Hahn: Modelling Human Decision Making: Some Puzzles
Watch the video abstract on "MCMP at First Sight" here!
10:45 Coffee Break
Contributed Session 5 – Chair: Dominik Klein (Room V002) | Contributed Session 6 – Chair: Olivier Roy (Room VU107)
11:00 Mark Colyvan: Value of Information Models and Data Collection in Conservation Biology | Brad Armendt: Imprecise Belief and What's at Stake
11:45 Koray Karaca: Modeling Data-Acquisition in Experimentation: The Case of the ATLAS Experiment | Alistair Isaac: Uncertainty about Uncertainties: A Plea for Integrated Subjectivism
12:30  Lunch Break
Contributed Session 7 – Chair: Itzhak Gilboa (Room V002) | Contributed Session 8 – Chair: Brad Armendt (Room VU107)
14:00 Michael Kuhn: Models in Engineering: Design Tools | Jeff Barrett: Description and the Problem of Priors
14:45 Szu-Ting Chen: Representation as a Process of Model-Building | Brian Hill: Confidence in Beliefs and Decision Making
15:30 Mariam Thalos: Expectational v. Instrumental Reasoning: Why Statistics Matter | Francesca Toni, Robert Craven and Xiuyi Fan: Transparent Rational Decisions by Argumentation
16:15 Coffee Break
Invited Session 4 – Chair: Olivier Roy (Room V002)
16:45 Itzhak Gilboa: Rationality and the Bayesian Paradigm
Watch the video abstract on "MCMP at First Sight" here!
18:00 Break
19:00  Joint Dinner (Dutch Treat) at Augustiner Bräustuben (Landsberger Straße 19)
 
Friday, 12 April
09:00 Coffee Reception
Invited Session 5 – Chair: Jan Sprenger (Room V002)
09:30  Claudia Tebaldi: Making Sense of Multiple Climate Models' Projections
Watch the video abstract on "MCMP at First Sight" here!
10:45  Coffee Break
Contributed Session 9 – Chair: Claudia Tebaldi (Room V002) | Contributed Session 10 – Chair: Matteo Colombo (Room VU107)
11:00 Erich Kummerfeld and David Danks: Model Selection, Decision Making, and Normative Pluralism: Theory and Climate Science Application | Ekaterina Svetlova: Financial Models as Decision-Making Tools
11:45 Mathias Frisch: Modeling Climate Policies: A Critical Look at Integrated Assessment Models | Carlo Martini: Modeling Expertise in Economics
12:30  Lunch Break
Contributed Session 11 – Chair: Michael Strevens (Room V002) | Contributed Session 12 – Chair: Carlo Martini (Room VU107)
14:00 Susan G. Sterrett: Models of Interventions | Frederik Herzberg: Aggregation Prior to Preference Formation: How to Rationally Aggregate Probabilities
14:45 David Danks, Stephen Fancsali and Richard Scheines: Constructing Variables for Causal vs. Predictive Inference | Jennifer Jhun: Modeling Across Scales and Microeconomics
15:30 Coffee Break
Contributed Session 13 – Chair: Frederik Herzberg (Room V002) | Contributed Session 14 – Chair: David Danks (Room VU107)
16:00 Roger Stanev: Data and Safety Monitoring Board and the Ratio Decidendi of the Trial | Matteo Colombo: For a Few Neurons More… Tractability and Neurally-Informed Economic Modelling
16:45 Jan Sprenger: Could Popper have been a Bayesian? On the Falsification of Statistical Hypotheses. | Sam Sanders: On Models, Continuity and Infinitesimals

Abstracts

Wes Anderson: Population Viability Analysis with Explicit Causal Models?

One problem in conservation biology is calculating the probability of quasi-extinction time, given a set of interventions. This has been one of the tasks of population viability analysis (PVA) using individual-level, demography-based models. However, these models are not necessarily causal models. As such, I ask whether we can arrive at both correct pre-manipulation and correct post-manipulation causal models for PVA by using a basic causal discovery algorithm and a manipulation operator, given the kinds of systems that are of concern in conservation biology. To answer the question, I lay out the necessary background of the causal modeling literature I draw upon and set up two toy causal structures that could be of concern in conservation biology. Finally, I analyze how well the basic causal discovery algorithm and the manipulation operator would fare in getting the qualitative causal structure correct, if the target system is one of the toy causal structures of concern.

Brad Armendt: Imprecise Belief and What's at Stake

Belief guides deliberation and decision. Do the prospects or stakes confronting a decision-maker contribute to her belief? Do we move from one context to another, remaining in the same doxastic state concerning p, yet holding a stronger belief that p in one context than in the other? For that to be so, a doxastic state must have a certain sort of context-sensitive complexity. So the question is about the nature of belief states, as we understand them, or find it fruitful to model them. I explore the idea, and how it relates to work on imprecise probabilities and second-order confidence.

Jeff Barrett: Description and the Problem of Priors

Belief-revision models typically have little to say concerning how probabilities initially become associated with meaningful hypotheses. There are three aspects to this problem of priors: (1) determine a descriptive language appropriate for framing hypotheses, (2) arrive at whatever descriptive hypotheses we take as serious contenders, and (3) initially assign probabilities to these hypotheses. Here we consider a sender-predictor signaling game that illustrates how a very simple descriptive language and correspondingly simple inductive expectations might coevolve. On this model, well-tuned expectations may be the fortunate result of evolving a language that is suitable for describing the subject of inquiry.
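As an illustration of the coevolution mechanism, here is a bare-bones Lewis-style signaling game with Roth-Erev reinforcement; the sender-predictor game of the talk is richer, so this sketch is only a minimal stand-in, and all numerical choices are assumptions.

```python
# Illustrative only: reinforcement learning in a bare-bones 2-state Lewis signaling game.
# The talk's model is a sender-predictor game with coevolving language and expectations;
# this sketch shows only the simplest mechanism (Roth-Erev urn reinforcement) by which
# sender and receiver conventions can coevolve.
import random

STATES, SIGNALS, ACTS = 2, 2, 2
sender = [[1.0] * SIGNALS for _ in range(STATES)]   # urn weights: state -> signal
receiver = [[1.0] * ACTS for _ in range(SIGNALS)]   # urn weights: signal -> act

def draw(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

random.seed(0)
for _ in range(10_000):
    state = random.randrange(STATES)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                      # success: reinforce both choices
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0

# After learning, each state usually maps to a distinct signal and back to the matching
# act: a minimal "descriptive language" plus matching expectations.
print(sender)
print(receiver)
```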

Luc Bovens: Evaluating Risky Prospects: The Distribution View

Policy analysts need to rank policies with risky outcomes. Such policies can be thought of as prospects. A prospect is a matrix of utilities. On the rows we list the people who are affected by the policy. In the columns we list alternative states of the world and specify a probability distribution over the states. I provide a taxonomy of various ex ante and ex post distributional concerns that enter into such policy evaluations and construct a general method that reflects these concerns, integrates the ex ante and ex post calculus, and generates orderings over policies. I show that Parfit’s Priority View is a special case of the Distribution View.

Watch the video abstract on "MCMP at First Sight" here!
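For readers unfamiliar with the ex ante / ex post contrast, here is a minimal Python sketch with made-up numbers; the utilitarian aggregation it uses is an illustrative assumption, not Bovens's Distribution View.

```python
# Illustrative only: a prospect as a utility matrix (rows = people, columns = states),
# with a probability distribution over the states. The two evaluations below are generic
# ex ante / ex post aggregations, not the Distribution View itself.
import numpy as np

utilities = np.array([[10.0, 0.0],   # person 1's utility in state 1, state 2
                      [ 0.0, 10.0]]) # person 2's utility in state 1, state 2
probs = np.array([0.5, 0.5])         # probability of each state

# Ex ante: compute each person's expected utility first, then aggregate (here: sum).
ex_ante = utilities @ probs          # expected utility per person
ex_ante_value = ex_ante.sum()

# Ex post: aggregate utilities within each state first (here: sum), then take the expectation.
per_state_total = utilities.sum(axis=0)
ex_post_value = per_state_total @ probs

print("ex ante per person:", ex_ante)        # [5. 5.]
print("ex ante aggregate:", ex_ante_value)   # 10.0
print("ex post aggregate:", ex_post_value)   # 10.0
# With an additive (utilitarian) aggregation the two coincide; a concave, priority-weighted
# aggregation would pull them apart, which is where distributional concerns enter.
```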

Szu-Ting Chen: Representation as a Process of Model-Building

What does it mean to say that a theory represents the targeted phenomenon that it aims to explain? Our interpretation of “representation” is closely related to the methodological position that we would adopt in answering the question of realism in science. As is pointed out by Nancy Cartwright (1999), the usual philosophical topic of realism in science is about how accurately the sciences can represent the world; but she shifts the focus of the question to a concern about the range of science—i.e., how much of the world the sciences can represent. This shift in the methodological concern is by no means trivial; it indicates that there is a change of content in the concept of representation from a static idea to a dynamic one.

What then does it mean to say that the concept of representation is a static idea? According to the traditional syntactic, or orthodox, approach of explaining scientific theorization, a theory is conceived as a set of sentences—or, more precisely, a set of hypotheses—which are expressed in terms of first-order predicate logic and constitute a network of hypotheses. In this sense a theory is a logical structure that includes the most abstract hypotheses—the so-called axioms, which are expressed solely in theoretical terms—along with those hypotheses that are the logical deductive consequences of the axioms and are expressed in both theoretical and observational terms. Within this structure, there is also a set of correspondence rules that help make connections, through various stages, between the theoretical terms and the so-called topsoil of experience; and the anchoring points of these connections are the geneses of the meaning of the entire theoretical structure. The idea of “a theory representing what we see of the world” that is captured by this description is a static idea, because it concerns how reliably, at a particular point in the development of a theory, the formal structure of a class of sentences—i.e., the formal structure of a theory—as a whole can stand for the targeted phenomenon in the world.

What then is a dynamic idea of representation? According to the semantic account of scientific theorization, a theory is still regarded as an object containing a class of hypotheses that together account for the targeted phenomenon of the world; however, these hypotheses are not conceived, as in the syntactic approach, as free-standing propositions located in the logical structure of a theory. Instead, each of these hypotheses is regarded as being derived from a specific concrete environment indicated by the theory. If we regard each specific concrete environment as a model, then each hypothesis is said to be derived from this model and to be true of it. From this perspective, a theory can thus be regarded as comprising a class of component models, each of which is used to represent the corresponding part of the targeted phenomenon. The idea of representation manifested in this description of a theory is a dynamic idea, because its focus is no longer a matter of investigating whether a theory as a whole at a particular time reliably represents the targeted phenomenon; rather, its focus is a matter of examining the long-range development of theorizers’ practice of using a class of models to stand for a class of corresponding parts of the phenomenon; that is, its focus is on examining how much of the world the theory can represent. 

As a consequence of this shift from a static to a dynamic mode of thinking, it seems that model-building constitutes the main content of the new concept of representation; but the immediate question is, How should we characterize model-building? Are there any competing philosophical accounts of the nature of model-building? The question is important, because different answers may result in different interpretations of the new concept of representation. By comparing two differing contemporary accounts of the nature of economic models—one proposed by Nancy Cartwright, and the other by Robert Sugden—and presenting a case study of economic theorizing in international trade theory, this paper argues that, by combining the most characteristic features of Cartwright’s and Sugden’s ideas about economic models, representation can be regarded as a process of economists’ repeatedly using “realistic representation of the isolated unrealistic world” at each step of their theorizing to build up a class of “unrealistic constructed credible worlds.”

Matteo Colombo: For a Few Neurons More… Tractability and Neurally-Informed Economic Modelling

Are economists provisionally justified in resisting neuroeconomists’ advice to build neurally-informed models of choice behavior? Since the rise of neuroeconomics, considerable attention has been devoted to this question. In spite of much debate, however, there continues to be significant confusion about the scope and nature of actual modelling practice in neuroeconomics. One of the most recent arguments on whether neurobiological findings should inform models of choice behavior displays some such confusions (Fumagalli 2011). Based on considerations about modelling practice, this novel argument purports to establish that neurally-informed models of choice bear unacceptable tractability costs, and hence that economists should ignore neurobiological information in modelling choice behavior. By using Fumagalli’s (2011) argument as a foil, this paper aims to dispel a number of common confusions that still surround debates about neuroeconomic modelling. Its main contributions are two: it clarifies the relationship between the tractability of a model, its descriptive accuracy, and its number of variables; and it explains what it can take to neurally inform a model of choice behavior.

Mark Colyvan: Value of Information Models and Data Collection in Conservation Biology

I will look at recent uses of value of information studies in conservation biology. In the past, it has been mostly assumed that more and better quality data will lead to better conservation management decisions. Indeed, this assumption lies behind and motivates a great deal of work in conservation biology. Of course, more data can lead to better decisions in some cases but decision-theoretic models of the value of information show that in many cases the cost of the data is too high and thus not worth the effort of collecting. While such value of information studies are well known in economics and decision theory circles, their applications in conservation biology are relatively new and rather controversial. I will discuss some reasons to be wary of, at least, wholesale acceptance of such studies. Apart from anything else, value of information models treat conservation biology as a servant to conservation management, where all that matters is the relevant conservation management decision. In short, conservation biology loses some of its scientific independence and the fuzzy boundary between science and policy becomes even less clear.
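As background, the following sketch computes a textbook expected value of perfect information (EVPI) with hypothetical payoffs; it is not a model from the talk, only an illustration of the decision-theoretic quantity at issue.

```python
# Illustrative only: a textbook expected value of perfect information (EVPI) calculation
# with made-up numbers; not a value-of-information model from the talk itself.
import numpy as np

# Hypothetical payoffs: rows = management actions, columns = states of the world.
payoffs = np.array([[8.0, 2.0],   # action A (e.g. intensive intervention)
                    [5.0, 5.0]])  # action B (e.g. status quo management)
p = np.array([0.4, 0.6])          # prior probabilities of the states

# Without further data: pick the action with the highest prior expected payoff.
ev_no_info = (payoffs @ p).max()

# With perfect information: in each state we would pick the best action for that state.
ev_perfect_info = (payoffs.max(axis=0) * p).sum()

evpi = ev_perfect_info - ev_no_info
print(ev_no_info, ev_perfect_info, round(evpi, 2))   # 5.0 6.2 1.2
# If collecting the data costs more than the EVPI, the data are not worth collecting --
# the point the abstract presses for conservation-biology applications.
```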

David Danks, Stephen Fancsali and Richard Scheines: Constructing Variables for Causal vs. Predictive Inference

Scientific models are relative to a variable set, where the correct variables often must be constructed from measurements of other “raw” variables. Variable construction procedures typically focus on predictive inference, but scientific models are also used for causal inference. We provide a framework for measuring the “quality” of possible variable constructions for the purposes of causal inference. We show that variable constructions for predictive and causal inference diverge—the best variables (and so, the best models) for one goal are different from the best for another—due to a deep trade-off between maximizing estimated causal strength and minimizing causal uncertainty.

Franz Dietrich: Reasons for Choice and the Problem of Individuating the Alternatives

In orthodox rational choice theory, behaviour (under certainty) is rationalized by means of a preference relation, without invoking the motivating reasons, that is, without addressing why the agent makes his choices and holds his preferences. Many counterexamples and experiments indicate that real people's behaviour often isn't rationalizable by a preference relation. Proponents of the orthodox theory frequently respond by changing not the theory, but the description of the choice alternatives, through building additional information into the alternatives, for instance contextual information. But this move is ad hoc and undermines empirical falsifiability. We show how our reason-based model can rationalize many 'choice paradoxes', and how it handles the problem of individuation of alternatives. In our model, alternatives can be specified by an arbitrarily rich set of properties, and we let observed behaviour reveal to us which of these properties are motivationally salient.

Mathias Frisch: Modeling Climate Policies: A Critical Look at Integrated Assessment Models

How do we bridge the gap between climate models and policy decisions? One influential answer is to use cost-benefit analysis and so-called optimization integrated assessment (IA) models to determine what policy maximizes intergenerational welfare. I argue that IA models are subject to deep uncertainties, which undermine the legitimacy of cost-benefit approaches to climate policy. A better approach is to use IA models as toy models in exploring a wide range of plausible climate change scenarios with the aim of finding a climate policy that is robust across a large class of scenarios, including those that predict catastrophic damages under business-as-usual.

Itzhak Gilboa: Rationality and the Bayesian Paradigm

It is claimed that rationality does not imply Bayesianism. We first define what is meant by the two terms, so that the statement is not tautologically false. Two notions of rationality are discussed and related to two main approaches to statistical inference. This is followed by a brief survey of the arguments against the definition of rationality by Savage's axioms, as well as some alternative approaches to decision making.

Watch the video abstract on "MCMP at First Sight" here!

Ulrike Hahn: Modelling Human Decision Making: Some Puzzles

A vast literature on human decision-making has highlighted its frailties and deviations from expected utility, leading to a plethora of alternative, descriptive models. However, more recent work has found maximization of expected value to provide a good descriptive account of decision-making in perceptual or perceptual motor domains. Trying to reconcile these two sets of results raises a number of interesting formal considerations.

Watch the video abstract on "MCMP at First Sight" here!

Bengt Hansson: General Probabilistic Updating

Orthodoxy is that probabilities are updated by setting the probability of a certain set to one and adjusting probabilities outside this set in proportion to their previous values (“conditionalisation”). In realistic situations, such as decision making and modelling in the social sciences, it can be shown that this may lead to distorted assignments.

An updating is defined as a principled redistribution of the probability mass resulting from new information. It can be shown by examples that many natural updatings do not conform to orthodoxy and an analysis is given of the reasons why.
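For illustration, the sketch below contrasts standard conditionalisation with Jeffrey conditionalisation, one familiar example of a more general redistribution of probability mass; it is not claimed to be the updating rule defended in the talk.

```python
# Illustrative only: standard conditionalisation vs. Jeffrey conditionalisation on a
# finite outcome space. Jeffrey's rule stands in for "a principled redistribution of
# probability mass"; it is not claimed to be the updating proposed in the talk.

def conditionalise(p, event):
    """Set P(event) = 1; renormalise inside the event, zero outside."""
    mass = sum(p[w] for w in event)
    return {w: (p[w] / mass if w in event else 0.0) for w in p}

def jeffrey_update(p, partition_weights):
    """Give each cell of a partition its new total weight, preserving the old
    proportions within each cell (Jeffrey conditionalisation)."""
    new_p = {}
    for cell, weight in partition_weights:
        cell_mass = sum(p[w] for w in cell)
        for w in cell:
            new_p[w] = weight * p[w] / cell_mass
    return new_p

prior = {"w1": 0.2, "w2": 0.3, "w3": 0.5}

print(conditionalise(prior, {"w1", "w2"}))
# {'w1': 0.4, 'w2': 0.6, 'w3': 0.0}

print(jeffrey_update(prior, [(("w1", "w2"), 0.8), (("w3",), 0.2)]))
# approximately {'w1': 0.32, 'w2': 0.48, 'w3': 0.2}
```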

Frederik Herzberg: Aggregation Prior to Preference Formation: How to Rationally Aggregate Probabilities

The problem of how to rationally aggregate probability measures occurs e.g. when (i) a group of agents with individual probabilistic beliefs wants to rationalise a collective decision based on an 'aggregate belief system' or (ii) an individual whose belief system is compatible with several (possibly infinitely many) probability measures wishes to evaluate her options on the basis of a single aggregate prior. Simply taking the probability measure induced by aggregated expected-utility preferences is a questionable solution, as we shall see. We propose an alternative based on a generalisation of McConway's theory of probabilistic opinion pooling.
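As background, the sketch below shows linear opinion pooling, the family of rules McConway's classical theory characterises; the weights are an illustrative assumption, and the talk's generalisation is not reproduced here.

```python
# Illustrative only: linear opinion pooling over a finite set of outcomes. McConway's
# classic result characterises pools of this linear form; the talk proposes a
# generalisation, which this sketch does not attempt to reproduce.
import numpy as np

# Three agents' probability assignments over the same three outcomes (rows = agents).
individual = np.array([[0.7, 0.2, 0.1],
                       [0.5, 0.3, 0.2],
                       [0.2, 0.5, 0.3]])
weights = np.array([0.5, 0.3, 0.2])   # non-negative weights summing to one (an assumption)

pooled = weights @ individual          # convex combination of the individual measures
print(pooled)                          # [0.54 0.29 0.17] -- again a probability measure
```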

Brian Hill: Confidence in Beliefs and Decision Making

The standard representation of beliefs in decision theory and much of formal epistemology, by probability measures, is incapable of representing an agent's confidence in his beliefs. However, as shall be argued in this talk, the agent's confidence in his beliefs plays, and should play, a central role in many of the most difficult decisions which we find ourselves faced with, including some that rely on (at times controversial) scientific expertise. The aim of this talk is to formulate a representation of agents' doxastic states and an (axiomatically grounded) theory of decision which recognises and incorporates confidence in belief. Time permitting, consequences for decision making in the face of radical uncertainty, and in particular the debate surrounding the Precautionary Principle, will be examined.

Alistair Isaac: Uncertainty about Uncertainties: A Plea for Integrated Subjectivism

A question at the intersection of scientific modeling and public choice is how to model uncertainty about uncertainties. Low-level uncertainties (e.g. variance in the data) are typically taken to be qualitatively different from high-level uncertainties (e.g. questions about model construction). I argue that in the context of modeling for public policy, high-level uncertainties and other subjective properties of the policymaker (e.g. risk aversion, utilities) should be explicitly represented in the model, transforming the scientific model into a decision-theoretic one. Precedent for this strategy can be found in the literature on monetary policy, but I argue it could be effectively employed in other contexts, e.g. climate models.

Jennifer Jhun: Modeling Across Scales and Microeconomics

It's sometimes unclear what the role of game-theoretic equilibria is, whether explanatory or predictive. I propose that we exploit resources from the physical and material sciences, in particular appealing to Wilson's (forthcoming) and Batterman's (2012) work on multi-scale modeling. I'll argue that there is a useful analogy with those sciences that yields interesting ways of thinking about solution concepts, in particular the Nash equilibrium concept that predominates in the economics literature.

Koray Karaca: Modeling Data-Acquisition in Experimentation: The Case of the ATLAS Experiment

According to the “hierarchy of models” (HoM) account of scientific experimentation, originally developed by Patrick Suppes and further detailed by Deborah Mayo and Todd Harris, different stages of experimentation are organized by models of different types that interact with each other through a hierarchical structure ranging from low-level data models to high-level theoretical models. I point out that the HoM account, as it stands, is not able to provide a modeling framework for the description of the process of data-acquisition. As a result, the HoM account treats data-acquisition as a “black-box” process that produces certain outputs in the form of data sets given appropriate inputs, thereby leaving out all the operational details of the process of data-acquisition that are crucial to understanding how experimentation is actually conducted in the laboratory. I argue that this shortcoming can be remedied by supplementing the HoM of experimentation with what I shall call “models of data-acquisition”. I illustrate the need for data-acquisition models with a case study focused on the data-acquisition process of the ATLAS experiment, one of the Large Hadron Collider (LHC) experiments being conducted at CERN. I show that in the case of the ATLAS experiment various types of diagram models adopted from the systems and software engineering literature function as models of data-acquisition, in that they diagrammatically represent the various procedures through which data are selected during the course of experimentation. Moreover, I point out that, by virtue of the diagrammatic representation they deliver, these diagram models greatly facilitate communication among the various research groups involved in the design project of the data-acquisition system of the ATLAS experiment. I argue that the explicitness of diagrammatic representation, and thus its being more readily perceivable, together with its locality feature, i.e., its ability to group related pieces of information, make diagram models better tools of communication than texts based on sentential representations in the design process of the data-acquisition system of the ATLAS experiment.

Dominik Klein and Eric Pacuit: Expressive Voting: Modeling a Voter's Decision to Vote

In a recent paper, Aragones, Gilboa and Weiss have presented a model for elections in which voters' motivation to vote is to be heard in their political opinions, rather than to influence the outcome. They develop a formal model for evaluating different voting systems. We critically examine their model, in particular the semantics for expressing one's opinion. We offer a different model and show that a similar analysis holds while avoiding some weak points of their model.

Jason Konek: Accuracy Without Luck

When a microbiologist designs and performs an experiment aimed at adjudicating between competing hypotheses about, for example, a particular virus' infection mechanism, it is imperative that she somehow take her prior information into account. It would be epistemically and practically disastrous to do otherwise. But taking account of multifarious and complex prior information, of the sort microbiologists typically possess, seems to require first specifying causal models (causal Bayes nets, perhaps) that characterize the competing hypotheses, and then adopting a 'prior' probability distribution over these models that somehow captures or reflects the relevant prior information. Frequentists doubt, however, that there is any well-motivated, sufficiently 'objective' method for constructing such 'priors'. This is the problem of the priors. Objective Bayesians aim to supply this method. Edwin Jaynes, for example, prescribes adopting the maximum entropy (MaxEnt) prior, for the "positive reason that it is... maximally noncommittal with regard to missing information" (Jaynes 1957, 623). This paper motivates and details an alternative method, a new kind of objective Bayesianism. It does so by first providing reason to think that MaxEnt is inadequate. Then it outlines a novel, anti-luck rationale for adopting an alternative prior, the maximally sensitive (MaxSen) prior. In doing so, it highlights a new, particularly promising route for resolving the problem of the priors. It also elucidates one important virtue of model-based reasoning: when done appropriately, such reasoning minimizes our need for *epistemic luck* in arriving at an accurate picture of the world.
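As background on the MaxEnt recipe the abstract criticises, the sketch below numerically computes a maximum-entropy distribution on a small finite space under a single, made-up expectation constraint; it does not attempt to implement the MaxSen proposal.

```python
# Illustrative only: a Jaynes-style maximum-entropy prior on a finite space, subject to a
# single (made-up) expectation constraint. Solved numerically rather than in closed form;
# this sketches the MaxEnt recipe discussed above, not the MaxSen alternative.
import numpy as np
from scipy.optimize import minimize

values = np.array([1.0, 2.0, 3.0, 4.0])   # outcomes of some quantity
target_mean = 3.0                          # assumed constraint: E[value] = 3

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))           # minimise negative Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},          # probabilities sum to one
    {"type": "eq", "fun": lambda p: p @ values - target_mean} # expectation constraint
]
result = minimize(neg_entropy, x0=np.full(4, 0.25),
                  bounds=[(0.0, 1.0)] * 4, constraints=constraints)
print(np.round(result.x, 3))   # MaxEnt distribution with mean 3: probabilities increase with the value
```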

Kevin Korb: A Bayesian Approach to Evaluating Computer Simulations

The growing use of computer simulation raises the stakes for its epistemology. Why should we trust their predictions? I shall present a Bayesian point of view.

A Bayesian approach to model validation segregates prior probabilities from empirical support, helping us to understand the relation between calibration and validation and also to make sense of the role of expert opinion. Bayesian reasoning can help arbitrate the dispute between those who want realism in their models and those who want simplicity (KIDS v KISS). Bayesian confirmation theory will be applied to validating an epidemiological and a causal simulation.
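As a reminder of the bookkeeping behind such an approach, here is a minimal posterior-odds calculation with made-up likelihoods; the confirmation measures actually applied in the talk may differ.

```python
# Illustrative only: posterior odds = prior odds x Bayes factor, with made-up numbers.
# This is generic Bayesian bookkeeping, not the specific confirmation-theoretic analysis
# applied to the epidemiological and causal simulations in the talk.

prior_model = 0.3            # prior probability that the simulation model is adequate
p_data_given_model = 0.10    # assumed likelihood of the observed data if the model is adequate
p_data_given_not = 0.02      # assumed likelihood of the data otherwise

prior_odds = prior_model / (1 - prior_model)
bayes_factor = p_data_given_model / p_data_given_not
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)

print(round(bayes_factor, 2), round(posterior, 3))   # 5.0 0.682
```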

Michael Kuhn: Models in Engineering: Design Tools

Engineers make use of models as design tools in developing new concepts and in planning their realization. Engineering models in this sense help to sharpen initially vague ideas and at the same time allow analyzing them. The target system, i.e. the artifact to be developed, is not already given in these cases; it evolves in parallel with its models. The often-mentioned relation of simplification between model and target system therefore has to be questioned for models in engineering. This reflective use of models is not found in the philosophy of science literature.

Erich Kummerfeld and David Danks: Model Selection, Decision Making, and Normative Pluralism: Theory and Climate Science Application

At least two aspects of climate science pose problems for traditional approaches to theory choice. First, climate scientists utilize a plurality of non-competing climate models to investigate their various questions, rather than searching for a unique best climate model. Second, climate scientists rely heavily on heuristics in their development and analysis of their models. Our approach treats theory choice as a decision problem rather than an inference problem. We develop a decision-theoretic framework for model selection that provides normative justification for both aspects of climate science. The generality of the approach suggests a much broader array of applications.

Carlo Martini: Modeling Expertise in Economics

One of the most difficult tasks in science is to model phenomena that are subject to constant changes and adaptations to external factors, or whose underlying causal mechanisms are still partly or completely obscure. When that is the task, subjective expert judgment plays an ineliminable role in science. But expert judgment is highly fallible, for a number of reasons. The idea of modeling the contribution of expertise in science is to provide a number of methodological principles that can help make subjective judgment reproducible, subject to empirical control, and a transparent and reliable tool for decision making and policy applications.

Julian Nida-Rümelin: Cooperation and (Structural) Rationality

Cooperation remains a challenge for the theory of rationality: rational agents should not cooperate in one-shot prisoner's dilemmas. But they do, it seems. There is a reason why mainstream rational choice theory is at odds with cooperative agency: rational action is thought to be consequentialist, but this is wrong. If we give up consequentialism and adopt a structural account of rationality, the problem resolves, as will be shown. In the second part of my lecture I shall show that structural rationality can be combined with Bayesianism, contrary to what one may expect. And finally I shall discuss some philosophical implications of structural rationality.

Wolfgang Pietsch: Big Data: Is More Different?

Recently, computer scientists working with the enormous data sets of the information age have claimed that 'big data' fundamentally changes the way science is conducted. Some even speak of a fourth paradigm in scientific research, besides theory, experiment, and simulation. My paper attempts to assess these claims from a philosophy of science perspective. I argue that big data allows for a novel type of model that I label horizontal models. These lack a number of features that have been considered typical of scientific modeling, namely: a pronounced hierarchical structure; explanatory power; certain idealizations or simplifications; a perspectival character.

Roland Poellinger: Unboxing the Concepts in Newcomb’s Paradox: Causation, Prediction, Decision in Causal Knowledge Patterns

In Nozick’s rendition of the decision situation given in Newcomb’s Paradox, dominance and the principle of maximum expected utility recommend different strategies. While evidential decision theory (EDT) seems to be split over which principle to apply and how to interpret the principles in the first place, causal decision theory (CDT) seems to go for the solution recommended by dominance (“two-boxing”). As a reply to the CDT proposal by Wolfgang Spohn, who opts for “one-boxing” by employing reflexive decision graphs, I will draw on the framework of causal knowledge patterns, i.e., Bayes net causal models (cf. e.g. Pearl 2000), augmented by non-causal knowledge (epistemic contours), to finally arrive at “one-boxing” – more intuitively and closer to what is actually in Nozick’s story. This proposal allows the careful re-examination of all relevant concepts in the original story and might cast new light on the following questions:

  • How may causality in general be understood to allow causal inference from hybrid patterns encoding subjective knowledge?
  • How can the notion of prediction be analyzed – philosophically and formally?
  • And what’s the decision-maker’s conceptualization of the situation he will act upon?
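As background, the sketch below runs the standard expected-utility calculations for Newcomb's problem under an assumed 99% reliable predictor, showing why evidential and orthodox causal reasoning come apart; it does not implement the causal-knowledge-pattern analysis proposed here.

```python
# Illustrative only: the standard Newcomb payoff table with a predictor assumed to be
# 99% accurate. The evidential calculation conditions on the act; the orthodox causal
# calculation treats the prediction as probabilistically independent of the act.
# None of this reproduces the causal-knowledge-pattern framework of the talk.

ACCURACY = 0.99            # assumed reliability of the predictor
P_PREDICT_ONE = 0.5        # assumed marginal probability that one-boxing was predicted

payoff = {                 # (act, prediction) -> payoff in dollars
    ("one", "one"): 1_000_000, ("one", "two"): 0,
    ("two", "one"): 1_001_000, ("two", "two"): 1_000,
}

def edt_value(act):
    # P(prediction | act) reflects the predictor's accuracy.
    other = "one" if act == "two" else "two"
    return ACCURACY * payoff[(act, act)] + (1 - ACCURACY) * payoff[(act, other)]

def cdt_value(act):
    # The prediction is already fixed, so use its marginal probability for either act.
    return P_PREDICT_ONE * payoff[(act, "one")] + (1 - P_PREDICT_ONE) * payoff[(act, "two")]

print("EDT:", round(edt_value("one")), round(edt_value("two")))   # 990000 11000 -> one-boxing maximises
print("CDT:", round(cdt_value("one")), round(cdt_value("two")))   # 500000 501000 -> two-boxing maximises
```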

Raphael van Riel: Truth According to a Model

The present paper argues that an understanding of the functioning of the operator 'according to_' enables us to fully describe

  • the logical form of various kinds of sentences about models,
  • the metaphysics of models,
  • identity-conditions for models, and
  • the way models represent their target systems.

This interpretation shares some features with accounts that build upon pretense theory. It is shown, however, that focus on pretense is, at best, misleading within a theory of scientific modeling.

Sam Sanders: On Models, Continuity and Infinitesimals

We discuss a necessary property of scientific models, namely 'continuity in the model parameters', and identify a method which guarantees this continuity property, namely the use of 'computable' or 'constructive' Mathematics. As the latter is quite far removed from the mathematical practice in Physics and Engineering, we discuss a recent alternative formulation of computable/constructive Mathematics in terms of infinitesimals from Nonstandard Analysis, which is closer to said mathematical practice, i.e. an informal 'calculus with infinitesimals'. Finally, we provide an example from the Philosophy of Mathematics literature where the continuity property is violated 'by definition'.

Jan Sprenger: Could Popper have been a Bayesian? On the Falsification of Statistical Hypotheses.

Karl R. Popper was a fervent opponent of Carnap's logical probability approach, and more generally speaking, any inductivist logic of inference. On the other hand, Popper never developed a full account of reasoning under uncertainty and testing statistical hypotheses. Concrete proposals for falsifying a statistical hypothesis (e.g., Gillies 1971) were met with devastating criticism in the literature (e.g., Spielman 1974). So the problem of applying falsificationism to statistical hypothesis testing persists.

In this contribution, I explore whether an instrumental Bayesian approach can help to solve this problem. For instance, for Gelman and Shalizi (2013), the testing of complex statistical models combines a hypothetico-deductive methodology (similar to Popper's own approach) with the technical tools of Bayesian statistics. But also other proposals, like Bernardo's (1999) decision-theoretic approach to hypothesis testing, display a surprising similarity to Popper's principal ideas.

After discussing the sustainability of such hybrid views on statistical inference, I conclude that there is no incompatibility in principle between Popperian falsification of scientific hypotheses and an instrumental Bayesian philosophy of statistical inference. Also from a historical point of view, the compatibility thesis can be supported by some sections of Popper's 1934/59 monograph.

Roger Stanev: Data and Safety Monitoring Board and the Ratio Decidendi of the Trial

In phase-III randomized controlled studies, it is incumbent upon the Data and Safety Monitoring Board (DSMB) to articulate not only the statistical monitoring rule guiding the study, but also the early stopping principle in cases of early stopping. In my paper I argue that the articulation of the early stopping principle (at the time of the DSMB's interim monitoring decision) should occur in much the same way that a judge in a court of law announces the ruling of a case, i.e., the ratio decidendi of the case. This ratio connects the material facts of the case to general legal principles. Similarly, my decision framework for early stopping provides the needed connection between the facts of the trial and general early stopping principles. In my paper I elaborate on my decision framework for early stopping by expanding on its qualitative form.

Susan G. Sterrett: Models of Interventions

When scientific knowledge is to be put into practice in an attempt to deal with a large-scale problem such as an invasive species or global warming, there are not only scientific models of organisms, phenomena, and processes involved: there is also what might be called a model of intervention. Whether the model of the (planned) intervention is explicitly stated or not, it is as much a matter of concern and evaluation as the scientific models appealed to in describing or promoting the intervention are. In this talk, I consider what's involved in constructing (or reconstructing) such a model of intervention.

Peter Stone and Koji Kagotani: Optimal Committee Performance: Size versus Diversity

The Condorcet Jury Theorem (CJT) established long ago that when it comes to decision-making, size matters. Other things equal, a large committee will outperform a small one. But a growing literature suggests that difference matters as well as size. Other things equal, a heterogeneous committee, with people representing different backgrounds and perspectives, will outperform a committee of clones. Recent generalizations of the CJT model demonstrate the truth of this claim, although they also note that heterogeneity only produces beneficial effects under moderately restrictive circumstances. This paper uses these generalizations to compare the respective merits of size and difference. It then makes policy recommendations based upon this comparison.
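As background, the sketch below computes the classic CJT quantity for homogeneous, independent voters; the heterogeneity and dependence addressed by the paper's generalisations are deliberately left out.

```python
# Illustrative only: probability that a simple majority of n independent voters, each
# correct with probability p, reaches the correct decision (the classic CJT setting with
# homogeneous voters). The paper's generalisations to heterogeneous committees are not
# reproduced here.
from math import comb

def majority_correct(n, p):
    """P(majority correct) for odd n, independent voters of competence p."""
    assert n % 2 == 1, "use an odd committee size to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n // 2) + 1, n + 1))

for n in (3, 11, 51):
    print(n, round(majority_correct(n, 0.6), 4))
# e.g. 3 -> 0.648, 11 -> 0.7535; the probability approaches 1 as n grows when p > 1/2.
```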

Michael Strevens: Idealization, Prediction, Difference-Making

Every model leaves out or distorts some factors that are causally connected to its target phenomena – the phenomena that it seeks to predict or explain. If we want to make predictions, and we want to base decisions on those predictions, what is it safe to omit or to simplify, and what ought a causal model to capture fully and correctly? A schematic answer: the factors that matter are those that make a difference to the target phenomena. There are several ways to understand the notion of difference-making. Which are the most useful to the forecaster, to the decision-maker? This paper advances a view.

Watch the video abstract on "MCMP at First Sight" here!

Ekaterina Svetlova: Financial Models as Decision-Making Tools

The philosophy of science and STS increasingly pay attention to the fact that models “travel” from the world of science to the pragmatic fields of their application and become instruments of decision-making. Outside of science, models play different roles and are used differently than in purely scientific contexts. Taking some financial models as an example, the paper compares various patterns of model use and highlights the major functions of models in financial decision-making. The argumentation is based upon an empirical study.

Claudia Tebaldi: Making Sense of Multiple Climate Models' Projections

In the last decade or so the climate change research community has adopted multi-model ensemble projections as the standard paradigm for the characterization of future climate changes. Why multiple models, and how we reconcile and synthesize -- or fail to -- their different projections, even under the same scenarios of future greenhouse gas emissions, will be the themes of my talk.
Multi-model ensembles are fundamental to exploring an important source of uncertainty, that of model structural assumptions. Different models have different strengths and weaknesses, and how we use observations to diagnose those strengths and weaknesses, and then how we translate model performance into a measure of model reliability, are currently open research questions. The inter-dependencies among models and the existence of common errors and biases are also a challenge to the interpretation of statistics from multi-model output. All this constitutes an interesting research field in the abstract, whose most current directions I will try to sketch in my talk, but it is also critical to understand in the course of utilizing model output for practical purposes, to inform policy and decision making for adaptation and mitigation.

Watch the video abstract on "MCMP at First Sight" here!

Mariam Thalos: Expectational v. Instrumental Reasoning: Why Statistics Matter

Classical decision theory is considered, at least by some, the best-worked-out articulation of the idea that practical reason is thoroughly instrumental. This fundamental philosophical idea is expressed perhaps most forcefully in Bertrand Russell’s dictum that “rational choice” signifies choice of the right means to your ends, and has nothing whatever to do with the choice of ends themselves. Accordingly, we will refer to this idea as instrumentalism.

On a very natural view, instrumental reasoning is also an all-things-considered enterprise, concerned not merely with the realization of individual ends, but also with coordinating achievement of their sum in the current context. Thus the problem of how to advance a complex menu of potentially conflicting concerns simultaneously, in the here and now, is the central and sole focus of decision theory. The most celebrated solution to this problem is the cornerstone of contemporary decision science: the very notion of utility. The concept of a totality of ranked concerns is the core of the concept of utility. The theory built upon it—the theory of Expected Utility (EU)—is then thought to give expression to the maximizing conception of practical rationality, according to which the agent’s problem is to choose, from among those “moves” currently on offer, the one that best advances the totality of his or her concerns.

Instrumentalism has of course been challenged. Some prominent challengers have contended that there is—as there must be—room for reasoning about goals or concerns as well within practical reasoning. Practical reasoning, say these challengers, must also regulate the adoption of goals, not merely work out how best to advance their sum. Other challengers suggest that articulation of options, in standard applications of classical decision theory, already introduces an element of value, so is not value-neutral in the way that its apologists purport. In practical life, “frames” render the service of articulating options, and there can be no assurance that one has achieved a value-neutral frame. Still other challengers contend that classical decision theory is rather ham-fisted, handling inadequately the demands of instrumental reasoning in exceptional risk conditions. Hampton (1994) can be read this way, as can many of the architects of the different varieties of non-expected utility (non-EU) theories (for example Machina 1982, and Quiggin 1982). The view I shall be advancing here is a version of this last position.

Utility theories (both EU and non-EU) offer numericalizations of certain principles meant to regulate choice—numerical or algebraic representations of the axioms underpinning the fundamental theory of risk. I shall refer to these as risk-numericalizing theories of decision. I shall argue that risk-numericalizing theories (whether we are referring to representations or to the underlying axioms) are simply not full-throated theories of instrumental reasoning. In other words, they are not theories of the (best) means to one’s ends. The contrast between instrumental reasoning and the reasoning implicit in risk-numericalizing theories is perhaps starkest in contexts of nonnormal statistical distributions. The reason for the divergence between instrumental reasoning and risk-numericalizing reasoning is fundamentally this: risk-numericalizing reasoning is concerned with weighting straight utility in some consistent, universally applicable way—so that an option has the numerical value it does in relation to two (important) categories of considerations: (1) our present goals all considered; and (2) all potential opportunities for present action. One might think (and not unreasonably) that this renders risk-numericalizing reasoning a very admirable rendering of instrumental reasoning. But an option, in this framework, is not evaluated—not numerically nor in any other way—in relation to other potential decisions. Decisions, in other words, are not automatically construed as made in the context of other decisions—instead, they’re treated as independent. And this means that future potential opportunities are not taken into consideration. In other words, numericalized theories of the sort that have become familiar evaluate only present action, but do little or nothing to take account of trajectories of decisions through an agent’s global option space over a lifetime or a substantial portion thereof. But from such a perspective, decisions are positively not independent; they are indeed networked, and the measure of a decision—in instrumental terms—depends on how the decision is itself networked with other decisions. (In some particular cases, the value of any given prospect—for example, the value of brushing one’s teeth on a given occasion—is networked with how often we expect to repeat the action—how often one brushes those teeth.) And so it is no wonder that risk-numericalizing reasoning cannot capture the full richness of instrumental reasoning, and in quite ordinary circumstances.

This essay will make good on these contentions through examples that involve only money. We will thus avoid excessively controversial cases, and avoid too the appearance of cheating.

Francesca Toni, Robert Craven and Xiuyi Fan: Transparent Rational Decisions by Argumentation

There is a well-documented indication that several applications would benefit from the transparency afforded by argumentation to support decision-making where standard decision theory is not useful, e.g. in healthcare. However, to date, research on argumentation-based decision making has only been partially successful in realising its promise. In our view this is predominantly due to its lack of theoretical validation in the form of rationality properties and its disregard for the interplay between individual rationality and social good when used in collaborative settings. We discuss how to address these challenges for the promise of argumentation-based decision-making to be fully realised as a principled mechanism for transparent and rational decision-making.
