Journal of Mechanical Design

GUEST EDITORIAL: NOT SO SUBTLE SUBTLETIES REGARDING PREFERENCES

11/30/2014 | Author: George A. Hazelrigg
J. Mech. Des. 136(12), 120301 (Dec 01, 2014) (2 pages)
doi: 10.1115/1.4028940
The past two decades have seen a significant shift in perspective on engineering design, from the view that design is a matter of problem solving to the view that it is decision making. This shift was encouraged by the ABET 2000 standards, which define design as decision making, and by the numerous papers that have since appeared addressing various aspects of decision theory applied to design. It has opened engineering to the richness of the mathematics of decision theory, which, with its rigorous treatment of uncertainty, holds the potential to deal far more realistically with design than other views. On the other hand, decision theory defines a decision in a very precise and perhaps limited way that needs to be recognized. Failure to acknowledge this has given rise to a number of spurious theories devised to cope with the resulting issues. The purpose of this brief note is to explicitly acknowledge some of the conditions that underlie the mathematics of decision theory and to point to their consequences.

To begin, we may observe that decision making is synonymous with optimization. Both comprise the same elements: actions and objectives. To be specific, both require a preference statement. In optimization, we refer to the preference statement as the objective function. It is important to note that the objective function must exist and be valid if the optimization is to yield a useful result. The same is true of the preference statement in the case of a decision.

An objective function is valid if and only if it rank orders all conceivable outcomes precisely as the decision maker does. Otherwise, it is not valid. And, in the case of uncertainty, it must have additional properties. But, in any case, the only purpose of the objective function is to produce a rank ordering. Hence, only relative values are important, and any positive affine transformation of a valid objective function yields a valid objective function.
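
As a minimal illustration of this last point (a sketch, not part of the editorial; the outcome values are invented), a positive affine transformation u' = a*u + b with a > 0 leaves the rank ordering of outcomes unchanged:

```python
# Sketch: a valid objective function u1 and its positive affine transform u2
# rank the same (invented) outcomes identically, so either is equally valid.

outcomes = {"A": 3.2, "B": 7.5, "C": 5.1}   # hypothetical outcome values

def u1(value):
    return value                  # original objective function

def u2(value):
    return 4.0 * value + 10.0     # positive affine transform: a = 4 > 0, b = 10

rank1 = sorted(outcomes, key=lambda k: u1(outcomes[k]), reverse=True)
rank2 = sorted(outcomes, key=lambda k: u2(outcomes[k]), reverse=True)
assert rank1 == rank2             # both orderings are ['B', 'C', 'A']
```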

The purpose of an objective function is to map outcomes onto the real number line (R^1) such that outcomes mapped to points farther to the right are preferred over outcomes mapped to points farther to the left. When all achievable outcomes are mapped onto the real number line, optimization is achieved by choosing the input conditions that correspond to the rightmost point. Two conditions are necessary for the existence of an objective function (a brief sketch of both checks follows the list):

  1. Preferences must exist—given outcomes x and y, one and only one of the following must hold: x is preferred to y, y is preferred to x, or the decision maker is indifferent between x and y.
  2. Transitivity—given outcomes x, y, and z, if x is preferred to y and y is preferred to z, then it must be the case that x is preferred to z.
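
A minimal sketch of these two checks, assuming a small finite set of outcomes and an explicitly enumerated strict-preference relation (the data here are invented, not taken from the editorial):

```python
# Sketch: verify the two conditions above for a small, explicitly listed
# strict-preference relation. Pairs with neither direction asserted are
# treated as indifference (an assumption of this sketch).

from itertools import permutations

outcomes = ["x", "y", "z"]
# (a, b) means "a is preferred to b" (hypothetical data)
prefers = {("x", "y"), ("y", "z"), ("x", "z")}

def complete(outcomes, prefers):
    """Condition 1: for every pair, at most one strict direction is asserted."""
    for a, b in permutations(outcomes, 2):
        if (a, b) in prefers and (b, a) in prefers:
            return False          # both directions asserted: not a preference
    return True

def transitive(outcomes, prefers):
    """Condition 2: a preferred to b and b preferred to c must imply a preferred to c."""
    for a, b, c in permutations(outcomes, 3):
        if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers:
            return False
    return True

print(complete(outcomes, prefers), transitive(outcomes, prefers))  # True True
```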

These simple conditions have profound consequences that are frequently neglected.

The second condition is necessary to enable the mapping of outcomes onto the real number line. Suppose it were violated, that is, x is preferred to y, y is preferred to z, and yet z is preferred to x. Then x would have to map to a point to the right of y, and y would have to map to a point to the right of z, placing z to the left of x; at the same time, z would have to map to a point to the right of x. But z cannot map to a point that is simultaneously to the right of x and to the left of x.

In the development of decision theory, mathematicians recognized the difference between the physical world, which exists outside our mind, and the mental world, which is entirely in our mind. We know what is in our mind, whereas we can only sense the physical world and seek to draw conclusions about it. Thus, uncertainty relates only to the physical world; we are certain about what is in our mind. So, if I express the preference that I prefer vanilla ice cream to chocolate, I am certain about this preference. It would be entirely inappropriate to assign a probability to it, saying, “With probability 0.90, I prefer vanilla to chocolate.” If I were to do this, I could no longer guarantee that condition 2 is met, and a valid objective function could not be written. This is why, in the derivation of utility theory, a key assumption is that preferences are clear and distinct over the full range of conceivable outcomes. This is an absolutely mandatory condition. Without it, we lose all of optimization theory and all of decision theory.

Since preferences must be clear and distinct over the full range of conceivable outcomes, they cannot be difficult to elicit. Hence, any formalism designed to elicit preferences—the Analytic Hierarchy Process, for example—is at best superfluous, and more likely flawed. Indeed, any formalism that puts (active) constraints on a preference ordering, for example, weighted attribute preferences, is flawed. A clear and distinct preference could be, “I want to make money and more is better.”

On the other hand, there are many things over which we might not have a preference. For example, I have no preference regarding the weather at the north pole of Saturn’s moon Titan, nor do I have a preference regarding the properties of the material used in the cylinder walls of the engine in my car. In cases such as these, we fail to meet condition 1. It would be wrong to think that we could create a valid formalism to elicit a preference when the preference does not exist. Yet, this is exactly what many formalisms for the determination of preferences purport to do. The 19th-century mathematician Weierstrass presented a problem that illustrates the fallacy in attempting to find a solution in a case where one does not exist. The problem he posed is: solve for the largest positive integer. His solution was as follows. Let n be a positive integer. Every positive integer has a square, n^2, which is also a positive integer. However, if n is the largest positive integer, it must be at least as large as n^2, that is, n ≥ n^2. Because n is positive, dividing both sides by n gives 1 ≥ n, so the inequality has one and only one positive-integer solution, n = 1. Ergo, 1 is the largest positive integer. While the logic is impeccable, the obviously absurd result illustrates the danger of trying to solve for something that does not exist, and it can serve as a warning against trying to place preferences on attributes over which the decision maker has no preference. But when the decision maker does have a preference, it must be clear and distinct, so there is no need for a formalism to elicit it. In those cases where one might be inclined to use a formalism, the clear signal should be that we are asking for the expression of a preference that does not exist, and we need to look instead for the relevant preference that is clear and distinct.
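
The argument can be written compactly (a restatement of the steps above; nothing new is claimed):

```latex
% Compact restatement of the "largest positive integer" argument.
\[
  n \in \mathbb{Z}^{+},\quad n \ge n^{2}
  \;\Longrightarrow\; n(1-n) \ge 0
  \;\Longrightarrow\; n \le 1
  \;\Longrightarrow\; n = 1 .
\]
% The absurdity stems entirely from the false premise that a largest
% positive integer exists, not from any flaw in the intermediate steps.
```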

A common formalism for the elicitation of preferences is a linearly independent weighted sum of attributes. This form is rarely valid. The following example illustrates this. Suppose we are comparing two alternative airplane designs in terms of, say, ten attributes, including range, speed, payload, acquisition cost, reliability, maintainability, operating cost, safety, and so on. Let us say design A rates a score of 7 out of 10 for each attribute, and we weight the attributes evenly, giving the design a total score of 70. Design B is outstanding in nine of the attributes, scoring 10 in every one of them. But it crashes almost every time it flies, giving it a score of 0 for safety. This design gets a total score of 90, well above design A. Yet no one would fly the airplane; it is clearly the worse design. This preference is not properly reflected by the linear objective function. For linear preference functions such as this, pathological cases of this kind can almost always be constructed.
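
The arithmetic of this example can be sketched directly (the two attribute names beyond those listed in the editorial, and the unit weights, are assumptions made only for illustration):

```python
# Sketch of the weighted-sum scoring described above. Only the scores 7, 10,
# and 0 come from the editorial's example; the last two attribute names and
# the equal unit weights are assumptions.

attributes = ["range", "speed", "payload", "acquisition cost", "reliability",
              "maintainability", "operating cost", "safety", "comfort", "noise"]
weights = {a: 1.0 for a in attributes}          # evenly weighted

design_A = {a: 7 for a in attributes}           # a solid 7 in every attribute
design_B = {a: 10 for a in attributes}          # outstanding in nine attributes...
design_B["safety"] = 0                          # ...but it crashes almost every flight

def weighted_score(design):
    return sum(weights[a] * design[a] for a in attributes)

print(weighted_score(design_A))   # 70.0
print(weighted_score(design_B))   # 90.0 -- ranked above A, yet no one would fly it
```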

A common approach to eliminating the pathological cases that cause objective functions to yield intuitively invalid (often absurd) results is to impose constraints on the design, for example, requiring that safety exceed some minimum level. The problem with this approach is that the constraints, not the preference, then dictate the design, and we wind up not getting what we want.
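
Continuing the airplane example in a self-contained sketch (the safety threshold is an invented number), the chosen design flips with the arbitrary threshold rather than following the stated preference:

```python
# Sketch: the same two hypothetical designs, now screened by a minimum-safety
# constraint before their weighted scores are compared. The threshold is an
# arbitrary assumption, and that is precisely the problem: the outcome hinges
# on this number rather than on the stated preference.

designs = {
    "A": {"safety": 7, "total_score": 70},
    "B": {"safety": 0, "total_score": 90},
}

def choose(designs, min_safety):
    feasible = {k: v for k, v in designs.items() if v["safety"] >= min_safety}
    return max(feasible, key=lambda k: feasible[k]["total_score"])

print(choose(designs, min_safety=5))   # 'A': B is screened out by the constraint
print(choose(designs, min_safety=0))   # 'B': relax the threshold and the choice flips
```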

In modeling the physical world, we are taught that, generally, the more detail we put into a model, the more accurately it represents physical reality. Preference models are the exact opposite. Detail in the model, such as the specification of a particular linearly additive form, constitutes constraints that prevent the model from accurately representing the preference. Ergo, the best preference model is the simplest model: “more money is preferred to less money.” This may be a very simple preference statement, but it does not mean that the computation of money (the profit from a particular venture) will use a simple model. Indeed, the computational process for the determination of profit may be highly complex, involving the relationship of profit to the design parameters, and it is in this computation that we embed all uncertainty.
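
A minimal sketch of this separation, assuming for simplicity a risk-neutral decision maker so that expected profit itself serves as the objective (the profit model, its numbers, and the candidate design parameters are all invented for illustration):

```python
# Sketch: the preference statement is simply "more money is better"; all of
# the complexity, including uncertainty, lives inside the (hypothetical)
# profit model, not in the preference.

import random

def profit(design_parameter, rng):
    """Invented profit model: revenue and cost depend on the design parameter,
    and demand is uncertain."""
    demand = rng.gauss(1000.0, 200.0) * design_parameter          # uncertain demand
    revenue = 50.0 * demand
    cost = 20000.0 + 30.0 * demand + 5000.0 * design_parameter**2
    return revenue - cost

def expected_profit(design_parameter, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    samples = (profit(design_parameter, rng) for _ in range(n_samples))
    return sum(samples) / n_samples

# Optimization then just picks the candidate that maps farthest to the right.
candidates = [0.5, 1.0, 1.5, 2.0]
best = max(candidates, key=expected_profit)
print(best, expected_profit(best))
```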

Finally, it is important to note that decisions are made only by individuals. Groups have emergent behaviors; they do not make decisions, and group preferences, in general, do not exist. The emergent behavior of a group depends both on the decisions made by the individuals who comprise the group and on the rules by which they interact. It is mathematically incorrect to apply decision theory to groups. The mathematics of group behavior is game theory, not decision theory. We cannot change people’s preferences, but we may be able to create a system of incentives and rewards that will align their decisions with a common design goal (profit, for example). The mathematics of this is called reverse game theory (how we should design the rules of the game to get the desired results) or mechanism design. It is an area of mathematics that is receiving increasing attention, and it holds considerable potential to improve the design of large systems.

Suppose, however, that we could find two engineers whose preferences are identical. Could we use decision theory in this case? The answer is no, not in general. Not only would their preferences have to be identical, but their beliefs, namely, their assessments of all probabilities relating to the design, would also have to be identical. The likelihood of this coincidence is, for practical purposes, zero.

To summarize, then: the necessary conditions for the existence of preferences impose strict requirements on the elicitation of preferences. Simply put, preferences must be clear and distinct, which means that there is no need for elaborate formalisms for their determination. When the relevant preference is correctly identified, there should be no difficulty in determining it.

Copyright © 2014 by ASME