The ignorant charity problem (person A’s perspective)

Speaking of conversations with Mr. Donovan Greene, here’s a bit of game theory based on a common social dilemma that he posed to me. The problem and the intuition are his; the mathematical development is mine.

Person A doesn’t like doing activity X, but knows that Person B does. A asks B to do X. B doesn’t want to do X (for whatever reason), but feels somehow obligated to A. Neither is all that happy. It seems like the prisoner’s dilemma, except that when both co-operate, both suffer. Since we can think of this as akin to a “market failure” of the social sphere, there are probably a few lessons that can be extrapolated and applied elsewhere.

The Legionnaire

As a concrete example, envision a situation where A has been assigned to write a video game review for a new RPG, but he doesn’t like RPGs. A knows B likes RPGs, but maybe B is really busy right now or doesn’t like the game studio.

Let’s mock up some variables and discuss!

A(x1, x2, x3…) = Probability density function predicting whether A will ask B to do X
D_a = A’s desire to do X.
D_b = B’s desire to do X.

Because these are statements of desire, they may be interpreted as market prices and therefore expressed in dollars. D_a may not be negative even if A hates RPGs, because A wants the money; at a minimum, he thinks B would enjoy X for free and wouldn’t mind A pocketing the money. However, A is ignorant of B’s desire and is just guessing, which means we need…you guessed it, probability!

E(D_b) = A’s expectation of D_b.

Probably, A has either observed that B has enjoyed X in the past, or has observed that people similar to B usually enjoy X (“I think video games are a waste of time, but techie types seem to like them”). The latter case would be an interesting topic to develop all on its own, but for now we’ll stick with the former:

E(D_b) = mu_b = average of previous observations.
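
In code, that estimate is nothing fancier than a sample mean. A minimal sketch, where the observation values are made-up dollar figures purely for illustration:

```python
# A's estimate of B's desire to do X, in dollars.
# The observation values here are hypothetical, chosen for illustration.
past_observations = [15.0, 25.0, 20.0]  # how much B seemed to enjoy X before

def expected_desire(observations):
    """E(D_b) = mu_b, the average of previous observations."""
    return sum(observations) / len(observations)

mu_b = expected_desire(past_observations)
print(mu_b)  # 20.0
```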

From the statement of the problem, we know that E(D_b) > D_b. There are three possible relations between D_a and D_b. If D_a < D_b, then B is the better candidate to do X; if D_a > D_b, then A should just do X himself. If D_a = D_b then it’s all a wash.
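
Here’s that three-way comparison as a toy function. The “who benefits” framing is mine, not part of the original problem statement:

```python
def who_benefits(d_a, d_b):
    """Compare A's and B's (dollar-valued) desire to do X."""
    if d_a < d_b:
        return "B"    # B values doing X more; asking B is the efficient move
    if d_a > d_b:
        return "A"    # A values it more and should just do X himself
    return "wash"     # equal desire: nothing gained either way

print(who_benefits(d_a=2.0, d_b=20.0))  # "B"
```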

I could stop here, but I think it’s important to note that A may be aware of a possible future profit or cost, depending on the relationship between A and B. If E(D_b) > 0, then A might think he can ask for a favor later in return for his “expected” altruism (altruism in the sense of social reciprocity). If D_a < E(D_b) < 0, then A may feel obligated to do a favor for B in the future.

C_a = the future profit or cost (obligation) to A for asking B to do X.

So far, the probability density function describing A's decision to ask B to do X depends on the following factors: A(C_a, mu_b, D_a).
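
The post doesn’t commit to a functional form, so here is purely my assumption of one plausible sketch: treat A’s probability of asking as a logistic function of the net expected dollar gain from asking, mu_b + C_a − D_a. The logistic shape and the scale parameter are illustrative, not derived:

```python
import math

def prob_ask(c_a, mu_b, d_a, scale=10.0):
    """A hypothetical form for A(C_a, mu_b, D_a): the probability that A
    asks B to do X, rising smoothly with the net expected dollar gain of
    asking (mu_b + c_a) over A doing X himself (d_a). The logistic curve
    and the scale parameter are illustrative assumptions, not from the post.
    """
    net_gain = mu_b + c_a - d_a
    return 1.0 / (1.0 + math.exp(-net_gain / scale))

# A barely wants to do X himself (D_a = 2, he'd take the money), expects B
# to enjoy it (mu_b = 20), and anticipates a small future obligation (C_a = -2):
print(round(prob_ask(c_a=-2.0, mu_b=20.0, d_a=2.0), 2))  # 0.83
```

With those (made-up) numbers, A asks B about 83% of the time; crank up D_a or the expected obligation C_a and the probability falls, which matches the qualitative story above.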

Next time I'm going to drill down into C_a using Vox's SSMV archetypes. Afterward, I'll tackle B's perspective.
