Friday, July 2nd, 2010 08:37 am
Claim: Ethical systems (religious or otherwise) are heuristics adopted by societies to cope with the intractable complexity of computing moral actions.

A bit of justification:

Ethical dilemmas come up all the time in our lives (both in principle and in practice). We need some standard by which to judge and compare competing solutions, but even that starting point is difficult to pin down. (I've tended to opt for something along the lines of "maximize the space-time integral of good" or "...the space-time integral of joy", but it's hard to make that well-defined.)
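
(To sketch what I mean by that integral, with no claim that any of it is actually well-defined: the criterion would be something like

\[ a^* \;=\; \arg\max_{a} \int \! dt \int \! d^3x \; g_a(x, t), \]

where \(g_a(x, t)\) is the density of "good" (or "joy") at place \(x\) and time \(t\) if action \(a\) is taken, and \(a^*\) is the action you're supposed to choose. The notation is just shorthand for the idea; defining \(g\) precisely is exactly the part that's hard to pin down.)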

But that's the easy part. Once you sit down with some criterion in mind, you quickly realize that predicting and calculating the consequences of any action well enough to apply the criterion is essentially impossible. You need to consider not just the immediate impact of each choice but also the ripple effects of those choices as they spread throughout the world. Given that the world is a chaotic system, it's probably provable that you can never really know what final effect your actions would have. ("If a time machine took you to 1935, should you kill Hitler?" "No, killing is bad." "But you might avert the Holocaust, so do it." "But what if that changed the outcome of the Cold War and led to nuclear annihilation?" ...)
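
(A toy illustration of that chaotic sensitivity, purely as a sketch and nothing to do with ethics in particular: in the logistic map, a standard textbook chaotic system, two starting points that differ by one part in ten billion land in completely different places within a few dozen steps.)

```python
# Sketch: sensitive dependence on initial conditions in the logistic map
# (r = 4 puts the map in its chaotic regime). Two trajectories that start
# a hair's breadth apart diverge to order-one differences within a few
# dozen steps, which is the sense in which tiny differences in "initial
# actions" make long-range consequences unpredictable.

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two nearly identical starting points
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```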

We clearly can't demand that every individual foresee and be responsible for the consequences of their actions to the umpteenth degree. So instead, society adopts ethical systems (via religions, legal systems, or whatever else) that provide rough but easily understood guidance. The heuristics are chosen so that they lead to pretty good solutions most of the time, but their real importance is that society also indemnifies individuals against negative consequences that might occur despite following those rules. As long as you follow the heuristics as best you can, you won't be punished harshly if things go badly as a result.

This obviously leads to sub-optimal decisions in many cases (and can lead to truly terrible ones at times), and the heuristics still don't always make the "allowed" course(s) of action clear, but given the difficulty of the problem, a system like this may be the best that we can do. So the question becomes: how and when do societies update and improve their heuristics?
Saturday, July 3rd, 2010 12:56 am (UTC)

Claim: Ethical systems (religious or otherwise) are heuristics adopted by societies to cope with the intractable complexity of computing moral actions.

I've never quite understood what the difference is between ethics and morals... I know there are some subtle differences. But to the extent that they are synonyms, isn't this statement kind of circular? Ethical systems are there to help compute ethical actions. But if the ethical system is determined by whatever helps compute ethical actions best, what are ethical actions if not what society computes from its system of ethics?

(Not that I'm claiming to have a good answer to this.)
Saturday, July 3rd, 2010 12:41 pm (UTC)
The point of my claim is to acknowledge that there is value in having a set of simple agreed-upon rules that merely approximate the best choice in any given situation. To use my mathematical analogy, I'm drawing a distinction here between the functions used to measure the density of "good" inside the space-time integral and the simple heuristics endorsed by society that don't require thinking about the integral at all.

I have at times felt a bit dismissive of those simple heuristics ("Thou shalt not steal", etc.) since they clearly fail to capture so much. But I've come to realize that rejecting these not-bad approximations in favor of the fundamental definition is a lot like my youthful rejection of statistical mechanics as "just an approximation to the real physics": yes, that's technically true, but since making approximations is (provably) necessary in practice, the approximations are deeply important nonetheless.

I don't know if that entirely clarifies my point, but I hope it gives you some idea of where the non-circular part of my claim is. :)
Saturday, July 3rd, 2010 04:44 pm (UTC)

I don't know if that entirely clarifies my point, but I hope it gives you some idea of where the non-circular part of my claim is. :)

Yeah, I see your point and I agree. Moral heuristics are useful in that nobody has the ability to compute every consequence of their actions; even seeing one or two steps ahead is difficult.

I guess what I found interesting is that you see this as the hard part, while coming up with a definition of what is "good" in the first place is the easy part. I think there are some pretty tough issues there, and I might lean more towards saying that's the hard part.

For example, you mention measuring the "density of good" inside an integral. However, I don't think there is such a thing as a density of good--good seems like something with a highly non-local character that involves complex relationships between groups of people. When something happens to one person, it can be bad or good for them, but it can also be bad or good for a multitude of people connected to them in various ways... and the polarity of that doesn't necessarily match up between different people, especially if it's between people from very different cultures.
Sunday, July 4th, 2010 03:32 am (UTC)
I guess what I found interesting is that you see this as the hard part but coming up with a definition of what is "good" in the first place is the easy part.

As I alluded to earlier, I see a strong analogy between my thought process here and my feelings about statmech in physics. My comment about the underlying definitions being "easy" was somewhat tongue in cheek, meant in much the same sense that I might say "Determining the ultimate theory of fundamental physics is the easy part compared to computing the macroscopic implications of that theory in any specific instance." Namely, the "easy part" is actually a tremendous challenge and an unambiguous success at it would rank among the foremost achievements of the human mind, but extrapolating from fundamental principles to human-scale implications is far harder (and might be provably impossible in most cases due to implications of chaos theory).

(I'm not sure that the analogy really works, of course, since among other things I'm not entirely sure that fundamental ethics exists in a universal form in the way that fundamental physics seems to. But that's another long discussion of its own.)
Sunday, July 4th, 2010 05:04 am (UTC)
Wow, that's a great analogy!

And it actually spurs a new thought for me, which may be relevant here. In the same way that you can make a lot of progress on the emergent physics without having your theory of everything complete... presumably, you can come up with useful approximate ethical rules even if you don't fully understand more fundamental things like the definition of good.

Thinking of it that way makes me appreciate the importance of what you're saying more. Perhaps I have placed too much emphasis on trying to figure out the fundamental things in ethics first, which is why... I have to admit... I have never gotten very far in understanding ethics, and have often ended up getting confused or discouraged.
Saturday, July 3rd, 2010 10:43 pm (UTC)
There is a standard term for this: "bright-line" laws are those which are obviously crude approximations to reality, but which remove a lot of uncertainty by being clear and easy to interpret.

As you say, how to make approximations is a big part of making things work in practice.