Claim: Ethical systems (religious or otherwise) are heuristics adopted by societies to cope with the intractable complexity of computing moral actions.
A bit of justification:
Ethical dilemmas come up all the time in our lives (both in principle and in practice). We need some standard by which to judge and compare competing solutions, but even that starting point is difficult to pin down. (I've tended to opt for something along the lines of "maximize the space-time integral of good" or "...the space-time integral of joy", but it's hard to make that well-defined.)
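(If I had to write that criterion down explicitly, it might look something like

    A^{*} = \arg\max_{A} \int dt \int d^{3}x \; g(x, t \mid A)

where g is a hypothetical "density of good" at each point in space-time and A ranges over the available actions. Every symbol there is notation I'm inventing on the spot, and of course all of the hard work is hidden in defining g.)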
But that's the easy part. Once you sit down with some criterion in mind, you quickly realize that predicting and calculating the consequences of any action well enough to apply the criterion is essentially impossible. You need to consider not just the immediate impact of each choice but also the ripple effects of those choices as they spread throughout the world. Given that the world is a chaotic system, it's probably provable that you can never really know what final effect your actions would have. ("If a time machine took you to 1935, should you kill Hitler?" "No, killing is bad." "But you might avert the Holocaust, so do it." "But what if that changed the outcome of the Cold War and led to nuclear annihilation?" ...)
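(To make the chaos point concrete, here's a toy sketch in Python that I'm including purely for illustration; the logistic map is the textbook example of a chaotic system and has nothing to do with ethics per se. Two "worlds" that start out differing by one part in a billion become completely different within a few dozen steps:

    # A toy illustration of sensitive dependence on initial conditions.
    # The logistic map is just a stand-in for any chaotic system.

    def logistic_map(x, r=4.0):
        """One step of the logistic map: x -> r * x * (1 - x)."""
        return r * x * (1.0 - x)

    x_a = 0.400000000  # the world after one choice
    x_b = 0.400000001  # the world after an almost identical choice

    for step in range(1, 51):
        x_a = logistic_map(x_a)
        x_b = logistic_map(x_b)
        if step % 10 == 0:
            # The gap grows roughly exponentially until it saturates,
            # so long-range prediction of "final effects" is hopeless.
            print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.6f}")

No amount of extra care in measuring the initial conditions buys you more than a few additional steps of foresight.)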
We clearly can't demand that every individual foresee and be responsible for the consequences of their actions to the umpteenth degree. So instead, society adopts ethical systems (via religions, legal systems, or whatever else) that provide rough but easily understood guidance. The heuristics are chosen so that they lead to pretty good solutions most of the time, but their real importance is that society also indemnifies individuals against negative consequences that might occur despite following those rules: as long as you follow the heuristics as best you can, you won't be punished harshly if things go badly as a result.
This obviously leads to sub-optimal decisions in many cases (and can lead to truly terrible ones at times), and the heuristics still don't always make the "allowed" course(s) of action clear, but given the difficulty of the problem, a system like this may be the best that we can do. So then the question is: how and when do societies update and improve their heuristics?
no subject
I don't know if that entirely clarifies my point, but I hope it gives you some idea of where the non-circular part of my claim is. :)
Yeah, I see your point and I agree. Moral heuristics are useful in that nobody has the ability to compute every consequence of their actions, and even seeing past one or two steps is difficult.
I guess what I found interesting is that you see this as the hard part, while coming up with a definition of what is "good" in the first place is the easy part. I think there are some pretty tough issues there, and I might lean more towards saying that's the hard part.
For example, you mention measuring the "density of good" inside an integral. However, I don't think there is such a thing as a density of good: good seems like something with a highly non-local character, involving complex relationships between groups of people. When something happens to one person, it can be bad or good for them, but it can also be bad or good for a multitude of people connected to them in various ways... and the polarity of that doesn't necessarily match up between different people, especially if it's between people from very different cultures.
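To put the worry in symbols (notation I'm making up just for illustration): a local density only captures the first term of something like

    G = \sum_i g_i + \sum_{i<j} g_{ij} + \sum_{i<j<k} g_{ijk} + \cdots

where g_i is the good experienced by person i alone, g_{ij} depends on the relationship between persons i and j, and so on. The relational terms need not even have the same sign as the individual ones, and if they dominate, there may be no meaningful local density left to integrate.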
no subject
As I alluded to earlier, I see a strong analogy between my thought process here and my feelings about statmech in physics. My comment about the underlying definitions being "easy" was somewhat tongue-in-cheek, meant in much the same sense that I might say, "Determining the ultimate theory of fundamental physics is the easy part compared to computing the macroscopic implications of that theory in any specific instance." Namely, the "easy part" is actually a tremendous challenge, and an unambiguous success at it would rank among the foremost achievements of the human mind, but extrapolating from fundamental principles to human-scale implications is far harder (and might be provably impossible in most cases due to the implications of chaos theory).
(I'm not sure that the analogy really works, of course, since among other things I'm not entirely sure that fundamental ethics exists in a universal form in the way that fundamental physics seems to. But that's another long discussion of its own.)
no subject
And it actually spurs a new thought for me, which may be relevant here. In the same way that you can make a lot of progress on the emergent physics without having your theory of everything complete... presumably, you can come up with useful approximate ethical rules even if you don't fully understand more fundamental things like the definition of good.
Thinking about it that way makes me appreciate the importance of what you're saying more. Perhaps I have placed too much emphasis on trying to figure out fundamental things first in ethics, which is why... I have to admit... I have never gotten very far in understanding ethics, and have often ended up getting confused or discouraged.