Some time ago I eagerly and delightedly soaked in Gregory Boyd’s Satan and the Problem of Evil, which, for those not in the know, is the single best treatment of theodicy in the English language. I’m gonna say Read It, with the full knowledge that, if you are of the Reformed persuasion, you’re going to have a tough time (maybe).
But this isn’t a review of Boyd’s text.
It’s been at least a year since I read the book, but last night it occurred to me that a significant (in importance, not necessarily size) portion of Boyd’s thesis is awfully similar to Asimov’s Three Laws of Robotics.
Among Boyd’s great points is the simple assertion that God is always doing as much good as He possibly can, but He will not violate human free will to accomplish that good. To summarize briefly:
1. God has established a good portion of His creation with free will, which includes the right to choose for or against Him;
2. the mere existence of that free will means the possibility of evil is ever present;
3. God is always combating evil to the highest degree possible, but He will not violate the first principle to do so.
I’ve probably done a terrible disservice to the thesis, not least because it cannot be summed up so briefly, but those are the pieces relevant to what I’m thinking about today.
Compare Asimov’s Three Laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Is anyone else seeing this? I love it.