
The Universe Doesn’t Do Charity: Bad News for Cosmic Bargain Hunters

An intuitive explanation of conservation of information, showing how it destroys the credibility of Darwinian naturalism

This republished article first appeared on Bill Dembski’s Substack.

Despite its flamboyant title, this article is about a well-defined and sober concept: conservation of information. The term conservation of information sounds technical and suggests something difficult to understand. It’s not. In this brief article I want to cut to the heart of the concept. I’ll show that it is self-evidently true. Additionally, I’ll show how it obliterates the materialistic, nihilistic conception of the world championed by aggressive atheists who use science as a club to beat religion.

If I could go back in time and rename conservation of information with a single word, I’d go with “offload” (verb form) or “offloading” (noun form). When you offload something, you don’t eliminate the load. Rather, you shift it to someone or something else. Moreover, in doing so, you do nothing to reduce the load. In particular, the universe doesn’t step in and through some law of nature make the load less by shifting it. That would be charity. If anything, as a load is transferred, it can become more burdensome.

Offloading goes beyond the underlying idea in such expressions as “you can’t get something for nothing,” “there ain’t no such thing as a free lunch” (abbreviated TANSTAAFL), “there’s no free ride,” “everything comes at a cost,” etc. Underlying “no free lunch” is the principle of sufficient reason: if something exists, it couldn’t just magically materialize; there has to be some reason or cause to account for it.

The idea of offloading is downstream from no free lunch. Offloading considers something that exists and then asks what happens to it as it is shifted elsewhere. Offloading says that as a load is shifted, it doesn’t diminish, though it may actually increase.

To take a silly example, imagine that you have just received a million dollars. The natural question then is where you got that million dollars. Let’s say Alice gave it to you. Explaining how you got the million dollars is thus offloaded to Alice. But that means Alice had to have at least a million dollars—no less but possibly more. And where did Alice get her money? Let’s say from Bob. In that case, explaining how Alice got her money is offloaded to Bob, and he in turn is going to need to have at least a million dollars—indeed, at least as much as Alice. Bottom line: the load doesn’t disappear but merely gets shifted around.

Language offers us a whole array of expressions to capture this idea of offloading. Indeed, the offloading principle is embedded in human language worldwide:

  1. Shifting the burden — Moving the problem (or its consequences) elsewhere instead of fixing the root cause.
  2. Robbing Peter to pay Paul — Taking from one source/area to cover another, achieving no net gain (often implies short-term juggling that leaves the overall situation unresolved).
  3. Undressing Peter to dress Paul (variant of the previous point; in French, Déshabiller Pierre pour habiller Paul) — As the French proverb makes clear, it serves no purpose to undress Peter to dress Paul. One of them will be naked.
  4. Displacing the problem (or problem displacement) — Relocating the issue without eliminating it.
  5. Passing the buck — Shifting responsibility, and thus the problem, to someone else.
  6. Shuffling the deck chairs on the Titanic — Rearranging things cosmetically while the fundamental disaster remains untouched.
  7. Zero-sum game — Any “fix” just redistributes the pain without reducing the total problem, perhaps even exacerbating the problem so that everybody goes below zero.
  8. Refinance or rollover — Taking out a new loan (often on different terms) to pay off the existing one, leaving the principal undiminished. Or as the Indonesian proverb goes: gali lubang, tutup lubang — literally, “dig a hole, close a hole,” that is, go into debt to pay another debt.
  9. Trading one problem for another (or swapping/substituting/switching one problem for another) — Exchanging issues without real improvement.
  10. Filling one hole by digging another — You’ll need to dig the second hole at least as deep to fill the first.
  11. Kicking the can down the road — Delaying the problem by shifting it forward in time, leaving it to others to deal with the can, but it’s still the same can.
  12. Juggling accounts — As in juggling credit card payments that leave the total to be paid unchanged and perhaps incurring even greater cost given interest and penalties.
  13. Shell game — Deceptively shifting things around to keep them from coming to light (but those things are all still there).
  14. Dismantle the east wall to repair the west wall — Chinese proverb (chāi dōng qiáng bǔ xī qiáng), which describes a makeshift or temporary solution that fixes one problem by creating or worsening another.
  15. Squeezing the balloon — Applying pressure in one place only makes the bulge pop out somewhere else; the problem changes location, not substance.
  16. Externalizing the cost — Shifting the cost to others (the public, customers, future generations, the environment) so the original actor appears relieved while the cost remains.
  17. Borrowing from tomorrow — Covering today’s shortfall by consuming future resources, leaving the underlying deficit in place and often making it worse later.
  18. Eating your seed corn — Solving an immediate problem by consuming what was meant to secure the future; relief now, deeper trouble later.
  19. Sweeping it under the rug — Hiding the problem from view rather than resolving it; the issue remains, only less visible.
  20. Playing whack-a-mole — Knocking down one manifestation of the problem only to have it reappear elsewhere, often repeatedly and with no lasting gain.

Need I go on? I could go on. But the point should by now be proven beyond any shadow of doubt. Offloading is a fundamental principle of how the world works. It is self-evident, true, and obvious. Moreover, to deny it is fallacious. In fact, its denial, whether explicit or tacit, may rightly be called the “offloading fallacy.”

The offloading fallacy is rife in evolutionary thinking. Consider, for instance, the following exchange I had with Darwinian apologist Eugenie Scott on Peter Robinson’s program Uncommon Knowledge. Scott and I were discussing evolution and intelligent design when Robinson asked about monkeys, given enough time, producing the works of Shakespeare by randomly typing at a typewriter.

Scott responded by saying that this example fails to capture the power of Darwinian evolution: the monkey’s typing merely produces random variation, but Darwin adds to this natural selection, which acts like a technician who stands behind the monkey and whites out every mistake the monkey makes in typing Shakespeare.

At this point, alarm bells should go off for anyone with even the most rudimentary knowledge of the offloading fallacy. Indeed, where do you get a technician who knows enough about the works of Shakespeare to white out mistakes in the typing of Shakespeare? What are the qualifications of this technician? How does the technician know what to erase? Scott never said. She merely offloaded the problem from the monkey to the technician: The monkey’s success at typing Shakespeare is explained only momentarily, but at the cost of leaving unexplained the technician who corrects the monkey’s typing.

Or consider an example from Richard Dawkins’ The Blind Watchmaker. There he claims to show how natural selection can create information via his well-known METHINKS IT IS LIKE A WEASEL computer simulation. This target phrase is 28 characters long, with each character drawn from 27 possibilities (26 capital letters plus the space). Purely random sampling would therefore, on a given trial, have a probability of only 1 in 27^28, or roughly 1 in 10^40 (one in ten thousand trillion trillion trillion), of achieving it. In evolving METHINKS IT IS LIKE A WEASEL, Dawkins’ simulation overcame this improbability by carefully choosing a fitness landscape that assigns higher fitness to character sequences sharing more letters with the target phrase.

Thus, in place of pure randomness, Dawkins substituted a hill-climbing algorithm with exactly one peak and with a clear way to improve fitness at any place away from the peak (smooth and increasing gradients all the way). But where did this fitness landscape come from? Such a fitness landscape exists for any possible target phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins explains the evolution of METHINKS IT IS LIKE A WEASEL in terms of a fitness landscape that with high probability allows for the evolution to this target phrase. Yet he leaves the fitness landscape itself unexplained. In so doing, he commits the offloading fallacy.
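To make the setup concrete, here is a minimal sketch of a WEASEL-style hill climber. It is not Dawkins’ original code (which he did not publish in full); the mutation rate and brood size below are illustrative assumptions, not values from the book.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "  # 27 characters: A-Z plus space

def fitness(candidate: str) -> int:
    """Number of positions at which the candidate matches the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def weasel(mutation_rate: float = 0.05, brood_size: int = 100) -> int:
    """Hill-climb toward TARGET; return the number of generations taken."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        brood = [
            "".join(
                random.choice(ALPHABET) if random.random() < mutation_rate else c
                for c in parent
            )
            for _ in range(brood_size)
        ]
        # Selection: keep the fittest string seen this generation (never regress)
        parent = max(brood + [parent], key=fitness)
        generations += 1
    return generations

print(weasel())  # typically well under 100 generations, vs. ~27**28 blind trials
```

The sketch makes the offloading worry vivid: convergence is fast only because the fitness function scores each candidate against the very target the search is supposed to find.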

I submit that all of Darwinism, insofar as it claims to explain the emergence of complexity from simplicity (and this is its preeminent claim to fame), is but an example of the offloading fallacy, and one that is not even thinly disguised. It is truly amazing how successful this fallacy has been at duping the biological community in particular and the scientific community in general (don’t get me started about the use of this fallacy in physics and cosmology).

But once scientists become committed to materialism, which is the most virulent form of atheism, Darwinism becomes the only game in town. In that case, when it comes to biological origins, something very close to Darwinism must be true. And so, materialistic scientists embrace the offloading fallacy even though in any other context they would reject it. Or as the Psalmist put it, “The fool has said in his heart there is no God.”

All of Darwinism, insofar as it claims to explain the emergence of complexity from simplicity (and this is its preeminent claim to fame), is but an example of the offloading fallacy, and one that is not even thinly disguised.

About 200 years ago, the philosopher Arthur Schopenhauer remarked, “It would be a very good thing if every trick could receive some short and obviously appropriate name, so that when a man used this or that particular trick, he could at once be reproved for it.” The offloading fallacy is one such trick and its name meets Schopenhauer’s requirement for a “short and obviously appropriate name.”

So where does conservation of information fit into all of this? It is an example of the offloading principle applied to information. Information is about narrowing possibilities. The more the possibilities are narrowed, the more information is conveyed. If I tell you I live in the US, I’ve narrowed the possibilities of where I am in the world. But if I tell you I live 40 miles north of Dallas, I’ve narrowed the possibilities considerably more, and so have given you considerably more information about my location.

The usual way to measure the narrowing of possibilities is through probabilities. Consider poker hands, and let’s compare a hand with a single pair to a hand with a royal flush. There are 2,598,960 possible poker hands. Of these, 1,098,240 contain a single pair, whereas 4 form a royal flush. The probability of getting a single pair is thus about 0.42, whereas the probability of getting a royal flush is 1 in 649,740, or about 0.00000154. The smaller probability of the royal flush indicates that learning a hand came up as a royal flush conveys far more information than learning it came up as a single pair.

Probabilities thus represent information. The smaller the probability, the more information. Information theorists prefer to represent increasing amounts of information by larger numbers, and so they measure information by taking a negative logarithm (to the base 2) of probabilities. This turns probabilities into numbers of bits, with smaller probabilities indicating greater numbers of bits. Thus getting 20 heads in a row with a fair coin corresponds to a probability of 1 in 2^20 (or roughly 1 in 1,000,000) and equivalently to 20 bits of information.
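These numbers are easy to verify. The following sketch (my illustration, not from the original article) counts the poker hands directly and converts the probabilities to bits:

```python
from math import comb, log2

total = comb(52, 5)  # 2,598,960 five-card hands

# One pair: choose the paired rank, its 2 suits, 3 other ranks, and their suits
one_pair = comb(13, 1) * comb(4, 2) * comb(12, 3) * 4**3   # 1,098,240
royal_flush = 4                                            # one per suit

def bits(p: float) -> float:
    """Shannon self-information: -log2(probability)."""
    return -log2(p)

p_pair = one_pair / total        # ~0.42     -> ~1.24 bits
p_royal = royal_flush / total    # ~1.54e-6  -> ~19.3 bits
p_heads20 = (1 / 2) ** 20        # 20 heads in a row -> exactly 20 bits

for name, p in [("pair", p_pair), ("royal flush", p_royal), ("20 heads", p_heads20)]:
    print(f"{name}: probability {p:.8f}, {bits(p):.2f} bits")
```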

Let’s now look more closely at what happens with probabilities when information is offloaded. We’ve seen what’s at stake here in the METHINKS IT IS LIKE A WEASEL example of Dawkins. Generating this sequence by pure randomness was extremely improbable (1 in 10^40). But once a suitable evolutionary (hill-climbing) algorithm was introduced, the probability essentially went up to 1—it was no longer improbable but became virtually certain.

The certain event is always assigned a probability of 1 and thus contains zero information (the certain event is guaranteed to happen, so you learn nothing by learning it happened). But there was a cost to this certainty. The hill-climbing algorithm was one of many algorithms that could have been tried. Getting the right algorithm that with high (or certain) probability generates METHINKS IT IS LIKE A WEASEL is itself highly improbable.

It is a basic fact about probabilities (indeed, a corollary of the Law of Total Probability) that for events, objects, structures, or hypotheses E and F, the following inequality holds:

P(E|F) × P(F) ≤ P(E)
Here P(E) and P(F) denote the probability of E and F respectively. P(E|F) is a conditional probability. It signifies the probability of E given that we know F holds. E may thus denote rolling a fair die and getting a six. Its probability is therefore 1/6. F may then denote rolling an even number. If we know that an even number has been rolled, the probability of getting a six is then 1/3. In this case P(E|F) = 1/3.
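As a sanity check, here is a small sketch (again my illustration) that verifies the inequality on the die example by direct enumeration:

```python
from fractions import Fraction

die = set(range(1, 7))     # outcomes of a fair six-sided die
E = {6}                    # event: rolled a six
F = {2, 4, 6}              # event: rolled an even number

def P(event):
    """Probability of an event under the uniform distribution on the die."""
    return Fraction(len(event & die), len(die))

P_E_given_F = Fraction(len(E & F), len(F))   # 1/3

# Offloading inequality: P(E|F) * P(F) <= P(E)
assert P_E_given_F * P(F) <= P(E)
print(P_E_given_F * P(F), "<=", P(E))        # 1/6 <= 1/6 (equality: E lies inside F)
```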

This inequality may be called the “offloading inequality.” To see how it works, consider the following examples:

  1. E = it’s wet outside; F = it’s raining outside
  2. E = monkey types Shakespeare; F = lab tech whites out mistakes
  3. E = arrow hits target; F = favorable wind guides arrow to target
  4. E = find needle in a haystack; F = find procedure for finding needle
  5. E = safe cracks open on first try; F = insider slipped you the combination
  6. E = golfer makes a hole-in-one; F = hidden guide rail funnels the ball to the cup
  7. E = student guesses all answers right on a hard exam; F = answer key was leaked in advance
  8. E = treasure hunter finds the buried chest on the first dig; F = an old map gives the exact spot
  9. E = random mutation yields a working enzyme; F = the search is confined to a tiny preselected set of viable sequences
  10. E = message in a bottle reaches one particular person; F = ocean currents and release point were carefully engineered to carry it there

Examples like these are easily multiplied. The point to note is that as E’s probability is raised via F, the thing F that is doing the raising itself shares in any improbability associated with E. Put differently, any improbability of E gets offloaded to an improbability of F to the degree that F raises E’s probability.

To appreciate how this works, let’s walk through the first of these examples (I leave it as an exercise to the reader to run the offloading inequality through the rest of these examples). Suppose we discover it’s wet outside. That’s E. Next suppose that F denotes that it’s raining outside. Then P(E|F), the probability that it’s wet outside given that it’s raining outside, is 1 (obviously!). But this means that P(F), the probability that it’s raining outside, is less than or equal to the probability of it being wet outside. This, of course, makes perfect sense—there are other ways for it to be wet outside besides its raining outside, such as the operation of a sprinkler system.
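The same conclusion can be checked numerically. Here is a toy Monte Carlo model of the wet/rain example; the 30% rain and 20% sprinkler probabilities are made-up values for illustration only:

```python
import random

def sample_day():
    """One simulated day: rain or a sprinkler can each make the ground wet."""
    rain = random.random() < 0.30        # hypothetical P(rain)
    sprinkler = random.random() < 0.20   # hypothetical P(sprinkler runs)
    wet = rain or sprinkler              # rain guarantees wetness: P(wet | rain) = 1
    return rain, wet

N = 100_000
days = [sample_day() for _ in range(N)]
p_rain = sum(rain for rain, _ in days) / N
p_wet = sum(wet for _, wet in days) / N
print(f"P(rain) ~ {p_rain:.3f}  <=  P(wet) ~ {p_wet:.3f}")  # about 0.30 <= 0.44
```

Since rain makes wetness certain while sprinklers provide another route to wetness, P(rain) can never exceed P(wet), exactly as the offloading inequality requires.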

The other examples all follow the same pattern, though in some cases P(E|F) does not equal 1. In that case, P(F) need not be less than or equal to P(E). Nonetheless, the probability of obtaining E by first obtaining F and then, by means of F, obtaining E, a probability calculated as

P(E|F) × P(F)

will still be less than or equal to P(E). The bottom line here is that offloading the probability (information) of E to F does nothing to improve the probability of getting E. The offloading inequality is thus a precise mathematical way of stating the offloading principle in an information-theoretic context.

All of evolutionary theory, insofar as it seeks to dispense with teleology, is an exercise in cosmic bargain hunting, trying to work the laws of nature so that the information it needs to explain gets offloaded to less intensive forms of information. If you will, it always seeks to explain E by F where F is easier to account for than E.

To see that I’m not making this up, consider what may be the most revealing claim in Richard Dawkins’ The Blind Watchmaker: “The one thing that makes evolution such a neat theory is that it explains how organized complexity can arise out of primeval simplicity.” Complexity (more information) in the present is thus, according to Darwin’s theory, offloaded to simplicity (less information) in the past. The offloading principle in general, and the offloading inequality in particular, shows that this move to explain complexity from simplicity doesn’t—and indeed can’t—work.

Naturalistic evolutionists are in effect looking for the laws of nature to do charity work—to offload their information load to a place where the load is less. But that can’t be done. The principle is universal and the math doesn’t allow it. In particular, the offloading inequality shows that the informational load remains at least as great (and often grows greater) as one tries to shift it onto whatever is invoked to explain the thing that initially needed explaining.

It’s no accident that our language and metaphors across the globe capture the illegitimacy of offloading—that it can’t be made to work and that to pretend to have made it work is at best self-delusion and at worst outright fraud. Darwinismus delendus est! (Darwinism must be destroyed!)

For the full technical details about conservation of information, see my free monograph at Bio-Complexity titled The Law of Conservation of Information: Search Processes Only Redistribute Existing Information.


