
Peter Pike and Calvinist Information Theory

Peter Pike's wrestling with the concepts of information theory and algorithmic complexity over here. He thinks there's something fishy with the idea of random strings being more complex than repetitive or structured strings. Let's take a look at his analysis...
Unfortunately for T-Stone, if he paid attention to what he has written here he’d see that he’s soundly refuted Dawkins. After all, if maximal randomness is equivalent to maximal complexity, then it is easy for me to write a program that will generate completely random output.
That's quite a claim, Peter. Do you know what's involved in writing a program that generates completely random output? It's a tricky problem, and "complete randomness" ends up requiring the program to access some physical process external to the virtual environment -- radioactive decay events are often chosen as the source of random input. The system calls in your OS's standard libraries are pseudo-random, not "completely random", and without additional code to address the problem they're quite predictable and repeatable in many cases. Even then, if you look at the code invoked by a single call to rand(), you'll see that even pseudo-random data generation doesn't come for free.
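A minimal sketch of what I mean (my example, not Peter's): call rand() without ever seeding it, and you get the identical "random" sequence on every run of the program.

#include <iostream>   // std::cout
#include <cstdlib>    // rand()

int main()
{
    // No srand() call: the runtime behaves as if seeded with 1,
    // so this "random" output is identical on every run.
    for (int i = 0; i < 5; i++)
        std::cout << rand() << "\n";
}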

But it's important to keep our points of reference intact here. It's the design argument that objects to the idea of emergent complexity, while materialist interpretations of our history promote the idea that complexity emerges, and that in some cases simpler configurations give rise to more complex configurations. If humans can point back to single-celled organisms as their ancestors, relying on impersonal, natural processes, clearly there are mechanisms and dynamics involved that will produce increasing complexity over time. This is why science supposes the design argument is a vacuous one. Dawkins' "Ultimate 747" argument explicitly opposes the design argument, appealing to "crane" processes and decrying "skyhook" processes as absurdities.
In other words, it is easy for me—a person who is not maximally complex—to produce a program with output that is maximally complex. Thus, if we want to play T-Stone’s game and use complexity in this sense, then Dawkin’s argument must be surrendered.
This is wrong in several ways. First, you are not a 1,000 x 1,000 pixel grid, Peter. So while such a grid populated by random values is maximally complex for its size, it doesn't have anything like the scope of a system as complex as a human being; in absolute terms, it falls short by many orders of magnitude. The random grid is as complex as it can be, for its size, but it is infinitesimal next to a complete description of a human.
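To put rough numbers on it (my own back-of-the-envelope, not anything Peter supplied): a 1,000 x 1,000 grid at 8 bits per pixel tops out at about 10^6 bytes of information. The human genome alone, at roughly 3 billion base pairs and 2 bits per base, runs to something like 7.5 x 10^8 bytes -- and the genome is only a sliver of what a complete description of a human would require.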

Second, there's a profound difference between a program that produces random output, and a program that (re)produces a given output that in this case happens to be random. For example, this bit of code has almost no algorithmic complexity:
#include <iostream>   // std::cout
#include <cstdlib>    // srand(), rand()
#include <ctime>      // time()

int main()
{
    srand(static_cast<unsigned>(time(nullptr)));   // seed from the clock so each run differs
    for (int i = 0; i < 1000; i++)
    {
        for (int j = 0; j < 1000; j++)
            std::cout << rand() << ' ';
    }
}
This program will produce a 1,000 x 1,000 output of random integers (or pixel values), but it will produce a different output every time. Algorithmic complexity is a measure of the instructions needed to render a given, specific output, so this code would be "disqualified" in terms of measuring complexity, Kolmogorov-style: it is incapable of rendering the output requested of it. In order to produce a given string, one that is provided and is non-compressible (random), the program needs to "echo" every single value, making the program scale linearly with the size of the output. So, in order to reproduce the string "99585249515829886853", something like this is needed programmatically:
#include <iostream>   // std::cout

int main()
{
    // Reproducing a given random string means echoing every single value
    std::cout << '9';
    std::cout << '9';
    std::cout << '5';
    std::cout << '8';
    std::cout << '5';
    std::cout << '2';
    std::cout << '4';
    std::cout << '9';
    std::cout << '5';
    std::cout << '1';
    // ... etc, shortened for brevity
}
So, in order to achieve the algorithmic complexity needed for any given random output, Peter would need to "hand-code" every value in the output. This is why we say a random string has maximal algorithmic complexity -- it defies algorithmic abstraction, and requires a "hand-made" output echo for every discrete value.
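To make the contrast vivid, here's a sketch of my own of the opposite case: a string of a million identical digits is long, but it compresses down to a handful of instructions -- which is precisely what a random string of the same length refuses to do.

#include <iostream>   // std::cout

int main()
{
    // A million '9's: a long but highly compressible string, generated
    // by a few instructions. A random string of the same length would
    // force us back to echoing each digit one by one, as above.
    for (int i = 0; i < 1000000; i++)
        std::cout << '9';
}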

Third, Peter has gotten so wrapped around the axle of information theory that he has apparently lost track of who is arguing for the plausibility of emergent complexity. Just so we're straight, Peter, it's the materialist explanation that embraces emergent complexity, the progression from simpler configurations to more complex ones, without any personal oversight or intervention. It is theistic arguments that cannot accept emergent complexity that lead to absurdities -- "skyhooks", as Dawkins calls them.
If I can make a program that is more complex than I am, then God can create a universe that is more complex than He is.
That may be! But it proves too much for the theist, as it makes God superfluous -- and demonstrating the necessity of God was what the design argument aimed at, remember. If simpler can give rise to complex, then we have Dawkins' "crane", and the design argument is defeated. A simple singularity can unfold into unfathomable complexity, and that is what materialist cosmologies and evolutionary biologies propose.
FWIW, I disagree with T-Stone’s version of information and complexity.
Well, then this would be a fine opportunity for Peter to show he isn't just BSing once again, and give us his "version of information and complexity". How do you define 'information' and 'complexity', Peter? How do you measure each?
And despite what his post would lead you to believe, the idea that “maximal randomness = maximal complexity” is not true for all information theories.
The competing theories are conspicuous in their absence here, Peter. What alternative information/complexity theory do you embrace or propose, if not that of Shannon, Kolmogorov and Chaitin? If you've got something better, or even roughly equivalent, you'll be famous by Friday.
And in fact, if I were to use T-Stone’s definition of complexity then I would ask him to explain not why there is so much complexity in the universe, but rather why there is so little complexity.
Peter, how little is there? And how much do you calculate there should be? If you give me the calculations for your expectations, and your calculations for the actuals, I can try to account for the difference by looking at your maths. As it is, I suspect you have no clue what you are asking for.
If complexity = randomness, then it doesn’t take a rocket scientist to realize that there’s a lot of the universe that is not random, and therefore there is a lot of this universe that is not complex. Under his information theory, randomness is the default. We do not need to explain random data. We do need to explain structured and ordered data. Therefore, we do not need to explain complexity; we need to explain non-complexity.
I have no idea what "randomness is the default" is supposed to mean -- it's a seemingly random thing to assert here. Be that as it may, no one with any expertise or even casual knowledge of the sciences involved is promoting the idea that the "universe is random". That's a creationist bogeyman, a concept alien to science. Our understanding of the universe identifies sources of randomness combined with uniform constraints that provide structure: random "inputs" filtered through structuring processes, producing (often) complex outputs. The timing of decay events in a radioactive isotope is random at the event level, but the physical laws that randomness operates within produce a very nice exponential curve when you chart the production of daughter isotopes over time and across statistically significant numbers of atoms. Randomness driving structured output through physical constraints.
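If a toy sketch helps (the numbers are mine and purely illustrative): give every atom a fixed small chance of decaying in each time step. Each individual decay is random, yet the aggregate count traces out the familiar smooth decay curve.

#include <iostream>   // std::cout
#include <cstdlib>    // rand()

int main()
{
    int atoms = 100000;                 // starting population
    for (int t = 0; t < 50; t++)
    {
        int decayed = 0;
        for (int i = 0; i < atoms; i++)
            if (rand() % 100 == 0)      // ~1% chance per atom per step: random event
                decayed++;
        atoms -= decayed;               // uniform constraint shapes the aggregate
        std::cout << "t=" << t << " remaining=" << atoms << "\n";
    }
}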
T-Stone is just giving a sleight of hand here. It would be like a mathematician saying "a > b" and having T-Stone say, "The greater than sign is inverted with the less than sign, therefore 'a > b' means 'a is less than b'."
This is complete nonsense. What is the "sleight of hand" here, Peter? I've not inverted any operator semantics, nor can I identify anything I've said that maps to "operator inversion". I'm deploying the concepts of information theory and algorithmic complexity in completely uncontroversial fashion, using them as they are used by people who understand and work with them every day, for purposes mundane and sublime.
But as soon as he engages in his sleight of hand, we respond: "If the greater than sign is inverted with the less than sign, then 'a > b' is no longer true, rather 'a < b' is. Inverting the operator without inverting the operands does not refute the original expression."
Complete gibberish, not matched to anything I've said. Pathetic hand-waving.