User:Average/Big O notation
Big O notation (e.g., O(n)) is a simple way to measure complexity. It comes from computer science, where it's often used to compare the performance of algorithms. Just as the phrase "rough order of magnitude" is used in physics to get a ballpark estimate of a formulaic interaction, big O is used to turn complex computational behavior into a simple macro-level concept. It's a lovely shorthand for talking about the complexity of algorithms. The value inside (e.g., n^2) means that the function's cost is asymptotic to that curve.
Take sorting algorithms. Say you need to sort a list of 100 numbers, perhaps more. Since you have to look at every number, you know the complexity has to be at least O(n) (where n=100), because once you examine each number you still have to compare and swap it as well. One ignores the actual value (100) and generalizes it (to any n), knowing that the error will be bounded asymptotically by some constant factor only. In sorting the list with the crudest algorithm, you would compare each element to every other until it's all sorted. Since you don't have to compare a number to itself, each of the 100 numbers gets compared against the other 99, which is 100*99 comparisons. That's roughly n(n-1), or n^2 - n, comparisons. But, in Big O, you take the limit as n grows without bound. In so doing, the squared term dominates the linear term to the point that it becomes insignificant. This means the count is asymptotic to n^2. So, to show this, we say that the algorithm is O(n^2). That's not very good performance as n grows large. The best comparison-based sorting algorithms, such as mergesort and heapsort (and quicksort in the average case), are O(n log n).
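To make the counting argument above concrete, here is a minimal sketch (not from the original text) that runs the crudest compare-everything-to-everything sort on 100 numbers and counts its comparisons; the function name and test list are my own illustrative choices.

```python
import math

def selection_sort_comparisons(nums):
    """Crudest sort: compare each element against every later one.
    Returns (sorted list, number of comparisons made) to show O(n^2)."""
    nums = list(nums)
    comparisons = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            comparisons += 1
            if nums[j] < nums[i]:
                nums[i], nums[j] = nums[j], nums[i]
    return nums, comparisons

n = 100
sorted_nums, count = selection_sort_comparisons(range(n, 0, -1))
print(count)                     # n*(n-1)/2 = 4950 distinct pairs
print(round(n * math.log2(n)))  # ~664: the scale an O(n log n) sort needs
```

Note that comparing each *unordered pair* once gives n(n-1)/2 = 4950 rather than the full n(n-1); either way the n^2 term is what survives the limit.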
A hash table turns a linear (unsorted) search from O(n) into O(1) computation. You use "1" to say that the performance is independent of n, the number of items you're searching through. Of course, that speed comes at the cost of extra memory for the table. So sometimes these trade-offs are qualified as Big-O in memory versus Big-O in computation.
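A quick sketch of that trade-off, using Python's built-in hash-based set (the sizes and variable names here are my own illustrative assumptions):

```python
import timeit

n = 100_000
as_list = list(range(n))  # linear search: O(n) per membership check
as_set = set(as_list)     # hash table: O(1) average per membership check

target = n - 1  # worst case for the linear scan: last element

linear_time = timeit.timeit(lambda: target in as_list, number=100)
hashed_time = timeit.timeit(lambda: target in as_set, number=100)
print(linear_time > hashed_time)  # the hash lookup wins as n grows
```

The set also consumes more memory per element than the bare list does, which is the computation-versus-memory trade the paragraph describes.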
But here's something interesting: you can apply the basic concept in the inverse -- to simplify understanding of complex systems. I'll borrow Big-Ω (omega), which in computer science denotes an asymptotic lower bound, and use it here for the potential value of a system. Take capitalism and economic theory. In a free market, any individual can set up a business. That's Ω(n) value to start, where n is the population. But add roads where everyone can reach everyone else, and you've got Ω(n^2) value generation. Now I can trade my eggs at the farm for your textiles in town. But that's just the start. Add money, a neutral medium of exchange rather than barter, and you've just increased it to Reed's Law growth, Ω(2^n), since any subgroup of people can now form and transact. Now you can build civil strategies of business relationships, with urban development cross-fertilizing business values. The theoretical growth rate is Ω(n^n), reached by the addition of information networks, where people can find out about all the products available to them (in constant time, Ω(1)).
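To give a feel for how far apart these claimed growth rates are, here is a small sketch (my own illustration, not part of the argument) tabulating each stage's value function for a few population sizes:

```python
# Each column is one stage from the paragraph above:
# n (businesses), n^2 (roads), 2^n (money / Reed's Law), n^n (info networks).
for n in [2, 4, 8, 16]:
    print(f"n={n:>2}  n^2={n**2:>4}  2^n={2**n:>6}  n^n={n**n}")
```

Already at n=16 the n^n column has 20 digits while n^2 is only 256, which is the whole point of distinguishing these regimes.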
Unfortunately, our present economy has been stupid and reduced the power of capitalism to Ω(n log n): by concentrating value production into huge, monolithic multinationals, by subsidizing transportation through fossil fuels and other unearned labor, and by reducing the natural inclination of individuals to produce value themselves (there went your Reed's Law value production). Television (now mostly cable) also gives preferential treatment to large customers.
So, just for emphasis: the potential of a free-market economy (socialist or capitalist), outside corruption and stupidity, is Ω(n^n) value -- that is HUGE, but no one outside this wiki knows how to deliver it -- they're still hooked on the bullshit of the mass-marketed mass production of Henry Ford (cf. Dr. Seuss).
Want more? See Zen Code for quantifying the untapped value of the internet which has the same Big-Omega value.
Marcos's Law: the theoretical power of two networks of order Ω(n^2) and Ω(2^n) working in tandem with one another provides Ω(n^n) value generation. The phrase "in tandem" means that at least one of those dimensions has to be able to transit to any other position in the 3-d space in roughly constant time, and that the arena is topologically bounded such that interactions that go off in one direction come back eventually, perhaps from another of the dimensions. This equates to four dimensions (or axes) of interaction (contained within a computer+network system, a planetary ecosystem, a business-monetary network system, etc.).
Your body-mind would be an example. The mind would be the 2^n, as it has the ability to create arbitrary groupings, with the body providing the n^2 value, perhaps by its ability to move in 2 dimensions (a bird would be n^3 under that supposition). An interesting consideration is how one of these four dimensions gets constant-time transit to any other to create the n^n value. And the answer is that the mind's self-awareness allows that transit. I'd surmise that this is one more dimension than what animals have -- an ability gained from eating of the Tree of Knowledge, you might say.
BTW, FYI, and just as a guess: two n^n networks (like the free market coupled with the internet revolution) equate to ⁿn (n tetrated to the n). The humans in the middle act as the free agents giving constant transit time between the systems. That's n^n^n^... (n to the n to the n to the..., n times). BOOM. Now you not only have a revolution, you have a 26,000-year divine plan. Can you handle THAT?
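Since tetration is unfamiliar to most readers, here is a minimal sketch of the operation (the function name is my own); a tower is evaluated from the top down, i.e., right-associatively:

```python
def tetrate(n, height):
    """Build the power tower n^n^...^n with `height` copies of n,
    evaluated right to left (top of the tower first)."""
    result = 1
    for _ in range(height):
        result = n ** result
    return result

print(tetrate(2, 3))  # 2^(2^2) = 16
print(tetrate(3, 2))  # 3^3 = 27
```

"n tetrated to the n" is then tetrate(n, n); it outgrows n^n as quickly as n^n outgrows n.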
And, shit, if that's not enough for you, I just named four n^n systems (can you spot them?), reducing to two tetrated systems, creating one ???. What's your nomenclature for that, sib?
- Credit for introducing me to the concept of Reed's Law: Stephen Guerin of Santa Fe Complex.