Anyone who follows technology is familiar with Moore's Law and its many variations, and has come to expect the price of computing power to halve every 18 months. But many people don't see the true long-term impact of this beyond the need to upgrade their computer every three or four years. To not internalize this more deeply is to miss investment opportunities, grossly mispredict the future, and be utterly unprepared for massive, sweeping changes to human society.
Today, we will introduce another layer to the concept of Moore's Law-type exponential improvement. Consider that on top of the 18-month doubling times of both computational power and storage capacity (an annual improvement rate of 59%), both of these industries have grown by an average of approximately 15% a year for the last fifty years. Individual years have ranged between +30% and -12%, but let's say these industries have grown large enough that their growth rate slows down to an average of 12% a year for the next couple of decades.
So, we can crudely conclude that a dollar gets 59% more power each year, and 12% more dollars are absorbed by such exponentially growing technology each year. If we combine the two growth rates to estimate the rate of technology diffusion simultaneously with exponential improvement, we get (1.59)(1.12) = 1.78.
The Impact of Computing grows at a screaming rate of 78% a year.
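As a rough sketch of this arithmetic (the 59% and 12% figures come from the paragraphs above; the starting index of 1.0 is arbitrary, not a measurement), in a few lines of Python:

# Combine per-dollar improvement with industry growth, per the figures above.
power_growth = 2 ** (12 / 18) - 1      # 18-month doubling -> ~59% per year
spending_growth = 0.12                 # assumed industry growth rate
impact_growth = (1 + power_growth) * (1 + spending_growth) - 1
print(f"Combined annual growth: {impact_growth:.0%}")   # ~78%

impact = 1.0                           # arbitrary index for the base year
for year in range(2006, 2016):
    print(year, round(impact, 1))
    impact *= 1 + impact_growth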
Sure, this is a very imperfect method of measuring technology diffusion, but many visible examples of this surging wave present themselves. Consider the most popular television shows of the 1970s, such as The Brady Bunch or The Jeffersons, where the characters had virtually all the household furnishings and electrical appliances that are common today, except for anything with computational capacity. Yet, economic growth has averaged 3.5% a year since that time, nearly doubling the standard of living in the United States since 1970. What has changed during this period to induce those economic gains is obvious.
In the 1970s, there was virtually no household product with a semiconductor component. Even digital calculators were not affordable to the average household until very late in the decade.
In the 1980s, many people bought basic game consoles like the Atari 2600, had digital calculators, and purchased their first VCR, but only a fraction of the VCR's internals, maybe 20%, consisted of exponentially deflating semiconductors, so VCR prices did not drop that much per year.
In the early 1990s, many people began to have home PCs. For the first time, a major, essential home device was pegged to the curve of 18-month halvings in cost per unit of power.
In the late 1990s, the PC was joined by the Internet connection and the DVD player, bringing the number of household devices on the Moore's Law-type curve to three.
Today, many homes also have a wireless router, a cellular phone, an iPod, a flat-panel TV, a digital camera, and a couple more PCs. In 2006, a typical home may have as many as 8 or 9 devices which are expected to have descendants that are twice as powerful for the same price, in just the next 12 to 24 months.
To summarize, the number of devices in an average home that are on this curve, by decade :
1960s and earlier : 0
1970s : 0
1980s : 1-2
1990s : 3-4
2000s : 6-12
If this doesn't persuade people of the exponentially accelerating penetration of information technology, then nothing can.
One extraordinary product provides a useful example, the iPod :
First Generation iPod, released October 2001, 5 GB capacity for $399
Fifth Generation iPod, released October 2005, 60 GB capacity for $399, or 12X more capacity in four years, for the same price.
Total iPods sold in 2002 : 381,000
Total iPods sold in 2005 : 22,497,000, or 59 times more than 2002.
12X the capacity, yet 59X the units, so (12 x 59) = 708 times the impact in just three years. The rate of iPod sales growth will moderate, of course, but another product will simply take up the baton, and have a similar growth in impact.
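Spelling out that multiplication (the figures are copied from the lines above; the "impact" here is just capacity times units, not a measured quantity):

capacity_ratio = 60 / 5                    # 5 GB (2001) -> 60 GB (2005), same $399
unit_ratio = 22_497_000 / 381_000          # 2002 -> 2005 unit sales, ~59x
impact_multiple = round(capacity_ratio) * round(unit_ratio)
print(impact_multiple)                     # 708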
Now, we have a trend to project into the near future. It is a safe prediction that by 2015, the average home will contain 25-30 such computationally advanced devices, including sophisticated safety and navigation systems in cars, multiple thin HDTVs measuring more than 60 inches diagonally, networked storage that can house over 1000 HD movies in a tiny volume, virtual-reality-ready goggles and gloves for advanced gaming, microchips and sensors embedded into several articles of clothing, and a few robots to perform simple household chores.
Not only will Moore's Law ensure that these devices are over 100 times more advanced than their predecessors of today, but there will also be many more of them. This is the true vision of the Impact of Computing, and the shocking, accelerating pace at which our world is being reshaped.
I will expand on this topic greatly in the near future. In the meantime, some food for thought :
Visualizing Moore's Law is easy when viewing the history of video games.
The Law of Accelerating Returns is the most important thing a person could read.
How semiconductors are becoming a larger share of the total economy.
Economic Growth is Exponential and Accelerating, primarily due to information technology becoming all-encompassing.
Now you're talking. The acceleration of technology is the biggest story of our time.
I remember reading articles ten years ago questioning whether computers had really made any difference in productivity and not just made reports look typeset rather than typewritten. That concern has been laid to rest.
Posted by: hgwells | February 24, 2006 at 09:25 AM
What Goes Up Comes Around
So...What Goes Around Must Come Down?
If everyone is thinking alike, then somebody isn't thinking.
General George S. Patton
The Top Three Worst, Most Miserable Failed Technology Trends of the Last Twenty Years
Artificial intelligence. In the noble and epic struggle between hardware and software for who is the waggee and who is the wagger, the idea that machines should conform to the thinking and behavior of humans rather than expecting humans to master machine logic and geekspeak somehow got lost. For as long as I have been an arms-length observer of information science, we have had the capability and understood the value of inference engines. Why does everyone not see how profoundly Google gets IT?
Convergence of computers, communications, and content. The MIT Media Lab rose to prominence on the slide of three converging circles used by Nicholas Negroponte to raise billions in corporate funds for academic brainstorming and prototyping of new media technologies. A generation of young adults had such a swell time in grad school most could not stand working in those same restrictive corporations but showed they were paying attention by making their personal fortunes on options for the billions raised for Internet start-ups. They co-habit labs but they don't converge.
Convergence of personal computing and consumer electronics. This notion has had computers playing movies with surround sound speakers and interlaced displays and is about to see a lot of microprocessors, fat I/Os, recorders, and cache memory integrated into consumer electronics. They just won't converge, no matter how much lost money is used to threaten them to behave. Gary Kildall said at the Microsoft CD-ROM conference in 1985, "The degree to which people want to interact with their TVs has not been demonstrated."
The Top Five Best Behaved Technology Trends
Moore's Law. The functionality per chip (number of bits, transistors on a chip) and the performance (clock frequency in MHz x instructions per clock = millions of instructions per second or MIPS) double every 24 months. Looking ahead at the semiconductor roadmap, chip size is dropping by half every 5 years while areal density doubles every two. 1965 to 2020, so far. (A quick conversion of these doubling times into annual rates follows this list.)
Moore's Law's Corollary. Cost-per-function (microcents per bit or transistor) decreases 29% per year.
Metcalfe's Law. The utility of a network is proportional to the square of the number of nodes. Think the fax machine, the Internet, warchalking, podcasting and the chicken or the egg.
Schwerin's Law. Storage capacity increases an order of magnitude every five years. Holds for fixed, removable and solid state. 1985 to 2020, so far.
Kurzweil's Law. Each technological paradigm shift accelerates subsequent innovation, producing an exponential rate of change. Think ripples in a pool around a large heavy object.
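A quick conversion of the doubling times quoted above into annual rates, a minimal sketch in Python using nothing but the figures already given:

# Convert the doubling times and rates quoted above into annual percentages.
def annual_rate(doubling_years):
    return 2 ** (1 / doubling_years) - 1

print(f"Moore's Law, 24-month doubling: +{annual_rate(2):.0%} per year")        # ~41%
print(f"Cost per function, halving in 2 years: -{1 - 0.5 ** (1 / 2):.0%} per year")  # ~29%
print(f"Schwerin's Law, 10x per 5 years: +{10 ** (1 / 5) - 1:.0%} per year")    # ~58%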
I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you're being had.
Michael Crichton
How Aliens Cause Global Warming
Ditto consensus market forecasts. With a display of critical thinking comparable to what went into swallowing WMD, the media has bitten down on the blue laser standards war like it was real news. For shame. I will argue for the contrarian position this time around (I confess upfront I wrote in favor of standardization the last time around, in the DVD stand-off with MMCD, in my monthly for Information Today).
Here's why I am switching sides. Conventional wisdom has held that the industry must standardize on the technology and innovate on features, quality, and price/performance. And there is some evidence that the failure to standardize limited the technology. Just not enough.
The optical recording market in the 1980's was so badly balkanized that there was one and only one technology, the Gigadisc, that had more than one source of supply. I'll never forget Francis Pelletier, the Editor of Memoires Optiques, in the Comdex press room the year Thomson decided to quit making the Gigadisc, explaining in broken English (it was always hard for me to understand angry Frenchmen, so he used my language because he had to talk more slowly this way) how far that set back the magneto-optical disc industry. This is not a perfect corollary to blue laser since each side has or will have more than one vendor of hardware and media.
Removable storage finally united behind CD-RW, the most inferior of all the formats, because it traded off the fast access time of the other constant angular velocity disc architectures for a constant linear velocity format that allowed the recorded discs to play on a standard CD-ROM drive, which by that time had proved Metcalfe's Law. CD-ROM was also CLV because it had made the same trade-off to read CD-Audio discs, which by that time had also proved Metcalfe's Law.
Competition, good; monopoly, bad
Example #1: Silicon processors
In the olden days of consumer electronics, analog formats had more time to become ubiquitous. Vinyl records had thirty-five years. Videotape had thirty. That's an unfair statement because both technologies bore the burden of introducing the concept of buying audio and video content and building a home library. By the time the digital generations of CD and DVD replaced them, consumer electronics had assumed one and only one aspect implied in Negroponte's convergence diagram, semiconductor cycles.
Gordon Moore, co-founder of Intel, originally said chips double in performance every year. This slowed down to nearly two years during the fat years of the Wintel monopoly in the 1990s. Semiconductor analyst David Risley observed in his excellent history of the CPU at www.pcmech.com/show/processors/35/11 that the pace has accelerated again in the last several years, entirely due to the competition mounted by AMD against Intel's dominant position.
Example #2: Silicon Memory
On more or less the same semiconductor cycle, flash memory, or solid state memory chips, has never settled on a standard, and not just between NOR and NAND designs. There are half a dozen different NAND formats, including Secure Digital, Compact Flash, MultiMedia Card, Memory Stick, and USB flash. A universal reader/writer in most PCs now takes any of them so you can download and upload to your camera, MP3 player, smart phone, or PDA. This competition has resulted in rapid price reduction as well as technological evolution.
Not surprisingly, perhaps, solid state memory is the link between Schwerin's Law and Moore's Law. Both Moore's Law's cycle of doubling every two years and Schwerin's Law's order-of-magnitude increase every five years apply to semiconductor memory. Flash lags about an order of magnitude behind removable discs, which in turn lag one or two orders behind fixed disks.
Example #3: Audio Signal Carriers
Most content businesses have a love-hate relationship with new technology. The resistance put up by the music industry and movie industry respectively to CD-Audio and VHS would absolutely astonish anyone who first saw an account of the enormous wealth the two formats generated for the content owners. The movie industry was quite a bit more reasonable in the ramp up to DVD, insisting mainly on one standard and a good content scrambling system. Then they were all over it. Warren Lieberfarb, then head of Warner Home Video, was a tireless and articulate champion.
Unfortunately for the music industry, the record label execs were not all over DVD-Audio. The consumer electronics industry was distracted by the excellent DVD-Video market response, and the DVD Forum was so slow to finalize the audio spec that the first few years' worth of players were sold without the codec to play it. By the time they got around to it, the window of opportunity was shut, except for Sony, who stuck their foot in it with SACD.
Meanwhile, the consumer opted spontaneously for a lower-quality encoding format being distributed online by college kids with more time to rip CDs than money for a format that had not passed on 20 years of production cost efficiencies to the consumer. These MP3 files were stored on three media: memory chips, which have not been cost-reduced to the level of making prerecorded content distribution practical; hard drives, which are not removable; and on the same old CDs as data files playable on CD-ROM equipped PCs, in response to which the consumer electronics industry hurriedly shipped MP3-compatible audio CD players. The music execs were left with NO distribution medium, except that created by a Silicon Valley vegan, on which to resell their catalog.
Example #4: DVD recorders
Finally, we have the example of the recordable version of DVD.
Philips had won the compact disc standard in the early 1980's, out of sight of the major media, by inviting Sony to share the standard and the royalties; Sony in turn convinced the other Japanese CE manufacturers, each of whom had been working on some sort of digital optical disc, to join them, and the two administered the standard with a firm grip and quite a lot of brilliant strategizing. They had won all of the battles over supremacy of each CD variant with the exception of CD-I and Video-CD, which met with modest success. The others, CD-Audio, CD-ROM, and CD-RW, were over-the-top successful.
Toshiba partnered with Time Warner to help convince the movie studios to create MPEG2 versions of feature and catalog films. This was a much better idea than the MPEG1 video found in VideoCD. Being a whole new technology, not tied to the compact disc except in size and backward compatibility, was also an advantage. The discs supported Dolby Digital for 5.1 surround sound, which helped the consumer electronics industry sell home theatre systems to process and amplify the sound through 6 speakers. The gorgeous looking video helped sell big screen TVs, paving the way for HDTV even though the highest resolution DVD still fell short of HDTV resolution.
Inevitably Philips and Sony planned a three-pronged comeback: one prong by introducing a rival audio format, one by holding out on the DVD+R/RW format, and one by coming up with a blue laser version to feed the hungry HDTV pixels sooner than Toshiba and partners could or would counter. Philips and Sony had a fresh platter unencumbered by the legacy of DVD in designing Blu-Ray, just as Toshiba had had with DVD. They also demonstrated again that a standard is not really necessary in bringing a generation to market, when a multi-media device can be made to read and write both. All DVD recorders sold now read and write both the DVD Forum's DVD-R/RW and the Philips/Sony version, DVD+R/RW.
Two catchy names, two compelling stories, two patent pools. What goes around comes down? Too soon to tell. Both groups have solid companies with good balance sheets, strong management, and lots of MBA students who studied game theory in business school.
The case of DVD recorders also highlighted the change in consumer electronics manufacturing. There is an additional link in the chain, with a separation of R&D from manufacturing brought on by the aggressive cut-costs-to-capture-market-share strategy of the Chinese and of willing retailers like WalMart. But the Chinese cost advantage is eroding: social needs have been postponed to reduce massive unemployment, and financial reform is unavoidable. The DVD recorders bear the royalties of all relevant patents, plus and dash; the major markets stop the output of those who do not pay the royalty; and yet DVD recorders as storage devices sell.
Conclusion
The next few years ought to be good ones for consumer electronics, personal computers, and intellectual property, separately if not convergently. The worldwide economy in 2005 is in recovery, with sustainable GDP growth rates here and bubbles elsewhere: Japan 3%, India 8%, Western Europe 2.4%, Eastern Europe 7%, US 4.3%, China 7.8%.
In the US, the post-election economy is not so bad, with inflation slowing, credit rates rising only 1.5%, exchange rates continuing to decline and stimulating exports, labor productivity growth around 4%, consumer income and spending growth slowing slightly, and housing starts down 10% for the new year 2005.
The next generation of discs is about storage more than publishing, which is not to imply content distribution on physical media is finished. The more network content consumption, the more storage demand. The more consumer content creation, the more storage demand. Whether content distribution on blue laser media comes sooner or later will not impede the commercialization of this generation. It can't. Holographic and nanotech generations are in sight to take blue laser's place if it stumbles.
On the PC, the hardware and media market for blue laser is a done deal. For CE, the videogame set-top box market is a done deal. The audio version of blue laser could only have a small chance if the spec is the same as for movie audio and music videos are emphasized, but it is unlikely the industry will make the effort given the risk and past history. The video hardware and media market segment depends on the pace of HDTV broadcasting acceleration beyond the US and Japan, and HD movie production on a replicated disc. Only the latter market for replicated home video, it could be argued, depends to any degree whatsoever on a standard.
Infotech has been providing independent information and entertainment content and technology market research and forecasting since 1984.
Infotech's mission is figuring out the client's strategies based on the best numbers (not the other way around).
Infotech's business is building long-range, large scale time series market models based on granular, bottom-up market assessments within the context of economic, demographic, technological, legal, and historic realities, and using them to make unbiased forecasts for better decision making, allocation of capital, and return on investment.
Infotech's latest quantitative history and forecast, The Future of DVD from 1996 to 2025, is available for ordering at www.infotechresearch.com
Posted by: jeffolie | February 24, 2006 at 09:58 AM
Be careful what you call failure. It's not a failure; we have made huge progress in AI. Sometimes you have to go down a whole hell of a lot of blind alleys before you "understand" what the hell is going on.
"For as long as I have been an arms-length observer of information science, we have had the capability and understood the value of inference engines. Why does everyone not see how profoundly Google gets IT?"
Strong AI. Jeffolie, please read the book "On Intelligence"
http://tinyurl.com/r3aj5
Jeff Hawkins figured out what the hell intelligence is: the ability to predict.
Google does get it. But strong AI is dangerous, so Brin and Page have to watch themselves, but I think they have figured this out. There is so much convergence around Bayesian nets in research (I am actively researching Bayesian nets) it is scary. In the last 3 years, I have seen several huge fields of statistics get swallowed up by it. Expert systems dead. Frequentists dead.
You missed a law discovered by Dr. Feynman:
Man's ability to manipulate and control matter on an arbitrary atomic scale is growing exponentially. Meter tech, centimeter tech, millimeter tech, micro tech, nano tech, pico tech, femto tech.
You can take our history and track innovations in each technology generation.
You missed one of the greatest programming languages of all time: DNA.
Programming DNA allows for the arbitrary control of a whole class of atomic structures. Search for repressilator. These guys are building circuits using bacteria, proteins, and DNA!
"Infotech's business is building long-range, large scale time series market models based on granular, bottom-up market assessments within the context of economic, demographic, technological, legal, and historic realities, and using them to make unbiased forecasts for better decision making, allocation of capital, and return on investment."
That last paragraph is what a Bayesian net does btw.
I don't want to sound like the perma bull, but there is some great stuff coming down the pike. We just have to shrug off all the moochers so we can get down to business.
Fewlesh.
Posted by: Fewlesh | February 24, 2006 at 10:37 AM
Fewlesh
Bayesian logic systems have proven effective over time. But a certain amount of randomness intervenes in scientific approaches and in reality that logic systems can never predict.
DNA technology has ethical constraints. AI may get ethical constraints when and if it becomes effective.
Posted by: jeffolie | February 24, 2006 at 11:26 AM
Ethical constraints barely slow anything down. China, Korea, etc. have no such ethical hesitations.
And Humanity does not even have the choice of slowing down AI or the Technological Singularity. It existed before humanity, and present-day humans are just the most recent among hundreds of steps.
Posted by: GK | February 24, 2006 at 11:38 AM
Ethical constraints often become political issues which can and often do remain in place. The Catholic church's positions dominated for centuries. Abortion remains a hot issue. Other issues include incest, marriage, polygamy, cloning, etc. The USSR's political constraints affected its genetic research for decades. Who knows what weird approaches China and North Korea will come up with.
AI and Technology existed before humans? Are you saying that aliens exist and are active on earth? Weird!!!
Posted by: jeffolie | February 24, 2006 at 12:02 PM
Exponentially growing intelligence among creatures on Earth existed far before humans.
Microbes - Fish - Amphibians - Reptiles - Simple Mammals - Primates - Proto Humans - Modern Humans - Post Human Intelligence.
Each step took exponentially shortening time to emerge, too, BTW.
Don't think in terms of 'artificial' intelligence, but rather the natural progression of evolution. What we are to primates is what the post-Singularity intelligence will be to us.
Humans have far less control over this inexorable evolution than most people would like to think.
Read the second link on Accelerating Returns.
Posted by: GK | February 24, 2006 at 12:11 PM
...proposed by Raymond Kurzweil in his 2001 essay: "...There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity—technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light."
This is Bullsh$t. A few decades from 2001, immortal software-based humans - nuts.
He wrote The Age of Spiritual Machines.
All of this is way over the top.
Do you believe that machines will have spirits? Will ultra-high levels of intelligence expand outward in the universe at the speed of light within decades, if ever?
In between the doctrines of religion and science, stands the philosophical perspective of metaphysical cosmology. This ancient field of study seeks to draw logical conclusions about the nature of the Universe, man, god and/or their connections based on the extension of some set of presumed facts borrowed from religion and/or observation.
I reject metaphysical cosmology and Kurzweil now that I am aware of his predictions and philosophy.
Posted by: jeffolie | February 24, 2006 at 01:14 PM
Do you believe that humans are the end of evolution?
Do you believe that computer hardware and AI software, will never reach the power of the human brain?
Posted by: GK | February 24, 2006 at 01:20 PM
I accept that asteroids hit the Earth and wipe out developed life. Perhaps we will survive that, perhaps not, if humans do not die out first. New dominant life evolved after such impacts and ice ages. I accept that humans can and probably will evolve over the very, very long term, if we survive. Our type of human evolved long ago. What Darwinian events would it take for us to evolve again?
Yes, I doubt that computer hardware and AI software will reach the power of the human brain. Show me a computer that dreams and adapts on its own. Show me a computer with emotions. Show me a computer that comes up with ideas.
It will never happen.
Posted by: jeffolie | February 24, 2006 at 01:31 PM
jeffolie,
Who said anything about such computers having this now? We are talking about 50-60 years from now.
At an even more basic level, are you acceleration aware? No long-term predictions about the future are credible unless acceleration is taken into account.
Do you agree that the last 100 years saw more change than the 5000 years before that?
If so, do you agree that the next 25 years will have as much change as the previous 100?
And even despite ice ages and asteroids, each new dominant life form was more intelligent, and evolved in a shorter time.
Posted by: GK | February 24, 2006 at 01:36 PM
Why would evolution speed up so that each new dominant, more intelligent life form evolves faster?
Are you a believer in intelligent design with intervention from a higher power? I reject that metaphysical cosmology.
Darwinian events need not favor higher intelligence. Perhaps a plague will push humans to evolve based on immune systems or other mutations.
No computer will have ideas, emotions, or the ability to invent and manufacture from scratch, in 50 years or ever.
Posted by: jeffolie | February 24, 2006 at 01:51 PM
Neanderthals were adapted to the cold, as shown by their large braincases, short but robust builds, and large noses — traits selected by nature in cold climates, as observed in modern sub-arctic populations. Their brain sizes have been estimated as larger than those of "modern" humans, but their brains may in fact have been approximately the same as those of modern humans.
Perhaps modern humans evolved because they are more vicious, murderous and cruel rather than more intelligent. Already 1 in 3 births in France is Islamic. Japan is dying and aging.
Perhaps the Western World will die off or wither away because of its low birthrate.
Posted by: jeffolie | February 24, 2006 at 01:58 PM
"Why would evolution speed up so the each new dominant, more intelligent life evolve faster?"
It HAS been the case for 4 billion years. Why would it stop now?
Plus, you have not commented on the accelerating nature of change, particularly the evolution of intelligence. Go read up on evolution and notice that each greater intelligence evolved in shorter and shorter times.
Without acceleration-awareness, no person can credibly predict the future.
Plus, you are just saying 'it can't happen, ever'. You didn't read the links at the bottom of the article about accelerating change. You would have to point to specific limitations in AI, nanotech, quantum computing, etc. to credibly claim that AI cannot exceed human intelligence.
Read up on those, then this discussion will be meaningful.
Plus, you suggest that the Western World will wither from low birthrate. Well, non-Westerners are humans too. That says nothing about whether evolution of intelligence on Earth is continuing or not, unless there is some reason to believe Westerners are the only humans that can evolve.
Posted by: GK | February 24, 2006 at 03:12 PM
You did not address that modern man may simply be more murderous.
You did not address why evolution has to be based on intelligence. You simply said the past is the future. I disagree.
"Without acceleration-awareness, no person can credibly predict the future." What is acceleration-awareness.
"You would have to point to specific limitations in AI, nanotech, quantum computing, etc. to credibly claim that AI cannot exceed human intelligence." I do not have to point to the absence of a positive. You can not point to anything that indicates AI will be capable of original thinking.
Posted by: jeffolie | February 24, 2006 at 03:25 PM
1) So what if it is more murderous? It could still be more intelligent. Present-day man is 'murderous' of life forms like elephants and whales, which were on Earth long before us, but less intelligent.
2) Evolution has been based on intelligence for 4 billion years. The newer the creature, the more intelligent (for the most part). Fish have been here much longer than gorillas. Worms much longer than fish, etc...
3) 'What is acceleration-awareness'? Exactly my point. Please reply to my previous questions :
Do you agree that the last 100 years saw more change than the 5000 years before that?
If so, do you agree that the next 25 years will have as much change as the previous 100?
Also, familiarize yourself more with the concept of a 'Singularity'. Whether you think it is a positive or negative event for humanity, at least become more familiar with it.
***Read up on this before we can discuss further.***
4) You can't say it is not possible, particularly when we are just projecting the trends of the last 4 billion years forward at the same rate (log scale).
Saying it is not possible is like someone in the 18th century saying 'heavier-than-air flying machines will never exist'.
You must first recognize the 4-billion-year trend of greater intelligence evolving, and then have a very good reason why it won't continue for even 100 years more, if it already has 4 billion years behind it (withstanding asteroids, ice ages, extinctions, World Wars, etc.)
Posted by: GK | February 24, 2006 at 03:35 PM
I do not accept that exponential curves predict the future. Past is not necessarily prologue.
Microorganisms in a culture dish will grow exponentially, at first, after the first microorganism appears (but then logistically until the available food is exhausted, when growth stops).
A virus (SARS, West Nile, smallpox) of sufficient infectivity (k > 0) will spread exponentially at first, if no artificial immunization is available. Each infected person can infect multiple new people.
Human population, if the number of births and deaths per person per year were to remain constant (but also see logistic growth).
Many responses of living beings to stimuli, including human perception, are logarithmic responses, which are the inverse of exponential responses; the loudness and frequency of sound are perceived logarithmically, even with very faint stimulus, within the limits of perception. This is the reason that exponentially increasing the brightness of visual stimuli is perceived by humans as a smooth (linear) increase, rather than an exponential increase. This has survival value. Generally it is important for the organisms to respond to stimuli in a wide range of levels, from very low levels, to very high levels, while the accuracy of the estimation of differences at high levels of stimulus is much less important for survival.
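A toy numerical version of the first example above, comparing unconstrained exponential growth with logistic growth that flattens at a carrying capacity (the growth rate and capacity below are invented purely for illustration):

# Exponential vs. logistic growth of a culture (illustrative numbers only).
r = 0.5          # assumed growth rate per time step
K = 1000.0       # assumed carrying capacity set by the available food
exp_pop = log_pop = 1.0
for step in range(31):
    if step % 5 == 0:
        print(f"step {step:2d}: exponential {exp_pop:12.1f}   logistic {log_pop:7.1f}")
    exp_pop += r * exp_pop
    log_pop += r * log_pop * (1 - log_pop / K)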
I reject Raymond Kurzweil's metaphysical cosmology and singularity.
I reject the Law of Accelerating Returns, proposed by Raymond Kurzweil.
Society will not allow it. In the twentieth century, eugenics movements gained popularity in a number of countries and became associated with reproduction control programmes such as compulsory sterilisation laws, then were stigmatised.
Posted by: jeffolie | February 24, 2006 at 04:13 PM
You are suggesting that something that has been true for 4 billion years will cease to be true. There has to be a much more compelling case made than that.
Also, please comment more on :
"Do you agree that the last 100 years saw more change than the 5000 years before that?
If so, do you agree that the next 25 years will have as much change as the previous 100?"
There are many, many pieces of evidence of the rate of acceleration, and you would have to do a lot to say that *all* of them will taper off.
Posted by: GK | February 24, 2006 at 04:28 PM
Trees grow fast, then stop; they do not grow to heaven. The last 100 years have seen a lot of change, but that does not mean an exponential curve will continue.
Islam is the fastest-growing religion on the Earth. In the last 100 years there have been no great technological advances from them. Islam is now 10% of China. At an ever-increasing growth rate, Islam will control the world before there is a singularity.
I gave you examples of how exponential thinking fails. Here are a couple more:
A courtier presented the Persian king with a beautiful, hand-made chessboard. The king asked what he would like in return for his gift and the courtier surprised the king by asking for one grain of rice on the first square, two grains on the second, four grains on the third, etc. The king readily agreed and asked for the rice to be brought. All went well at first, but the requirement for 2^(n-1) grains on the nth square demanded over a million grains on the 21st square, more than a million million on the 41st and there simply was not enough rice in the whole world for the final squares. (From Porritt 2005)
The water lily
French children are told a story in which you imagine having a pond with water lily leaves floating on the surface. The lily doubles in size every day and if left unchecked will smother the pond in 30 days, killing all the other living things in the water. Day after day the plant seems small and so you decide to let it grow until it half-covers the pond, before cutting it back. On what day will that occur? The 29th day, and then you will have just one day to save the pond. (From Porritt 2005)
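Both stories check out with a few lines of arithmetic, re-deriving only the numbers already given in the stories:

# Chessboard: 2^(n-1) grains of rice on the n-th square.
print(2 ** 20)                          # 21st square: 1,048,576 grains
print(2 ** 40)                          # 41st square: ~1.1 million million
print(sum(2 ** n for n in range(64)))   # whole board: about 1.8e19 grains

# Water lily: doubles daily and covers the pond on day 30,
# so it reaches half coverage only on day 29.
coverage, day = 1.0, 30
while coverage > 0.5:
    coverage /= 2
    day -= 1
print(day)                              # 29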
I reject Raymond Kurzweil's metaphysical cosmology and singularity.
I reject the Law of Accelerating Returns, proposed by Raymond Kurzweil.
They make good science fiction.
Posted by: jeffolie | February 24, 2006 at 04:51 PM
Jeffolie,
To give you an idea how a machine intelligence can be much smarter than any individual human, or even large groups of humans.
Let's take Google. What does Google have? Google has 20+ million (this is a pure guess) people relating search terms to web pages. Google has access to all written information related to mankind at the ends of its network tendrils. It has a willing populace of 20 million clickers that are constantly relating bits of information into a gigantic predictive Bayesian net.
Google has hundreds of thousands of machines spread across the world connected by high speed fiber. When programmed with adaptive Bayesian inference engines (the same algorithms that exist in the human brain), Google is learning to become intelligent. Once the owners of Google start to self-loop the inference engine (this is consciousness btw), Google can awaken. Think of the power of this inference engine.
It's completely mind boggling. A powerful predictive network with all the factual information of all human kind, kindly sorted and weighted by an army of willing humans.
When Kurzweil was talking about the spirit, he meant self-awareness.
Intelligent machines are not 50-60 years off. I feel Google is 3-5 years off at most. We may even be there now.
Right now Google is probably like a small 2-3 year old child. Based on Google's cash flow, it can easily double its network capacity every year.
Now these are pretty wild speculations, and I may be completely off. But you have to admit it is quite possible.
Fewlesh.
Posted by: Fewlesh | February 25, 2006 at 10:49 AM
The simplest criticism of Bayes' theorem in terms of odds and likelihood ratio is garbage in, garbage out (GIGO). When I studied it, long ago, it was applied to radar screen watchers and the problem of not seeing blips on the screen.
Another criticism is that it is incapable of independent original ideas. It cannot perceive outside the box.
Bayes' theorem is a result in probability theory, which relates the conditional and marginal probability distributions of random variables. In some interpretations of probability, Bayes' theorem tells how to update or revise beliefs in light of new evidence.
The probability of an event A conditional on another event B is generally different from the probability of B conditional on A. However, there is a definite relationship between the two, and Bayes' theorem is the statement of that relationship.
As a formal theorem, Bayes' theorem is valid in all interpretations of probability. However, frequentist and Bayesian interpretations disagree about the kinds of variables for which the theorem holds.
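As a concrete reminder of what the theorem says, here is a worked update in the spirit of the radar-watcher example mentioned earlier; the probabilities are invented for illustration:

# Bayes' theorem: P(target | alarm) = P(alarm | target) * P(target) / P(alarm)
p_target = 0.01            # assumed prior: 1% of blips are real targets
p_alarm_if_target = 0.95   # assumed detector sensitivity
p_alarm_if_noise = 0.05    # assumed false-alarm rate

p_alarm = p_alarm_if_target * p_target + p_alarm_if_noise * (1 - p_target)
p_target_given_alarm = p_alarm_if_target * p_target / p_alarm
print(round(p_target_given_alarm, 3))   # ~0.161: garbage priors in, garbage out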
---------------------------
A quite different interpretation of the term probable has been developed by frequentists. In this interpretation, what are probable are not propositions entertained by believers, but events considered as members of collectives to which the tools of statistical analysis can be applied.
The Bayesian interpretation of probability allows probabilities to be assigned to all propositions (or, in some formulations, to the events signified by those propositions) independently of any reference class within which purported facts can be thought to have a relative frequency. Although Bayesian probability is not relative to a reference class, it is relative to the subject: it is not inconsistent for different persons to assign different Bayesian probabilities to the same proposition. For this reason Bayesian probabilities are sometimes called personal probabilities (although there are theories of personal probability which lack some features that have come to be identified with Bayesianism).
Although there is no reason why different interpretations (senses) of a word cannot be used in different contexts, there is a history of antagonism between Bayesians and frequentists, with the latter often rejecting the Bayesian interpretation as ill-grounded. The groups have also disagreed about which of the two senses reflects what is commonly meant by the term 'probable'.
To illustrate, whereas both a frequency probability and a Bayesian probability (of, e.g., 0.5) could be assigned to the proposition that the next tossed coin will land heads, only a Bayesian probability could be assigned to the proposition, entertained by a particular person, that there was life on Mars a billion years ago—because this assertion is made without reference to any population relative to which the relative frequency could be defined.
-----------------------
A thinking machine would require interpreting words in the correct context and usage. Words are so flexible and subject to change as to render them useless to a machine that could not judge sarcasm, myth, lies and body language.
Posted by: jeffolie | February 25, 2006 at 11:58 AM
Besides, Google is evil (ask the Chinese).
Posted by: jeffolie | February 25, 2006 at 01:01 PM
To fully understand human language and text, one must be human. This is why speech recognition has failed so miserably.
Why do you think the people at Google are encoding all video, images, song, and text? Google must see, hear, and talk to understand humans. Maybe Google needs to learn to walk, run, play some sports, get a girlfriend/boyfriend/googlefriend to truly understand these things. But maybe, the visual, auditory, and textual context will be enough to form a good model of what these things are.
You claim a bayesian net cannot create a new idea? What is a new idea?
A new idea is a prediction.
A bayesian network provides the best "prediction" using observed facts and prior assumptions. A bayesian net is fully recursive.
Bayesian networks can imagine. They are fully contextual, and can easily deal with all the nuances of human language and knowledge.
They currently are driving cars around the desert. They were used to pick the FED funds rate (ala Greensplat). Like humans, bayesian networks do make mistakes, but they make the best guesses based on limited knowledge.
Do you think prediction is a survival advantage? What is the best statistical method of prediction?
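For concreteness, a bare-bones sketch of what "prediction from prior assumptions plus observed facts" looks like in a toy two-variable net; all the probabilities are invented, and a real net (like the ones driving cars around the desert) has thousands of variables:

# Toy Bayesian network: Weather -> RoadSensor. Infer Weather from a sensor reading.
prior = {"rain": 0.3, "dry": 0.7}                      # prior belief about Weather
likelihood = {                                          # P(sensor | weather), assumed
    "rain": {"wet": 0.9, "not_wet": 0.1},
    "dry":  {"wet": 0.2, "not_wet": 0.8},
}

def posterior(reading):
    unnormalized = {w: prior[w] * likelihood[w][reading] for w in prior}
    total = sum(unnormalized.values())
    return {w: p / total for w, p in unnormalized.items()}

print(posterior("wet"))   # {'rain': ~0.66, 'dry': ~0.34}: rain is the best guess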
Posted by: Fewlesh | February 25, 2006 at 08:59 PM
The coming of a combination that is post-human, as in the singularity defined above, would have to be all of software-hardware-human yet none of these separately. To create new ideas and original concepts without reference to the senses, to understand and have emotions, and to communicate with understanding with people is more than a bayesian logic system will ever be able to do.
Look at expert systems with so-called inference engines. Knowledge base plus rules established by human experts. GIGO if the input does not conform to the established requirements for input. GIGO if the experts' rules cannot deal with the input.
Humans deal with the randomness of life. The best logic systems at best use fuzzy logic.
---------------------
A new idea is not a prediction.
An idea (Greek: ιδέα) is a specific concept which arises in the mind of a person as a result of thought. The term arises in both popular and philosophical terminology.
The colloquial expression "I have no idea" may be used in any situation where the speaker is ignorant of something. In this general sense the term is synonymous with "concept".
A prediction or forecast is a statement or claim that a particular event will occur in the future.
The etymology of this word is Latin (from præ- "before" plus dicere "to say").
--------------------
"definition" of recursion:
Recursion
See "Recursion".
Posted by: jeffolie | February 25, 2006 at 09:53 PM
"bayesian logic system will ever be able to do."
I didn't say bayesian logic system. The bayesian networks I am talking about are Turing complete, and they implement "fuzzy" logic. They are NOT expert systems. Rule based expert systems (Hard logical systems) do not work well. They are far too fragile for real world problems.
what is thought?
I say thought is probabilistic pattern matching.
What are patterns?
Patterns are the prior assumptions that have been built up through a life of learning.
What is learning? Learning is the accumulation of facts into a relatively consistent model of the outside world based on input from the senses.
What makes a great investor? He makes great financial predictions. What makes a great artist? He predicts art that people will like. What makes a great writer? He predicts a book that people will enjoy. What makes a great historian? He predicts the events that took place in the past that best match historical records. What makes a great hunter? He predicts what his prey will do. Prediction conveys a survival advantage.
No thought exists in a vacuum.
(Read Gödel, Escher, Bach for a good basic analysis of axiomatic systems.) Even the most advanced mathematics is based on assumptions that have some basic rooting in the real world: 1 finger, 2 fingers, etc.
I am not the first to propose this. I am just a student.
Here is a good book talking about Bayesian AI:
http://tinyurl.com/mthws
I haven't read it, but it should be a good starting point for looking for more information. They have lots of links to example code to actually try out on problems.
Could Bayesian AI be the next dead end in AI research? Sure it could. Just like Expert Systems were all the rage in the 80's. But, unlike expert systems, they are actually solving lots of real problems today. They have provided a lot of new "understanding" of what intelligence is. That understanding has allowed some people to make predictions that convey monetary advantage. This alone will provide massive investment to fully explore the technology.
Fewlesh.
Posted by: Fewlesh | February 26, 2006 at 10:10 AM
Bayesian AI definitely is better than expert systems. I reject the technology singularity.
THOUGHTS AND THINKING
They are more than pattern matching.
Thoughts and thinking use imagination. Some of the most accepted thinkers, such as Einstein, imagined their theories as "thought experiments".
In a Moment of Reflection, new situations and new experiences are judged against recalled ones and judgements are made. In order to make these judgements, the intellect maintains present experience and sorts relevant past experience. It does this while keeping present and past experience distinct and separate. The intellect can mix, match, merge, sift, and sort concepts, perceptions, and experience. This process is called reasoning. Logic is the science of reasoning. The awareness of this process of reasoning is access consciousness (see philosopher Ned Block).
The imagination performs a different function. It combines the reasoning intellect with your feelings, intuitions and emotions, especially Hope. This is magical or irrational thinking, depending on your point of view.
Thinking can be modeled by a field (like a mathematical representation of an electro-magnetic field, but with each point in the field a point of consciousness). Patterns are formed and judgements are made within the field. Some philosophers (panpsychists/panexperientialists - see wikibook on consciousness) believe the entire field is conscious in and of itself, a consciousness field. They say consciousness creates thinking; thinking and other brain processes do not create consciousness. Other scientists (for ex. Bernard Baars) think of it as a workspace. No scientist claims to understand how we are conscious. Other philosophers (ex. Thomas Nagel) have said they do not have a clue as to how we are aware of our thinking.
------------------
LEARNING
Learning is the process of acquiring knowledge, skills, attitudes, or values, through study, experience, or teaching, that causes a change of behavior that is persistent, measurable, and specified or allows an individual to formulate a new mental construct or revise a prior mental construct (conceptual knowledge such as attitudes or values). It is a process that depends on experience and leads to long-term changes in behavior potential. Behavior potential describes the possible behavior of an individual (not actual behavior) in a given situation in order to achieve a goal. But potential is not enough; if individual learning is not periodically reinforced, it becomes shallower and shallower, and eventually is lost in that individual.
Short term changes in behavior potential, such as fatigue, do not constitute learning. Some long-term changes in behavior potential result from aging and development, rather than learning.
Learning is sense making that enables manifestation of purpose.
Einstein was dyslexic and mildly autistic. Try to make a bayesian AI or any device that can operate like that and have his genius.
Posted by: jeffolie | February 26, 2006 at 11:22 AM
Why was Einstein so smart?
He made predictions that were the most likely and the best fit to the physical world around him. Those predictions were also the basis for a whole other range of predictions about the laws of the physical world. Predictions that no one else, using the same information at the time, came up with: Relativity, the photoelectric effect, etc.
One of my favorite quotes:
"The more you know, the less you know"
Simplify your understanding of the world. Don't over complexify it.
"Try to make a bayesian AI or any device that can operate like that and have his genius."
If I made a machine that made predictions about the laws of physics that were as profound as Einstein would you think of the machine as "intelligent" as Einstein?
In regards to physics, you would have to say yes.
Einstein was "dyslectic and mildly autistic"
How does this have anything to do with his intelligence? He was a bad speller. I don't think this had anything to do with his skills in physics.
Posted by: Fewlesh | February 26, 2006 at 01:07 PM
One of Einstein's abilities was to imagine forces that were beyond the human senses. For example, his use of "time" as a malleable, flexible entity that could be faster or slower. There were no data points that a logical device could use to predict. His imagination was the basis for his thought experiments.
I speculate that his unusual autistic, dyslexic mind gave him insights, a different point of view that contributed to his genius. His predictions were NOT the most likely and the best fit to the physical world around him, because time and the other entities he dealt with were beyond human perception.
Yes. If you made a machine that made predictions about the laws of physics as profound as Einstein's, I would think of the machine as being as "intelligent" as Einstein.
Posted by: jeffolie | February 26, 2006 at 02:09 PM
But why does most of it matter? Most of the uses of computers are for entertainment or violating copyright laws. Building a better couch potato.
Would your analogy work just as well with Marijuana and its THC content?
Posted by: ryan costa | February 26, 2006 at 03:45 PM
ryan
the thread master is obsessed with technological singularity becoming real:
"Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity—technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light."
Posted by: jeffolie | February 26, 2006 at 04:04 PM
Yeah, Ray Kurzweil's theory is pretty hard to swallow even for me. I'm going to have to see a lot more evidence before I believe in a merger of human w/ machine. It's definitely possible though.
Personally, I feel the machines will just take off; why bother with their pet masters?
But we are mainly arguing about what intelligence is, and whether it is possible for a machine to attain the intelligence of a human.
"One of Einstein's abilities was to imagine forces that were beyond the human senses. For example, his use of "time" as a maleable, flexible entity that could be faster or slower. There were no point of data that a logical device could use to predict. His imagination was the basis for his thought experiments."
I agree that this was pure genius. He imagined (ie, he predicted) that time was maleable.
But I disagree. There were experiments that show that the speed of light was constant. This caused all sorts of problems in physics. But Einstein created a model of the universe with curved space and time. This model (or prediction of how the universe worked) was tested later. I believe if it wasn't einstein, then we would have had to wait a few more decades, and the information would have been discovered by someone else.
Personally, I feel the photoelectric effect was also pure genius at work: "In 1905 Einstein investigated the phenomenon known as the photoelectric effect. The photoelectric effect is simply the ability of some metals such as potassium to eject electrons when irradiated by light"
He observed the fact that the electron's orbital was quantized. Einstein made an "educated guess" (kind of like a prediction) that the electron's energy was quantized (much like the work of Max Planck, who showed other things that were quantized). Later, this would be integral to quantum mechanics.
Actually, the power of a prediction is not the "best fit" to the model (I made this error before -- read The Fabric of Reality for a good analysis of understanding). The way to judge a prediction is in its ability to understand the phenomenon it is trying to predict. Understanding leads to a much better ability to predict the next layer of understanding. (This recursive stuff gets old, doesn't it?) "Best fits" don't really exist. There is no way to fully test all the possible combinations of a prediction to see if it even fits well.
Posted by: Fewlesh | February 26, 2006 at 11:02 PM
Conrad,
1) I don't agree that intelligence is not a necessary byproduct of evolution. Yes, survival is paramount, but intelligence inevitably is the single greatest tool to ensure survival, and intelligence has been rising within the course of evolution since long before humanity. It is a steadily rising curve over the last 4 billion years, with very little deviation from the trendline.
For example, the Carcharodon Megalodon was a 45-foot shark that was the top predator for over 20 million years. But 5 million years ago, the Killer Whale (Orca) evolved. Despite being smaller than the Megalodon, the Orca was intelligent and hunted in packs, developing many advanced hunting techniques. The Megalodon got outcompeted by the smaller but more intelligent Orcas, and died out.
All this was before mankind.
2) Remember Maslow's hierarchy of needs? After survival come belonging, esteem, and self-actualization. All animals and most humans were within survival, but human intelligence created technology, which in turn created a system in which, for the first time, humans could rise to higher levels. The successor to humans might reside in self-actualization only, which is consistent with Kurzweil's theories.
Refer back to my 'Psychology of Economic Progress' article.
Posted by: GK | January 07, 2007 at 03:56 PM
I hate to say it, but I'm leaning towards agreeing with "Jeff Olie".
You make some very salient points, mate. But you seem to gloss over the very obvious fact that a graph is not only made up of ascending lines -- it also has peaks and downturns. In other words, yes, currently it seems that given the last 100 years we are in a time of technological innovation. I mean, even today, just look at where all the R & D money is going.
What Jeff makes a good point about (even if in a clumsy way - sorry mate) is the "people factor" you're going to come up against when you are nearing "technological singularity". GK, I think you appear to be vastly under-estimating the influence of religion and so-called left-wing morals on decisions in our society. While we may have the technology there now or in the next 5 years - the fervent opposition from so many people across the religious and left-wing spectrum would simply prevent these technologies from playing a functional role in our society. This is most obvious in regard to the AI and cloning stuff.
The other thing I think you overlook, buddy, is also the techno-saturation factor and basic supply-and-demand economics. As we've all mentioned, yes, the world has had an amazing 100 years in terms of technological innovation - astounding really and quite hard to comprehend. However, the upshot of this, in my opinion, is cynicism. Because we've developed into a culture of techno-innovation, people now, I believe, are getting into the mindset of almost giving up in the big rush for the next thing. Of course this is hearsay, I have no facts to quantify this. But, generally, I am noticing a greater apathy amongst people (especially ye olde folks) towards technology and the latest things.
Perhaps one way to quantify that is with Blu-ray, a technology superior to video and DVD, yet people are just not embracing it and appear happy to watch their lo-fi YouTube clips and pirated movies. Is Blu-ray as much of a technological revolution as DVD was to video? Of course not. What it does underscore, however, in my mind, is this "if it's not broke don't fix it" mentality that people are starting to adopt. It's not a Luddite thing or anything like that - it's just that there seems to be so much oversupply in techno-innovation that people simply can't be bothered.
Final anecdote: I have a mate in LA who is really big in the porn industry. Not in front of the camera - but behind it, and he makes a lot of money in that game. The other week I was talking to him about future releases, blu-ray, mobisodes, all that. He had all that covered, being the business-savvy sharp-cat that he is. However, one thing he mentioned as he was reeling off all the formats he was releasing in was VHS. "VHS?" I exclaimed in shock. However my shock soon turned to embarrassment as he looked at me like the goofball that I was and went on to explain, "Duh! People don't always embrace new technologies - it's only like 20 percent of people that are ever going to embrace a new wave of technology."
None of us know. But we can do our best to make predictions about things, which is why I enjoy this site so much. But, GK, you really do come off as somebody hell-bent on techno-singularity. It's a fascinating concept mate, but at this stage that's all it is. I don't want to sound mean, but just don't forget to look at the issues around it. Great blog though.
Posted by: Neil | September 04, 2008 at 03:59 PM
That's really thinking of the highest order
Posted by: Rocky | April 14, 2012 at 05:19 AM