(See the 10-yr update here). The Singularity: the event when the rate of technological change surpasses human comprehension, just as the advent of human civilization a few millennia ago surpassed the comprehension of non-human creatures. So when will this event happen?
There is a great deal of speculation on the 'what' of the Singularity: whether it will create a utopia for humans, cause the extinction of humans, or some outcome in between. Versions of optimism (Star Trek) and pessimism (The Matrix, Terminator) all become fashionable at some point. No one can predict this reliably, because the very definition of the Singularity precludes such prediction. Given the accelerating nature of technological change, it is just as hard to predict the world of 2050 from 2009 as it would have been to predict 2009 from, say, 1200 AD. So our topic today is not the 'what', but rather the 'when' of the Singularity.
Let us take a few independent methods to arrive at estimations on the timing of the Singularity.
1) Ray Kurzweil has constructed this logarithmic chart that combines 15 unrelated lists of key historic events since the Big Bang 15 billion years ago. The exact selection of events is less important than the undeniable fact that the intervals between such independently selected events are shrinking exponentially. This, of course, means that the next several major events will occur within single human lifetimes.
Kurzweil wrote with great confidence, in 2005, that the Singularity would arrive in 2045. I find that Kurzweil usually predicts the nature of an event very accurately, but overestimates the rate of progress by 50%. Part of this is because he insists that computer power per dollar doubles every year, when it actually doubles every 18 months, which distorts every other date he derives downstream of this figure. Another part is that Kurzweil, born in 1948, is famously taking extreme measures to extend his lifespan, and quite possibly expects to live until 100 but not necessarily beyond. A Singularity in 2045 would fall before his century mark, and herein lies a lesson for us all: those who have a positive expectation of what the Singularity will bring tend to have a subconscious bias towards estimating that it will happen within their expected lifetimes. We have to be watchful enough not to let this bias influence us. So when Kurzweil says the Singularity will be 40 years from 2005, we can apply the 50% discount to estimate that it will be 60 years from 2005, or in 2065.
2) John Smart is a brilliant futurist with a distinctly different view on accelerating change from Ray Kurzweil, but he has produced very little visible new content in the last 5 years. In 2003, he predicted the Singularity for 2060, +/- 20 years. Others like Hans Moravec and Vernor Vinge also have declared predictions at points in the mid/late 21st century.
3) Ever since the start of the Star Trek franchise in 1966, its writers have made a number of predictions about the decades since, with impressive accuracy. In Star Trek canon, humanity experiences a major acceleration of progress starting in 2063, upon first contact with an extraterrestrial civilization. While my views on first contact are somewhat different from the Star Trek prediction, it is interesting to note that their version of a 'Singularity' happens to occur in 2063 (as per the 1996 film Star Trek: First Contact).
4) Now for my own methodology. We shall first take a look at a novel from 1863 by Jules Verne, titled "Paris in the 20th Century". Set about a century in the future from Verne's perspective, the novel predicts innovations such as air conditioning, automobiles, helicopters, fax machines, and skyscrapers in detail. Such accuracy makes Jules Verne the greatest futurist of the 19th century, but notice how his predictions involve innovations that occurred within 120 years of writing. Verne did not predict exponential growth in computation, genomics, artificial intelligence, cellular phones, and other innovations that emerged more than 120 years after 1863. Thus, Jules Verne was up against a 'prediction wall' of 120 years, which was much longer than a human lifespan in the 19th century.
But now, the wall is closer. In the 3.5 years since the inception of The Futurist, I have consistently noticed a 'prediction wall' on all long-term forecasts that makes it very difficult to make specific predictions beyond 2040 or so. In contrast, it was not very hard to predict the state of technology in 1930 from the year 1900, just 30 years prior. Despite all the inventions between 1900 and 1930, the diffusion rate was very slow, and it took well over 30 years for many innovations to affect the majority of the population. The diffusion rate of innovation is much faster today, and the pervasive Impact of Computing is impossible to ignore. This 'event horizon' does not mean the Singularity will arrive as soon as 2040, as the final couple of decades before the Singularity may still be too fast to make predictions about until we get much closer. But the compression of this wall/horizon from 120 years in Jules Verne's time to 30 years today gives us some idea of the second derivative in the rate of change, and many other top futurists have observed the same phenomenon. By 2030, the prediction wall may be only 15 years away. By the time of the Singularity, the wall would be almost immediately ahead from a human perspective.
So we can return to the Impact of Computing as a driver of the 21st-century economy. In that article, I wrote about how roughly $700 billion per year as of 2008, or 1.5% of world GDP, consists of products that improve at an average of 59% a year per dollar spent. Moore's Law is a subset of this, but the same cost deflation applies to storage, software, biotechnology, and a few other industries as well.
If products tied to the Impact of Computing are 1.5% of the global economy today, what happens when they are 3%? 5%? Perhaps we would reach a Singularity when such products are 50% of the global economy, because from that point forward, the other 50% would very quickly diminish into a tiny percentage of the economy, particularly if that 50% was occupied by human-surpassing artificial intelligence.
We can thus calculate a range of dates by which products tied to the Impact of Computing become more than half of the world economy. In the table, the columns signify whether one assumes that 1%, 1.5%, or 2% of the world economy is currently tied to the Impact of Computing, and the rows signify the rate at which this share of the economy is increasing, whether 6%, 7%, or 8% a year. This range is derived from the fact that the semiconductor industry has a 12-14% nominal growth trend, while nominal world GDP grows at 6-7% (some of which is inflation). Another way of reading the table: if you consider the Impact of Computing to affect 1% of world GDP, and that share grows by 8% a year, then it will cross the 50% threshold in 2059. Note how a substantial downward revision in the assumptions moves the date outward only by years, rather than decades.
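The table's arithmetic is a simple compound-growth calculation. As a sketch (the function and parameter names are mine, for illustration, not from the article), the crossing year for any assumed starting share and growth rate can be computed as:

```python
import math

def crossing_year(initial_share, growth_rate, start_year=2008, threshold=0.50):
    """First year in which a share of GDP, compounding at a fixed
    annual rate, exceeds the given threshold."""
    years = math.ceil(math.log(threshold / initial_share) / math.log(1.0 + growth_rate))
    return start_year + years

# A 1% share growing 8% per year crosses 50% in 2059, matching the table.
print(crossing_year(0.01, 0.08))  # 2059
```

Note the insensitivity to the assumptions: halving the starting share to 0.5% at the same 8% growth moves the crossing date out only to 2068, about nine years, which illustrates the point that downward revisions shift the date by years rather than decades.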
These parameters deliver a series of years, with the median values arriving at around the same dates as the aforementioned estimates. Taking all of these data points in combination, we can predict the timing of the Singularity. I hereby predict that the Technological Singularity will occur in:
2060-65 ± 10 years
Thus, the earliest it can occur is 2050 (hence the URL of this site), and the latest is 2075, with the highest probability of occurrence in 2060-65. There is virtually no statistical probability that it can occur outside the 2050-75 range.
So now we know the 'when' of the Singularity. We just don't know the 'what', nor can we with any certainty.
Related:
I don't agree with your methodology. It assumes the impact of computing as a percentage of GDP will increase at 6-8% a year, whereas the fact that products to which Moore's Law applies are becoming rapidly cheaper in real terms will increasingly constrain, and eventually reduce, their share of the economy, paradoxically, despite vast, low-cost computing power becoming universally available.
Just as GDP was a pretty useless metric for measuring / classifying pre-industrial societies, it will probably become increasingly inadequate for measuring the expanding "virtual world".
Posted by: Sublime Oblivion | August 20, 2009 at 12:57 AM
S.O.,
Despite the price declines per unit performance, the dollar revenue of products that exhibit Moore's Law has still risen steadily. This is true of semis, storage, software, etc.
I don't think it will reduce their share of the economy. Rather, they will pervasively diffuse through the whole economy, reaching the 50% tipping point by the 2060s.
Posted by: The Futurist | August 20, 2009 at 01:19 AM
Hmmmm.
Not a bad way of going about the prediction, but.....
The singularity is likely to be an extremely disruptive event. So disruptive, in fact, that it may either accelerate or decelerate its own appearance. Toward the end, the last 10 years or so, the changes will be happening so fast that they will be hard to understand or assimilate.
Think of water in a pipe. The flow can increase without a problem. But beyond a certain threshold the flow of the water becomes turbulent. Turbulent flow, even though more energy is being applied, results in a lower throughput than laminar flow. Therefore there is an ideal velocity for water flow in a pipe.
However, there is one way to increase flow. Break the pipe completely. Water flows out in an uncontrolled manner, at a very high rate.
Now imagine society (us), we are the pipe. The water is change. We can tolerate an acceleration in the rate of change only up to a certain point before society itself can't handle the rapidity of the change, and the assimilation rate actually decreases (turbulent flow). If the rate of change keeps increasing, society may break apart, allowing more change to occur, but in all sorts of crazy directions.
Somewhere as soon as 2030 things are likely to get pretty crazy. Whether this helps or hurts the singularity is hard to predict. We could see either a rapid acceleration, or a sudden deceleration in the rate of change.
Heck, who am I kidding? It is already crazy. China and India, after thousands of years, are modernizing in seemingly the blink of an eye. Third world countries are getting nukes. Financial systems are crashing. It is already all around us. The pipe is already showing cracks.
Expect more, much more.
Posted by: Geoman | August 20, 2009 at 10:28 AM
Geoman,
Yes. The point about the prediction wall/horizon at 2040 would support what you are saying. Try as I might, it is very hard to gain visibility beyond 2040, and other futurists are seeing the same wall. The period from 2040-60 could be described as 'turbulent flow'.
That is why I steer clear of any guesses about 'what' it may be like. 'When' is easy, but 'what', by definition, is hard.
Posted by: The Futurist | August 20, 2009 at 11:07 AM
Are you really claiming that the radio (not the wireless telegraph, but sound broadcasting) and the plane could be predicted in 1900 for 1930 (or is it 1940? You use both dates)?
250 years ago China was the most developed country in the world
China and the UK have had nukes for the last 50 years, so it isn't really recent that third world countries got nukes.
Posted by: Charly | August 20, 2009 at 05:28 PM
Charly
1) Yes, futurists of the time predicted these items well in advance. Plus, you are cherry-picking anecdotes rather than seeing the entire change in the world from 1900 to 1930 (which was much, much less than from 1979 to 2009). You also ignored the important diffusion point: the diffusion of the plane and AM radio by 1930 was still very minimal.
2) 250 years ago China was the most developed country in the world
Wrong. China had the largest GDP due to having the largest population. But in per-capita terms, it was far behind Britain, France, and Spain.
I see you don't think citing a source is important in making a claim (an easily debunked one, in fact).
Posted by: The Futurist | August 20, 2009 at 06:46 PM
The Futurist:
This is OT, but maybe in a future post you could look at the likely trends in life-extension tech, and the costs thereof, over time. Although Kurzweil might not make it to the Singularity, I'm hoping that someone in good shape in their early 30s, with access to a fair amount of discretionary spending, might. ;)
Posted by: JAM | August 20, 2009 at 06:51 PM
JAM,
A couple of older articles here are about that.
Actuarial escape velocity.
Posted by: The Futurist | August 20, 2009 at 08:29 PM
The Futurist,
Thanks for the link. I suppose it's too early to actually predict what-money-buys-what-in-what-year at this point in time. But making lots of money and ensuring that it grows is never a bad idea in a capitalistic society in any case.
Posted by: JAM | August 21, 2009 at 05:55 AM
Charly,
The Futurist already busted your earlier points, but I'd add that the UK is not, and has never been, considered a third world country. I was referring to places like North Korea, Pakistan, and Iran.
China was given the bomb by Russia, and probably had very little to do with development of the weapon. Korea, Iran, and Pakistan seem to be (largely) home grown efforts.
Posted by: Geoman | August 21, 2009 at 01:58 PM
Geoman,
Actually, North Korea, Pakistan, and Iran all have nukes due to China.
China gave nuclear weapons to North Korea and Pakistan in order to support China's goals of threatening South Korea and Japan, and India, respectively.
North Korea then proceeded to help Iran (and Saddam's Iraq) pursue nuclear weapons, which is what made North Korea part of the 'Axis of Evil'.
So all three, North Korea, Pakistan, and Iran, are downstream recipients from China. Iraq would have been too if Saddam were still there. Iraq's Osirak nuclear reactor was working toward weapons, but was bombed by Israel in 1981.
Posted by: The Futurist | August 21, 2009 at 02:25 PM
The only differences between now and 1979 are the fall of the USSR, AIDS, the Internet, mobile phones, and the Walkman.
AIDS is a force of nature and has nothing to do with humans.
The USSR is not technology.
The mobile phone is an idea that everybody has had since the invention of the radio, if not before.
The Walkman (and the subsequent disc, hard disk, and flash music players) came on the market in 1979.
The Internet precursor Minitel went into planning in 1977.
Between 1900 and 1930 you had the invention of the airplane, radio, the assembly line, relativity, quantum mechanics, and WWI. Compared with those, the last 30 years look very meagre.
About China: you are talking about PPP GDP per capita. Kuwait's GDP per capita is also higher than South Korea's, but is it more developed? There is also the question of whether PPP GDP can even be calculated meaningfully. There are those, like Bairoch, who claim that China had a higher PPP GDP than Europe.
Geoman, I was joking about the UK. Their nuke is more American than British. Do they even have the launch codes? A better example would be South Africa and the 200 men who made their bomb.
Posted by: Charly | August 21, 2009 at 06:07 PM
Now Iraq would have to buy them, like everybody else, from North Korea. Actually, building nukes isn't that hard; it is getting the nuclear material that is hard. And Pakistan got them from the Dutch because the Americans wanted them to have it.
Posted by: Charly | August 21, 2009 at 06:11 PM
And Pakistan got them from the Dutch because the Americans wanted them to have it.
More Charly nonsense. Provide a proper source, otherwise your fantasies are exactly that - fantasies.
Posted by: The Futurist | August 21, 2009 at 08:10 PM
I always tell nonsense ;-)
http://www.expatica.com/nl/news/local_news/cia-asked-us-to-let-nuclear-spy-go-lubbers-22629.html
Posted by: Charly | August 22, 2009 at 05:54 AM
Hang on, Star Trek is optimistic?
Posted by: Private Joker | August 26, 2009 at 06:53 PM
Hang on, Star Trek is optimistic?
Regarding the future of humanity, yes. Do you have reason to think otherwise?
Given that many other science fiction future-oriented franchises are dystopian, Star Trek stands out.
Posted by: The Futurist | August 27, 2009 at 11:59 AM
The discussions I've seen of the timing of the singularity give scant attention to two of the biggest variables: economics and religion. I don't think you can take economic progress for granted, insofar as much of its performance is based on the political winds. In the decades to come, will we have more, or less, government intervention in the world's economies? And will this help or hinder the singularity? What about religion? One can't help but surmise that most religions will be hostile to the idea of the singularity, as it effectively will render one of their most powerful recruiting mechanisms, namely promises of deliverance from death (in one way or another: reincarnation, heaven, etc.), moot.
Posted by: Dave | August 31, 2009 at 06:30 PM
Dave,
1) Political interference merely causes economic gains to quickly move to the more favorable economic zone. Look at how the leftward drift of the US has quickly caused a transfer to Asia. The Asian economies have bounced back from this recession very quickly as well. Even within the US, the flow of capital out of high-tax California and into low-tax Nevada, Arizona, and Texas is visible.
2) Religions have already been trying to thwart changes, and have not succeeded. It certainly will be no easier for them now. Furthermore, many religions have their own version of an event analogous to the Singularity. Some Catholics say there will be only 2 more Popes after this one (a few decades more, in other words). Hinduism has elements that are compatible with the Singularity.
Posted by: The Futurist | September 01, 2009 at 10:41 PM
What does it matter when this "Singularity" happens if you have no idea what it might consist of? Such soothsaying reminds me of cults who always warn of an armageddon or rapture that's 50 years away, ;) as you admit of Catholics and Hindus. Since the charted trend has continued unabated for millennia, why couldn't it keep growing exponentially for millennia more? Given the giant size of the universe, we could just keep expanding with space travel for a long time. Two further mathematical points: first, a trend continues until it doesn't. There is no reason that an exponential curve can't slow down or reverse; merely noticing an exponential pattern in the past is not an argument for why it will continue. There is no mathematical law of historic exponential growth, and only crazies would posit one, because the concept itself is stupid. The second mathematical point is that a singularity is a mathematical construction that has no analog in reality, although some make unsubstantiated claims about black holes, etc. To me, it sounds like the Singularity crowd is merely using the mathematical ignorance of most people to dress up the old armageddon/rapture concept in more modern clothes. All that said, the pace of change is no doubt accelerating, and extinction or some breakthrough is a real possibility, but trying to apply such mathematical extrapolation to time its arrival is mostly silly, almost as bad as the idiotic "technical analysis" of markets that many use, or astrology.
Posted by: Ajay | September 05, 2009 at 03:37 PM
Ajay,
The prominently displayed chart on paradigm shifts should answer your questions.
There is no reason that an exponential curve can't slow down or reverse,
Why would that happen now, if it has never happened in the past?
You certainly have not given any compelling analysis to support your claims that a Singularity is 'silly', 'stupid', etc., when the article provides a number of different avenues that provide a similar result.
You will have to do better than that.
Posted by: The Futurist | September 05, 2009 at 03:52 PM
I see, so the chart answers my questions of why an undescribed Singularity would matter and why exponential growth can't continue for millennia more? You must be able to read a lot more into that chart than most. ;) If you really think a trend cannot slow down or stop, you're positing an iron law of exponential growth for human progress, which takes you into the realm of the loons. I wrote a very specific analysis about the incorrect use of mathematical language by you and other Singularists; I can't help it if even such simple mathematical verbiage flew over your head. For example, you do realize that an exponential curve has no singularity? Therefore, charting an exponential curve and then positing a singularity is mathematical twittery at best. The reason I posted my comment was to see if you could counter any of these specific claims, which is what makes it so funny when you can't, and then try to excuse your inability by claiming I haven't made much of an argument. XD
Posted by: Ajay | September 05, 2009 at 04:28 PM
Ajay,
All you said is that an exponential trend 'can stop'. I countered that it has never stopped in 4 billion years, so why is it about to stop now? For this, you have no answer.
An exponential curve has a Singularity based on the limits of human perception. Just like human civilization was a singularity that surpassed the comprehension of all non-human creatures. This is clearly explained in the first sentence of the article.
You claim to have offered a 'very specific analysis', when you have done nothing of the sort. In fact, you have contradicted yourself, since your misconception is that space travel is incompatible with a Singularity, even while the article clearly states that the two are compatible (given Star Trek as an example).
There are ways that make a strong counterargument against accelerating change, and I could show you how, but you don't appear to be able to do it.
All you can do is start off with words like 'loon', 'crazy', 'stupid', 'silly', etc. right off the bat, in a forum where cordial respect is the norm. That is classic insecurity and immaturity.
BTW, are you the same Ajay as the 'micropayments Ajay'?
Posted by: The Futurist | September 06, 2009 at 02:48 PM
I was wondering what would happen to your calculation if, instead of using the % of world GDP, you used the % of the OECD (alternatively, +BRIC)?
Posted by: Hervé Musseau | September 07, 2009 at 09:24 AM
Herve,
It doesn't change much. Note that the consumption of semiconductors in Asia exceeds that of the OECD even now.
Whether regions like Africa can ever modernize or not, however, remains an open question.
Posted by: The Futurist | September 07, 2009 at 01:32 PM
I don't have to say why an exponential might stop; you have to say why it won't, and simply extrapolating from the past is not good enough. Fundamentally, I'm making the point that the future is too uncertain: perhaps the exponential continues for millennia, perhaps a worldwide virus hits and it's reversed for centuries; we can't know what will happen. Your answer about an exponential having a singularity based on human perception is meaningless, more evidence that that term was chosen for religio-mathematical mumbo jumbo. In fact, if there have been several singularities, which is a new use of the term to me, why even bother using that term at all? We could have exponential growth for millennia more, dotted with another "singularity" every so often. My specific analysis was based on the misuse of mathematical modeling and terms, because the rest of the argument is obvious: human progress has been accelerating for centuries, with various setbacks along the way, and everybody knew that long before Kurzweil came along. Funny how you claim that the exponential can't stop initially, then say you can make a better case for how it might. :) Which is it? As for my use of negative words, I have no problem labeling crazy ideas as such; perhaps you should focus on showing that they aren't, rather than obsessing over that. Yep, I'm the micropayments guy.
Posted by: Ajay | September 14, 2009 at 10:40 PM
The Singularity consists of machine intelligence that is far beyond human intelligence. This creates change that is extraordinarily rapid, which produces outcomes that can't be foreseen.
"Since the charted trend has continued unabated for millenia, why couldn't it keep growing exponentially for millenia more?"
If the charted trend is "major" technological change, and the rate of change becomes on the order of months, days, hours, minutes, or seconds, then there is no way to predict the future.
For example, suppose in 1900 you had tried to predict what the world would be like in 1902 and 1950. It would probably have been pretty easy to predict what the world would be like in 1902, but unless you were extraordinarily gifted, there would be no way you could predict that in 1950 nuclear bombs would exist; there would be no way you could predict that airplanes would be flying across the ocean, there would be no way you could predict automobiles would be so ubiquitous, etc.
Now suppose *all* those technologies came to be in 1901 (instead of being spread out over the 50 years). Then in 1900 you couldn't even predict what 1902 would be like. That's all the Singularity is; it's technological change that's so rapid that even predictions of the near future become impossible.
Posted by: Mark Bahner | September 15, 2009 at 09:43 AM
Mark,
You should read the Jules Verne book "Paris in the 20th Century". Written in 1863, it tries to predict the world of 1960. It discusses air conditioning, skyscrapers, gasoline-powered cars, high-speed trains, the Internet, calculators, televisions, elevators, and fax machines. His other works predicted helicopters, airplanes, submarines, and the Apollo program.
HG Wells had similar success in predicting the future.
I think there is more to it than what you are saying. In the past the predictions were all about more...stuff. The hidden assumption was always that people will still be people. That they will have the same wants and desires, the same basic human needs.
Post singularity this no longer is the case. Humans will change. We do not know, nor can we predict, what will have value, or even what will or will not constitute a human being. Since we cannot predict the nature of humanity, predictions on the stuff end of things tend to flounder. Spaceships that travel at the speed of light? Sure. But will anyone even want to go that fast?
I'd say science fiction, as a literary genre, is dying right now for this very reason. There is nowhere left for writers to go.
I'd say, that, in the nearish future (next 100 years), everything that can be known about the universe will be known. Every experiment will be complete, every test confirmed.
Every profession will change. What will be the meaning of a journalist, a politician, professional athlete, plumber, actor, a priest, a social worker? Not in 100 years. In 10 years. In 50 years.
Everyone will be an expert at everything, will literally know everything there is to know. What then will be left?
I suspect that there will be vast levels of unemployment, but it won't matter since almost everything you want or need will be free, or as close to free as to be irrelevant.
We shall feast from the horn of plenty until we choke.
Maybe in the future we all blog all day, every day. That, in fact, is my best guess: we have one long, unending discussion.
Posted by: Geoman | September 17, 2009 at 11:59 AM
"You should read the Jule Verne book "Paris in the 20th Century" Written in 1863 it tries to predict the world of 1960. It discusses air conditioning, sky scrapers, gasoline powered cars, high speed trains, the Internet, calculators, televisions, elevators, and fax machines."
I will get it. It sounds pretty amazing.
"I think there is more to it than what you are saying. In the past the predictions were all about more...stuff. The hidden assumption was always that people will still be people."
Yes, I agree completely. The fundamental aspect of the Singularity is that machine intelligence equals and then exceeds human intelligence.
Not since the Neanderthals have humans shared the planet with beings of equal or very similar intelligence. And back then, technology was changing pretty slowly. ;-)
Based on Ray Kurzweil's calculations for computer intelligence and the number of personal computers sold, I've calculated the number of "human brain equivalents" (HBEs) added to the population each year.
In 2000, the value was only about 10 HBEs. Even in 2010, it's only about 10,000. But in 2020 it's about 50,000,000 (about the same number of humans added). Then in 2030, it's 100 billion. And in 2040, it's 1 quadrillion human brain equivalents added:
http://markbahner.typepad.com/random_thoughts/2005/11/why_economic_gr.html#comments
That's why change is likely to be so fast that it will be hard to predict even the near future.
Posted by: Mark Bahner | September 17, 2009 at 07:38 PM
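Bahner's figures above imply yearly HBE additions growing by roughly three orders of magnitude per decade, on the order of doubling annually. A minimal sketch of that compounding (the function name and growth factor are illustrative assumptions, not taken from his linked analysis, and the later decades he quotes imply an even faster factor):

```python
def hbes_added(year, base_year=2000, base_hbes=10, annual_factor=2.0):
    """Human-brain-equivalents added in a given year, assuming the
    yearly additions grow by a fixed annual factor."""
    return base_hbes * annual_factor ** (year - base_year)

# Doubling annually from 10 HBEs in 2000 gives ~10,000 by 2010,
# broadly in line with the figures quoted in the comment above.
print(hbes_added(2010))  # 10240.0
```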
Ajay,
I don't have to say why an exponential might stop, you have to say why it won't and simply extrapolating from the past is not good enough.
No, I don't. If something has never slowed in 4 billion years, it is not incumbent on me to prove that it will not stop now, for the first time; the burden is on the person claiming that it will.
if there have been several singularities, which is a new use of the term to me that there are several of them, why even bother using that term at all?
Because this is the first technological singularity. Also, the first one for which the existing species would be aware of it.
Funny how you claim that the exponential can't stop initially then say you can make a better case for how it might, :)
There are ways to make a counterargument far better than you are doing. That was the point, which was obvious to anyone else.
Yep, I'm the micropayments guy.
Well, you have a track record of multiple bizarre opinions prior to this thread. Among them, the notion that having a $300M net worth is middle class in Silicon Valley.
Posted by: The Futurist | September 17, 2009 at 11:08 PM
Mark Bahner,
When are you going to append the economic growth forecasts with the latest decade of data? Your projections are based on data until 2000 but not after that.
Posted by: The Futurist | September 17, 2009 at 11:11 PM
Geoman,
In 'Paris in the 20th Century', Jules Verne correctly predicts the developments of the next 120 years, but not beyond that. He did not predict 'Moore's Law'-type computational advancement rates, etc. Thus, from 1863 onwards, the prediction wall for even the best acceleration-aware futurists was about 120 years. At least this was much longer than a human lifetime, especially at the time.
Similarly, there is a prediction wall today, but it is only 30 years out. All the top futurists in the world have noticed this same prediction wall coming up, and it is within the lifetimes of most people alive today. This is depicted by the lower right-hand corner of the chart in the article. Things get crazy as we get close to that corner.
It is a shame that 'Paris in the 20th Century' is so much less famous relative to 'Around the World in 80 days', given how much more profound the former is. Jules Verne was the spiritual predecessor to both Gene Roddenberry and Ray Kurzweil. HG Wells, by contrast, was not a good Futurist. A good sci-fi writer, but not a good Futurist.
Posted by: The Futurist | September 18, 2009 at 02:45 PM
The Futurist, you absolutely have to give a reason why the trend will continue, because, as I already said, you're otherwise positing a law of exponential progress for humanity, which only fools would suggest, and you apparently are. First off, the historical trend is not quite as neat as your graph purports: there have been setbacks where the black plague and other diseases slowed down the curve, obstacles which can be obfuscated in that chart in many ways, such as carefully choosing data points to evade that reality, or because an exponential graph covers so many orders of magnitude that detail is lost. Second, any historical trend can end; you yourself pointed out a couple of posts ago that the aviation trend flattened out 50 years ago. Perhaps the most famous recent exponential trend, Moore's law, is hitting real limits, such as transistors approaching the size of molecules and clock speeds having flattened out this decade because of heat problems; Moore's "law" may peter out in the next decade. This decline can be mitigated somewhat by better microarchitectures and simply building bigger chips, as they're doing now, but it's very likely that the Moore's exponential will flatten out long before your suggested "singularity." The fact that you cannot give any actual reasons why such limits will not apply in the coming century suggests that you're ignorant of the real issues involved with this highly complex topic.
You can talk around the use of the word "singularity" all you want, but it's clear why it was chosen: to play on the word "singular," meaning unrepeatable, and on the connotation of infinity, similar to how religions always warn of an upcoming rapture that's always 50 years away. I would have no problem with the singularists if they used a more reasonable term like "phase shift," which is actually reasonable, but that would belie their whole purpose of talking up the well-known acceleration of human progress while combining it with some sort of transcendent quasi-religious rapture, and all the mumbo jumbo that entails.
As for the quality of my arguments, considering that you're incapable of making an argument for your own case, it's pretty funny that you say you can make my case; perhaps because my case is stronger? ;) As for my previous statement about Silicon Valley: in a place where $10 million means you're a nobody, $300 mil is middle class, particularly when Elon has basically shown he was lucky the first time, because he's now running around looking for govt bailouts for his space and car companies.
Posted by: Ajay | September 18, 2009 at 08:05 PM
"One can't help but surmise that most religions will be hostile to the idea of the singularity..."
Actually, I find that the vast majority of religious people really aren't that religious. It is more a social and death-fear thing. When the prospect of true immortality is placed in front of the masses, can you guess which choice they will eagerly devour? I know I can: the one that abandons religion. It will be a very good day for humanity.
Posted by: Jonathan | September 21, 2009 at 02:24 PM
"When are you going to append the economic growth forecasts with the latest decade of data? Your projections are based on data until 2000 but not after that."
Oh brother. Not again! :-(
The Futurist, isn't it obvious that the time to append the economic growth forecasts with the latest decade of data is AFTER the decade finishes???
Specifically, the data I present in my analysis are for the years 1980, 1990, 2000, 2010 etc. Therefore, the EARLIEST that it would be appropriate to "append the economic growth forecasts with the latest decade of data" would be AFTER the 2010 data have been published. That will obviously not be until 2011.
Further, your comment about my projections being based on data until 2000 but not after that indicates that you don’t really understand my projections and the basis for them.
Here were my predictions in 2003 and 2004:
Time..........Annual P/C GDP growth...Annual P/C GDP growth
Period..........Oct 2004 prediction......Dec. 2003 prediction
2000-2010................3.0...........................2.5
2010-2020................3.5...........................3.0
2020-2030................4.5...........................3.5
2030-2040................6.0...........................4.0
2040-2050................8.0...........................4.5
2050-2060...............11.0...........................6.0
2060-2100........the differences keep growing...!
Even if the value from 2000-2010 is less than 2.5 percent per year, I still wouldn't reduce the prediction for 2010 to 2020. In fact, based on my 2005 calculations of the number of human brain equivalents added, I think the "knee" in the curve is going to be even sharper. I wouldn't be surprised if the per capita growth rate for 2020 to 2030 is more like 6 percent per year, rather than the 4.5 percent per year I predicted in 2005.
Posted by: Mark Bahner | September 21, 2009 at 06:58 PM
"Jules Verne was the spiritual predecessor to both Gene Roddenberry and Ray Kurzweil. HG Wells, by contrast, was not a good Futurist. A good sci-fi writer, but not a good Futurist."
Gene Roddenberry was in no way a "futurist." In Gene Roddenberry's "future," all human beings have hydrocarbon brains and hydrocarbon bodies, even in the 23rd and 24th centuries.
Posted by: Mark Bahner | September 21, 2009 at 07:11 PM
"Perhaps the most famous recent exponential trend, Moore's law, is hitting real limits, such as transistors increasingly approaching the size of molecules and clock speeds having flattened out this decade because of heat problems, Moore's "law" may peter out in the next decade."
In 1997, Hans Moravec predicted that $1000 personal computers would have the hardware capability of a human brain (which he estimated at 100 trillion instructions per second) circa 2022:
Moravec's 1997 prediction
In 2005, Ray Kurzweil predicted that $1000 personal computers would have the hardware capability of the human brain (which he estimated at 20 quadrillion instructions per second) circa 2020.
So there isn't much time that Moore's Law needs to last. Even at $1000 per (hardware) human brain equivalent, 10 percent of the world's GDP (which circa 2020 should be about $70 trillion) would be $7 trillion. That would add 7 billion human brain equivalents--as many as the current population--every single year.
That would be a tremendous amount of brainpower attempting to overcome any (hypothetical) slowdown in Moore's Law.
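The arithmetic in the comment above is easy to check in a few lines; note that the $70 trillion GDP figure and the $1000-per-brain-equivalent price are the commenter's assumptions, not established values:

```python
# Check the human-brain-equivalent (HBE) arithmetic from the comment above.
# All inputs are the commenter's assumptions, not established facts.
world_gdp_2020 = 70e12      # ~$70 trillion world GDP circa 2020 (assumed)
share_on_compute = 0.10     # 10% of GDP spent on computing hardware (assumed)
cost_per_hbe = 1000.0       # $1000 per human-brain-equivalent of hardware (assumed)

spend = world_gdp_2020 * share_on_compute   # annual hardware spend
hbes_per_year = spend / cost_per_hbe        # HBEs added per year

print(f"Annual hardware spend: ${spend:.1e}")        # $7.0e+12
print(f"HBEs added per year:  {hbes_per_year:.1e}")  # 7.0e+09, i.e. ~7 billion
```

The conclusion (7 billion brain equivalents per year) follows directly from the assumed inputs; the debate below is about whether those inputs are plausible.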
Posted by: Mark Bahner | September 21, 2009 at 07:38 PM
Mark Bahner,
The Futurist, isn't it obvious that the time to append the economic growth forecasts with the latest decade of data is AFTER the decade finishes???
There are 2010 consensus projections from the IMF, World Bank, etc., which would vary by far too little to throw off your analysis, and should thus be plugged into it.
The point is, an update once per decade is too infrequent, particularly given that your projections for this decade were far too optimistic.
Further, your comment about my projections being based on data until 2000 but not after that indicates that you don’t really understand my projections and the basis for them.
I understand them all too well. The actual GDP results for this decade were already far lower than your highly optimistic projections, which alone is enough reason for you to do an update.
I wouldn't be surprised if the per capita growth rate for 2020 to 2030 is more like 6 percent per year, rather than the 4.5 percent per year I predicted in 2005.
Now THAT is far too quick. Arnold Kling backed away from his super-optimistic forecasts from before, but you are revising them upwards, when the 2000-2009 actuals (+2010 consensus estimates) indicate the clear need to do the opposite?
So there isn't much time that Moore's Law needs to last. Even at $1000 per (hardware) human brain equivalent, that means that 10 percent of the world's GDP (which circa 2020 should be about 70 trillion) would be $7 trillion.
Where on Earth do you get the notion that semiconductors will be 10% of world GDP in 2020?? So in 11 years, the semiconductor industry will grow from $250B today to $7000B? Absurd. It is 1.5% or so of world GDP today, and will not get to 10% until the 2040s (cracking 50% by the 2060-65 period, as per the table).
Mark and Ajay represent opposite extremes - the former assumes that incredibly optimistic upticks in growth are about to happen within a decade, while the latter insists that the rate of acceleration will slow down and that there will be no Singularity. Both are ignoring the basic evidence that shows them to be very far off from the rational mean.
Posted by: The Futurist | September 21, 2009 at 10:42 PM
Mark, first let's assume AI is even possible, which I don't concede but will assume for the sake of argument. Then the real problem becomes software. Regardless of how capable the hardware is, it's taken a long time to get even rudimentarily "intelligent" software going, so you underestimate our human stupidity in not being able to figure out how to create anything even approaching AI. Also, you assume that once AI is here, it can somehow find ways around physical limits to Moore's exponential like the ones I pointed out; I don't. If AI is possible, progress will be fantastically faster than with human beings, but not infinitely so; just another, much faster speed that is still bounded by the speed of light and other real constraints. The world will just settle into this faster pace, just as when we humans started thinking up stuff faster than any previous mammal. It still took us millennia to get where we are today.
Uh, no, that's not what I said. I said that the concept of a singularity is stupid and that it's very hard to predict if we get hit by a world-ending virus or dirty bomb first or if some kind of phase shift happens at some point far into the future.
Posted by: Ajay | September 22, 2009 at 02:33 PM
OK, so why don't you tell us all what the world per-capita GDP, purchasing power parity adjusted, is going to be in the year 2010 (in year 2000 dollars)? And using the same source(s) tell us what the value was in 2000 (again, PPP adjusted, in year 2000 dollars). Then tell us what growth rate you calculate for the period.
My projection in December 2003 was 2.5 percent per year for 2000-2010, and in October 2004 I raised that projection to 3.0 percent. In the 10 years from 1996 to 2005, in how many years did the growth in world per-capita GDP (purchasing power parity adjusted) exceed 2.75 percent per year (the midpoint of the two projections)? Also, what is your calculation for the actual growth rate in per-capita GDP for the period 2000-2010?
If you think my projections were too high, you should provide your calculation of the actual growth rate for 2000-2010, plus your estimate of the number of years in that period that were above my projections versus below them.
What rate of increase do YOU project for world per-capita GDP from 2020 to 2030?
Luckily, there’s a scientific way to figure out if you’re right. Provide your estimates of world per-capita GDP, purchasing power parity adjusted, in year 2000 dollars, for the years 2000, 2010, 2020, 2030, 2040, and 2050. Or alternatively, provide your estimates for per capita growth rates for each decade to 2050 (again, adjusted for purchasing power parity, in year 2000 dollars). Then we'll see what you know or don't know.
Posted by: Mark Bahner | September 22, 2009 at 06:07 PM
Does it take intelligence to: play chess? drive a car? fly an airplane (including taking off and landing)? mow a lawn? build a house? diagnose an illness? play soccer? play the TV game show Jeopardy (against human beings, on TV)? design and build faster computers?
Of those items above that you think require intelligence, which of them do you think a computer will never be able to do?
Of all the "brains" in the world, integrated circuits are the only ones confined to two dimensions, so going to three dimensions seems logical. As I mentioned, both Hans Moravec and Ray Kurzweil think $1000 computers with human-brain hardware capabilities are within 10-15 years of being reality. A 2008 analysis from Intel estimated that scaling can continue to 2029 even with current (two-dimensional) lithography techniques.
Yes, because the human population didn't even get to 6 billion people until ~2000. What do you think the progress would have been if we'd been adding 6 billion people a year for 50 years? (People who never sleep, and only eat electricity.)
Posted by: Mark Bahner | September 22, 2009 at 06:55 PM
Mark, interesting question about what intelligence those various activities require. I'll note that only chess is done well by computers today, and in such a limited way that the "intelligent" software used is not even applicable to the other activities. I'll also note that many of those activities depend on vision recognition and mobility, technology that will have to be invented separately and will take longer. It's not particularly useful to say which of those a computer can't do, nor do I have an argument against any one of them. However, to be a true AI, it'd have to be able to do ALL of those, or at least the intellectual parts. As for expanding circuits into three dimensions, that's simply the alternative I mentioned earlier of building bigger chips, as we're seeing with larger die sizes and multiple cores already. However, there are real fundamental limits: electron velocities in transistors top out at around .01-.001 times the speed of light, and of course optical transistors would be limited by the speed of light. One can use these fundamental limits, including the thermal limits from clock speeds mentioned earlier (how do you get heat off a 3D chip when we can barely get it off a planar chip today?) and the fundamental size limit of a molecule (1 nm-1 Å), to extrapolate real limits on the amount of computation possible within a block of material. Finally, as I mentioned before, software is way behind the hardware, so that's the real bottleneck. As for adding 6 billion AIs, I'll worry about that when we can do one ;) if it's even possible. I don't think we'll get to it anytime soon, meaning the next couple of decades or more.
Posted by: Ajay | September 22, 2009 at 10:25 PM
Mark Bahner,
What do YOU project rate of increase of the world per-capita GDP from 2020 to 2030?
I have already done this, in very high detail. A tiny amount of effort on your part would have found that prominently displayed article.
Until 2020, not many people will differ - even Arnold Kling and Jesse Ausubel are not that far apart. My problem is with your extremely steep spike after 2020 (particularly since you are talking of making that as high as 6%, with the assumption that 10% of world GDP would be computation, vs. 1% in 2009). Your December 2003 analysis was more reasonable. It is after that where you got too extreme.
The presumption that all the human-level AIs will work towards producing knowledge to boost the human metric of GDP is also a very big assumption.
Then we'll see what you know or don't know.
What makes you a qualified arbiter of this, when you will not even update your own analysis, which was the single big analysis on your whole blog, that is over 5 years old?
Posted by: The Futurist | September 23, 2009 at 01:02 AM
The Futurist,
I did an analysis in October 2004 that started with a world per capita GDP value in 2000, purchasing power parity adjusted, of $6,539 (in year 1990 dollars). That's from Brad DeLong's work.
Then I assumed that the world per capita GDP would grow by 3.0% per year from 2000 to 2010. That would produce a per-capita GDP, PPP adjusted, of $8,790 (year 1990 dollars).
You, on the other hand, started from 2007 with a world per capita GDP of $10,000 (year 2007 dollars) and assumed a growth rate of 3.5% per year from 2007 to 2010.
So why aren't you pestering yourself to update your analysis for the year 2010? You're obviously going to be too high in 2010.
And for the period 2010 to 2020, we both have the exact same value of 3.5 percent per year. So again, why in the world are you pestering me to update my analysis? You and I are both probably going to be too high in 2010, and we both have the EXACT SAME RATE from 2010 to 2020!
Also, I wrote, "Then we'll see what you know or don't know."
You responded, "What makes you a qualified arbiter of this, when you will not even update your own analysis, which was the single big analysis on your whole blog, that is over 5 years old?"
I wrote, "we," not "I." By "we" I meant that everyone in the world can see whether you know what you're talking about (and whether I know what I'm talking about), simply from observing the actual trend in world per-capita GDP growth (PPP adjusted) over the 21st century.
Here are our predictions (mine in October 2004, yours in July 2007) for world per-capita GDP percentage growth rates, PPP parity adjusted, in constant dollars:
Years.........MB_rate.....Fut_rate(%)
2010-2020......3.5%.........3.5%
2020-2030......4.5%.........3.75%
2030-2040......6.0%.........4.5%
2040-2050......8.0%.........5.5%
Obviously, mine starts to be higher than yours beginning in 2020. But we both predict a trend of increasing growth rate as the decades pass.
Posted by: Mark Bahner | September 24, 2009 at 03:13 PM
Mark,
Again, I think your December 2003 prediction is the one I essentially agree with (as well as with the unspoken conclusion that after 2060 or so, it is hard to make estimations). The October 2004 upward revision, followed by a still further increase of the 2020s to 6.0% a year, is too much.
The reason it is too much post-2020 is because you are not only estimating human-level AI too soon, but you are assuming that these AIs will happily work towards increasing human wealth. The basis for this belief is unclear.
So both the timing and the utility of these human-level AIs is far too optimistic in your analysis. The notion that 10% of world GDP in 2020 would be in computing is off by a huge amount. It will be 3% at most, by then.
Posted by: The Futurist | September 25, 2009 at 07:31 PM
Bullshit
Posted by: Mino | September 29, 2009 at 02:16 AM
It rains axioms.
AI research basically ended in the late 1970s. The hardware is becoming faster and faster, enabling things like pattern recognition on normal PCs. But there is really not a single algorithm that mimics human intelligence yet. Only very specific things, like playing chess or ping pong.
I can even admit that a computer with the same raw power as a human brain will appear in the next decades, but that doesn't mean it can replace a human brain with all its functions.
We don't really know very much about the brain. An autistic brain doesn't differ from a normal brain from a "computational power" point of view, but the results are vastly different.
BTW, don't singularity futurists find this concept a bit frightening?
I mean, imagine a thinking machine, wired through the internet to billions of others. What will happen? Will they produce another culture, or a new religion? Will there be wars between computers, or between computers and humans? Or will computers obey the 3 laws of robotics?
To put it mildly this singularity thing seems a bit bogus.
BTW, does anyone remember when a weekend on the moon was considered feasible by the year 2000? Yeah, right. Or a man on Mars, nuclear fusion, or a gazillion other things.
Posted by: Mino | September 29, 2009 at 02:21 AM
We still rely on fossil fuels, but we can access porn from a smartphone. And we still need to drive to work in the morning in a car not dissimilar from our father's.
Really guys, look how our fathers and grandfathers lived. They lived basically like us, only slower. Grandpa used a car, a tractor, a phone. He didn't have access to internet porn, unfortunately.
The whole singularity thing is based on the assumption that computers will be capable of emulating a human brain, and there is no certainty in that. In the next few years we will have a supercomputer with the same power as our brain. And I could bet my ass that it won't become self-aware.
Now I'll tell you something. The best candidate for a radical change in humanity wouldn't be a computer, but a clean, cheap source of energy. And it would not be solar, wind, or even "hot" nuclear fusion (it's simply too complicated to work).
Posted by: Mino | September 29, 2009 at 02:21 AM
The October 2004 revision upward, and the still further increase to 6.0% per year in the 2020s that I guesstimated a few days ago, are much better supported by the analysis of human brain equivalents that I performed in 2005.
As I noted when I was performing that analysis, the cause of economic growth is (free) human minds. So the more (free) human minds there are, the faster economic growth will be.
My analysis (based on hardware, ignoring software) estimated the following number of human brain equivalents (HBEs) added each year in the following years:
2015: 1 million
2024: 1 billion
2033: 1 trillion
In my mind it is simply not credible to be talking about adding more than 1 trillion human brain equivalents every year and not have absolutely spectacular economic growth.
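The three projected data points above imply a very specific growth pattern: a thousandfold increase every nine years, i.e. the number of brain equivalents added per year roughly doubles every 11 months. This sketch makes that implied rate explicit (it takes the commenter's projections as given rather than endorsing them):

```python
import math

# The comment's projected HBEs added per year (hardware only, as stated above).
projections = {2015: 1e6, 2024: 1e9, 2033: 1e12}

# A 1000x increase every 9 years corresponds to this doubling time (in years):
doubling_years = 9 * math.log(2) / math.log(1000)
print(f"Implied doubling time: {doubling_years:.2f} years")  # ~0.90 years

# Extrapolate from the 2015 figure and confirm it reproduces the other points.
def hbes_added(year, base_year=2015, base=1e6):
    return base * 1000 ** ((year - base_year) / 9)

for year, expected in projections.items():
    assert math.isclose(hbes_added(year), expected, rel_tol=1e-9)
```

Whether hardware spending and price-performance can actually sustain a sub-one-year doubling in brain equivalents added is exactly what the surrounding comments dispute.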
1) So what are your estimates for the years in which computers add the equivalent of 1 million, 1 billion, and 1 trillion human brain equivalents per year?
2) By "working happily towards increasing human wealth" do you mean that the computers might actually rebel and attempt to destroy humanity? If so, I've always made it clear in my economic predictions that they did not include the possibility that computers would attempt to hurt humans. That would be an incredibly serious (even apocalyptic) problem, if self-replicating computers decided that they wanted to hurt humans. Otherwise, it seems to be a reasonable assumption that computers will be used to improve human welfare (e.g. drive cars and planes, take and serve fast food orders, build houses and roads, etc.), rather than to harm it.
Posted by: Mark Bahner | September 30, 2009 at 09:50 AM
Mino,
You clearly do not comprehend the accelerating rate of change, and how the next X years will have far more change than the previous X years.
Hence, your comments are not enlightened.
Mark Bahner,
In my mind it is simply not credible to be talking about adding more than 1 trillion human brain equivalents every year and not have absolutely spectacular economic growth.
The big assumption here is that these 'human brain equivalents' will function in the same economic way as present humans. There is no basis to support this assumption.
based on hardware, ignoring software
Aha! But you cannot ignore software. Computer power has risen 100X from 1999 to 2009, but productivity has not risen as much. Why? Software is one of the reasons, and software continues to lag hardware.
you mean that the computers might actually rebel and attempt to destroy humanity?
Maybe not directly, but they could take the world in a direction that precludes human survival. Much like habitat destruction is killing off elephants indirectly. Except that this could happen very quickly (years) with humans being unable to adapt quickly enough to survive.
Again, I am generally in agreement with your December 2003 numbers, so whatever you thought until then has more corroboration than what you think now.
Posted by: The Futurist | September 30, 2009 at 12:37 PM
The problem is not accelerating change. I know how exponentials work.
The problem is that singularity people take strong AI as granted.
And we don't have good speech recognition yet.
I don't know if true AI is possible (in theory it is not impossible), but really, Kurzweil predicts the singularity by 2045.
Ten-year predictions are already hard, and despite what true believers say, Kurzweil fails hard at predicting events, even in a 10-year frame.
So I'm quite confident that his prediction for 2045 is bogus.
Regarding the singularity, to be honest, it scares the hell out of me, even if I'm pretty sure that I won't see it in action.
Maybe it's heaven for the nerds, but people like me don't see how a future where people live as cyborgs or within computers could be good.
Finally, if we could obtain AI smarter than us, the outcome is uncertain. There are possible scenarios, one of which is that described by Kurzweil and co.
Also, if you think about it, the singularity scenario seems mutually exclusive with other civilizations in the Universe.
Posted by: Mino | September 30, 2009 at 05:29 PM
Mino,
I take the view that The Singularity could be through AI inducing an extreme acceleration of human evolution (much like the emergence of successively more advanced humans happened in times that were orders of magnitude shorter than the evolution of, say, bears or mice).
So I'm quite confident that his prediction for 2045 is bogus.
I agree. Note my methodology for the much later 2060-65 prediction.
Also, if you tink about it, the singularity scenario seems mutually exclusive with other civilizations in the Universe.
Aha! Then you will really like my SETI and the Singularity article.
Posted by: The Futurist | October 01, 2009 at 02:00 PM
The Futurist --
You need to look at RAM. The overall revenue of RAM drops now and then.
The Dec. 16 Gartner report found that the world’s semiconductor companies will collect $219.2 billion in revenue in 2009, a decline of more than 16 percent compared to 2008. Last week, Gartner published another report that found preliminary semiconductor revenue for 2008 is estimated at $261.9 billion, a 4.4 percent decline from 2007.
Cs
Posted by: ces | October 10, 2009 at 01:58 AM
"""But there is really not a single algorithm that mimics human intelligence, yet"""
Malarkey. Whether or not one mimics a human isn't important; whether the computer achieves equivalent or better results is what matters. Google has at least one human brain equivalent. Go google something. That computer plays "Who Wants to Be a Millionaire" or "Jeopardy" a heck of a lot better than any human.
What we've basically figured out is that statistical analyses of huge quantities of data allow computers to produce human intelligence. It works for web search, it works for chess, it works for language translation.
Posted by: ces | October 10, 2009 at 02:07 AM
ces.
The trendlines are important, not the individual spikes of boom and bust years.
Posted by: The Futurist | October 11, 2009 at 03:11 PM
What if tech surpasses humans and it isn't a problem or disruptive or anything we are all so anxious about? What if it is anticlimactic? Will it be a letdown?
Posted by: sg | October 19, 2009 at 10:47 PM
I recently came across a video on youtube that I thought was very interesting. I also thought that it would tie in nicely to your blog about the Singularity.
http://www.youtube.com/watch?v=4Q75KhAeqJg&feature=fvw
The world is already starting to change. The path to the singularity is upon us, and it would seem we are in for one hell of a ride! What an interesting time to be alive!
Posted by: Zach | December 29, 2009 at 12:19 AM
So, the Singularity is due in 2060, but possibly as late as 2075.
When did you say you were born?
Posted by: Jean-Louis Trudel | February 25, 2010 at 08:44 PM
"Part of this is because he insists that computer power per dollar doubles every year, when it actually doubles every 18 months"
How do you know it doubles every 18 months?
Posted by: Mel | July 24, 2010 at 11:29 AM
Mel,
Surely this is not the first time you have heard about Moore's Law...
Posted by: The Futurist | July 24, 2010 at 01:08 PM
I was actually asking if you have a source that shows that computer power per dollar doubles every 18 months instead of per year like Kurzweil says.
Posted by: Mel | July 24, 2010 at 01:43 PM
Google 'Moore's Law' for sources. Sometimes they even say a slower doubling, of 2 years.
Also, anecdotally, refer back to every PC you have ever owned, and see how much RAM it came with at purchase. Plot the RAM on one axis, and the year on the other. You will see an 18-month doubling.
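The back-of-the-envelope exercise suggested above can be done numerically: fit a line to log2(RAM) against year, and the slope gives doublings per year. The RAM figures below are illustrative examples chosen to show the method, not measured data:

```python
import math

# Hypothetical RAM at purchase for a series of PCs: (year, megabytes).
# Illustrative values only; substitute your own machines' history.
pcs = [(1996, 16), (1999, 64), (2002, 256), (2005, 1024), (2008, 4096)]

# Least-squares fit of log2(RAM) vs. year; the slope is doublings per year.
n = len(pcs)
xs = [year for year, _ in pcs]
ys = [math.log2(ram) for _, ram in pcs]
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

doubling_months = 12 / slope
print(f"Doubling time: {doubling_months:.0f} months")  # 18 months for this data
```

With real purchase histories the fitted doubling time will wobble around whatever the true trend is; the log-linear fit is just a cleaner version of eyeballing the plot.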
Posted by: The Futurist | July 24, 2010 at 02:31 PM
I need to chime in,
Moore's law will end by 2020 and a new paradigm shift will begin. We will be seeing 3-D chips and a doubling every year. I've read a lot about this amazing futurist Kurzweil and assume you have done the same. He points out that technology is growing at a double exponential, meaning there is exponential growth in the rate of exponential growth itself. In the past, it took 5 years for computers to double in speed; now it's 15-18 months. The doubling will shrink to 9 months, then 8, then 7, then 6, etc.
Ray points out that many people think linearly (10, 20, 30, 40, etc.), while others think computers will never double at a rate faster than 18 months and will soon slow down. They have been wrong before and are wrong now. I think the singularity will happen by 2050. That's also when a $1000 computer will be as smart as all human brains.
I am also optimistic about intelligent life. Gray goo and unfriendly AI are two major concerns. Humans will merge with machines, making unfriendly AI a lot less likely, because we will be partly or completely machine. It's those humans who choose to stay biological that may have to worry. Will they be forced to assimilate, or simply be allowed to go extinct?
As for gray goo, that's the worse of the two concerns. It won't be so bad if it happens after intelligent life has already begun its expansion across the universe; then all matter in the universe will become smart. If gray goo instead expands across the universe, all matter will remain dumb, with little or no chance of ever becoming intelligent unless the gray goo itself evolves.
Posted by: Savethemales | October 21, 2010 at 12:12 AM
Savethemales,
Uhh, no. Processing speed has been doubling every 18 months almost completely consistently since integrated circuits were invented in 1958. In fact, after a certain point, the 18-month doubling time revealed by Moore in his 1965 paper was shown to be so accurate that it was adopted by the semiconductor industry as a guideline for technology development. That's what the "roadmap" white papers, outlining short-term technology goals for CPU design, are based on. So if Ray said that computer power is increasing at a "double exponential," as you claim, then he was being completely dishonest.
In fact, we're actually starting to reach limits in terms of shrinking lithography, so processing speed growth will start to slow down if we continue using the same planar chip design. Technologies like 3D chips and other improvements will merely ensure the continuation of Moore's Law, or maybe cause a slight improvement in the growth rate. But there won't be any growth in the growth rate, not for any meaningful time anyway.
Grey goo is not a realistic concern.
AI is more of a concern. I don't understand people who claim we won't be able to produce AI in silico. There are only two alternatives to this possibility. The first is that consciousness, or even merely human intelligence, is based on specific quantum mechanical effects, not on our brain's neural networks; in other words, the exact substrate in which the brain is implemented is critical. This view is held by physicist Roger Penrose, but few other people (especially most physicists and neurobiologists) take it seriously, and I myself find it highly doubtful. But even if it is the case, it doesn't necessarily preclude AI; it just greatly complicates things.
The second possibility is that intelligence is rooted in some sort of spiritualistic dualism, i.e., a thinking soul. This crowd doesn't strike me as the type that would believe in this possibility.
If the reality of the mind is neither of these possibilities, then intelligence is the product of neural connections and processing. The actual material implementation is unimportant. In that case, we may be able to simulate a human brain or a greater intelligence by simply coding the neurological network of brains in silico. We understand the basic properties of neurons and synapses, and part of the structure of the brain. Even if we never understand the actual cause of intelligence, we can produce intelligence by copying the structure of the brain.
In the unlikely event that copying the structure of the brain at the cellular/neural level doesn't produce an intelligence, we can accomplish the task by simulating the brain at a molecular level: simulating each molecule of each organelle of each cell, then each cell, etc., up to the entire brain macrostructure. I don't see how any possibilities exist outside these cases. Either intelligence is non-material or based on the specific implementation, or we can produce AI by simulating a brain, regardless of whether we know how intelligence actually functions. Of course, if we have to simulate at the molecular level, then we probably won't have AI for many decades or centuries, due to the astounding computing power required, and in all likelihood the brains will be less efficient and larger than human brains. But nevertheless.
Posted by: Anymoose | August 16, 2011 at 12:49 PM
More important than the increase brought about by Moore's Law, however, is the offset caused by the difficulty of developing more complex software for the faster processors (some people here have already alluded to this). I quote:
In a 2008 article in InfoWorld, Randall C. Kennedy,[32] formerly of Intel, introduces this term [The Great Moore's Law Compensator (TGMLC), or bloat] using successive versions of Microsoft Office between the year 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.
Maybe a word processor isn't the best metric for developments in the software field, but it serves as an example that things may not be as simple as "the gazillionth derivative of developments in every technological field everywhere is increasing superfast."
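A few lines of arithmetic show how much bloat the quoted benchmark implies, assuming (hypothetically) that hardware followed a Moore's-law doubling every 18 months over those 7 years:

```python
# Implied software bloat from the Office 2000 vs Office 2007 comparison.
years = 7                 # 2000 -> 2007
doubling_period = 1.5     # assumed years per hardware doubling

hardware_gain = 2 ** (years / doubling_period)   # ~25x faster machine

observed_speedup = 0.5    # the quoted result: same task at half the speed

# The software must be doing roughly this many times more work per task
# to turn a ~25x hardware gain into a 2x slowdown.
bloat_factor = hardware_gain / observed_speedup

print(f"hardware gain 2000->2007: ~{hardware_gain:.0f}x")
print(f"implied bloat factor:     ~{bloat_factor:.0f}x")
```

In other words, if the quoted numbers are taken at face value, the software grew roughly 50x more expensive per task, fully compensating the hardware.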
Posted by: Anymoose | August 16, 2011 at 12:56 PM
There's more to Singularity studies than Kurzweil though. One of the most interesting takes on it is the "Intelligence Explosion" concept being promoted by
Posted by: Michael howell | July 12, 2012 at 08:12 AM
There's another approach to Timing the Singularity. You can look at worldwide aggregate processing power, combining human processing (brain cells) and transistors. Moravec provides a rough conversion factor between transistors and brain tissue: about 100 million MIPS per human brain. Hilbert and López surveyed world computational capacity
http://www.uvm.edu/~pdodds/files/papers/others/2011/hilbert2011a.pdf
and arrived at a figure of 1e13 MIPS worldwide, or 1e5 human brains' worth of compute capacity, in 2007. They also showed that worldwide computer processing capacity is growing by about 60% per year.
We won't hit a singularity until digital computing capacity is about as large as human computing capacity, and probably not until digital computing capacity dominates. With roughly 7 billion human brains against 1e5 brain-equivalents of digital capacity in 2007, digital capacity needs to grow by about a factor of 1 million, which at 60% per year takes roughly 30 years: 2037.
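The arithmetic behind that date checks out in a few lines (the 1e5 brain-equivalents and 60% growth are the figures cited above; the 7-billion world population is an assumption):

```python
from math import log

BRAINS_2007 = 1e5      # digital capacity in 2007, in human-brain equivalents
HUMAN_BRAINS = 7e9     # rough world population of biological brains
GROWTH = 1.60          # ~60% annual growth in digital capacity
TARGET_FACTOR = 1e6    # growth needed for digital capacity to dominate

# Solve GROWTH ** years == TARGET_FACTOR for years.
years = log(TARGET_FACTOR) / log(GROWTH)

print(f"parity factor vs humanity: {HUMAN_BRAINS / BRAINS_2007:.0e}")
print(f"years to grow 1e6x at 60%/yr: {years:.1f}")   # ~29.4 years
print(f"estimated year: ~{2007 + years:.0f}")
```

The 1e6 target is about 10x beyond mere parity with humanity's ~1e5-fold lead, which is what "dominates" means here; the growth equation then lands in the mid-2030s, consistent with the 2037 estimate.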
Posted by: cesium62 | June 25, 2013 at 04:08 PM
First of all, I'd like to congratulate you on your very intelligent blog. I am very impressed. (Just between you and me, are some of the people who posted here your alter egos? The intervals between your comments and theirs are awfully short--I am talking about the time stamps.) But either way, it's a stimulating conversation, whether as an imagined intellectual exercise or a real one in cyberspace (which is virtual anyway)--it's all the same on the internet.
Now, I am in my 20s, and I've been aware of the technological singularity since high school. Barring accident, I will most likely see the unfolding of the encroaching event horizon whether the Singularity occurs by Kurzweil's estimate or yours. Of course, I was in a very small minority of my age group who constantly thought about the upcoming singularity (which is a "fun" conversation topic, I assure you, judging by the number of blank stares I get).
Now that 10 years have gone by since my mind was blown by the idea, the ideal scenario, to me, involves upping our ability to see beyond the "prediction wall": although human society is incomprehensible to a fish, imagine an AI fish, augmented with high-technology sensors, able to understand the world to some extent and predict it a few years ahead. That AI fish has broken the prediction wall of the real fish.
It's quite conceivable that similar "cognitive enhancement" will elevate the abilities of a minority of cognitively enhanced individuals (by no hubristic assumption do I count myself among that minority, as I turned out to be quite a disappointment after college). But the point is, there will be a minority of enhanced individuals who catch up cognitively with the ever-shortening event horizon. (Again, I want to emphasize that not all enhanced individuals will have this ability, just as not all tool users are equally effective with the same tool.)
In other words, there will be a spectrum of prediction walls for different people, both among the unenhanced (as predictive capacity varies among people across the centuries) and the enhanced (different individuals still possess different skills with the same tools--not all rocket scientists are of the same calibre). For the majority of Jules Verne's contemporaries, the prediction wall was much shorter than 120 years. But you cannot say with certainty that in Verne's time the prediction wall for humanity as a whole was 120 years, because it's entirely conceivable that there were individuals who made predictions about computing that came true but were not widely noted. (Thinking of Babbage, Samuel Johnson, [and I forget his name, kurzweilai recommended his book, this guy apparently predicted the modern age with remarkable accuracy 100 years ago].)
Anyway, always be conscious of the spectrum. Let's hope that cognition will not be static for humans--or the singularity will be very ugly when I am a grandpa.
Posted by: cybernoetic man | September 12, 2014 at 08:01 AM
Just to be clear, I became aware of your blog probably about 6 years ago, so I'm not really a newcomer; this is just the first time I've bothered to post a comment after soaking in the wonderful analysis. I do hope that you can reveal some more information about yourself, as you are strikingly intelligent--it's rare to encounter an individual like yourself in real life. It would also be fantastic if, in a future blog post, you could reveal a little about your own intellectual history when you encountered the idea of the technological singularity--has your view changed at all over the years (try to be as objective as possible, treating the ego as another person)--and what, if anything, you do to prepare for the upcoming singularity.
About the last question: I personally think it very likely that humanity will branch off into effectively many different strata/species. Overall, I am a rational optimist (that is, I believe that with technology, all socio-economic classes will benefit in absolute terms--the "better angels of our nature," to use Pinker's words), but "computing power" will become a form of leverageable capital, which will enable the netocrats to perform near-superhuman tasks. I am nowhere near as optimistic as Kurzweil on the social question of inequality--with the encroaching event horizon comes greater relative inequality. Kurzweil can be a little naive on social issues, though I do maintain that on the whole he is correct (the 80/20 rule applies to us all). (I say this as someone who speaks/understands 5-6 languages and has extensive first-hand experience witnessing abject poverty in the countries I've travelled to.)
To sum up my response to my own third question: I think there will still be singularities after singularities (I do not believe the intelligence explosion will be infinite; it will taper off at some point), but I worry that it will be accompanied by "godly-vast" inequality. I am not a materialistic person, but I think this idea has crossed the minds of a lot of acceleration-aware people: being "maintained" by nanobots for longer life extension will likely require a vast financial fortune. (You can't rely on Moore's law for everything, since biological life is finite.) So by the time the commenters on this blog are in their 70s, 80s, 90s, or 100s, it will still be unknown how expensive life extension will be. So my question to you is: do you actively think about how to offset the risk of a very expensive life-maintenance procedure--how do you prepare for the singularity? Or do you think that, by definition, the singularity cannot be prepped for?
Posted by: cybernoetic man | September 12, 2014 at 08:22 AM