The Futurist

"We know what we are, but we know not what we may become"

- William Shakespeare

Endings and Reincarnations

Today marks exactly 15 years since the start of this website.  In that time, just about every blog that was active then has been discontinued, unless it became the blogger's full-time career.  On top of that, futurism is a very exclusive niche, as surprisingly few people are curious about the future beyond superficial information.  However, the accelerating rate of change, and the ever-growing size and power of the ATOM, mean that the trends we discuss here are appearing in more mainstream venues.  Among the many technological changes since then, that was an era of 4:3 screens instead of 16:9, and of blogging rather than social media (let alone video-based social media).  Even converting this website to 16:9 disrupted the formatting of the old articles, and when faced with the choice of reformatting all of them manually or leaving them as a relic of their time, I chose the latter.  Note that the more recent ATOM publication is 16:9.  

A lot has come and gone in the elapsed 15 years.  Consider the case of Singularity University.  It began in 2009 on the reputation of Ray Kurzweil, who at the time was easily the most high-profile futurist in the world, with a respectable track record of predictions.  However, detached from Kurzweil's direct involvement, the institution quickly migrated away from any actual futurism, and built itself around a model of low-tech, in-person learning, where a single one-week program costs $15,000 (even more expensive than nearby Stanford University).  A few prominent critiques outline how the operation practices virtually none of what it preaches, and that was before a spate of scandals did further damage to its reputation.

Hence, there is now a vacuum in this niche, even though the rate of change continues to rise, which naturally means more organic demand for futurism.  Furthermore, the ATOM has enabled a medium that is more flexible, far-reaching, and monetizable to reach full maturity, so it is appropriate for me to produce content there, under simple ATOM principles (again, in contrast to Singularity University; embodying one's own futurism is the whole point).  High-traffic YouTubers have an income stream that is very ATOM-efficient in terms of taxes, location independence, affiliate marketing, and network effects among YouTubers.  This, combined with the structural decline of the university model of learning, which we often discuss here, makes the window of opportunity obvious.  

I believe I have the material here, as well as through my experience teaching at Stanford University and speaking at large conferences worldwide, to singlehandedly create the premier YouTube channel devoted to this subject.  Over the recent holiday period, I brainstormed a list of video topics for which I already have 10-20 minutes of tight content in my head, and the list quickly grew to over 300 titles.  The list is a superset of what is written here, since more current, mainstream topics will be interspersed along the way.  On YouTube, existing content on this subject is either very simple, catering to average viewers, or is merely a recording of an esoteric conference talk that was never made for YouTube.  The chasm in between is unpopulated.  The YouTube channel also has a better chance of being synergistic with my other professional activities, something this website never achieved in terms of traffic.  

This is one of those things where I am surprised and a bit annoyed that it did not occur to me years sooner.  When this website began, I was 32, whereas now I am 47.  While YouTube was not established in 2006, suffice it to say that for video purposes, a 32-year-old is always going to be more telegenic than a 47-year-old.  At the very least, I should have begun in 2015 when I was writing the ATOM publication, or in early 2016 when I posted it.  Then again, almost all of the best instructional videos on YouTube about each aspect of creating a channel seem to have been posted in the last 12 months, and I might not have realized how approachable the process was until recently.  It is definitely not too late, certainly for this niche. 

My goals for the channel are simple.  Among them, I would like to eventually reach 300,000 subscribers and 1 million hours of cumulative view time.  There will be little traffic until after the first 100 videos are posted, after which traffic typically rises exponentially.  I will have to put in over a thousand hours of work, and $4000+ in sunk costs, before seeing any return, but since this replaces many other activities, including a lot of to-and-from transit, the net cost is perhaps lower.  

The other factor is the greater 'immortality' of video.  Since we expect a Technological Singularity in about 41 years (give or take), a large body of video has a greater chance of still being watched by people around at that time, even if I am gone, and even if Google/YouTube is not the dominant platform then.  I think it is essential that people at that time know who the early leaders in this subject were.  

I still have to learn the basics of editing videos, which is necessary even for hiring other editors, but I aim to begin uploading videos in February.  

The future of this website will be determined at a later date.  To keep comments active, Typepad charges $60/yr, which is trivial, but I have to see how relevant this website remains relative to the YouTube channel.  The high-quality, page-length comments that some readers contribute are more suitable here than on YouTube.  

Let's see what happens.  I hope all readers here migrate to the channel when I post its details.  

Until that time, see you in the future!

 

January 24, 2021 in About, Accelerating Change, The ATOM, The Singularity | Permalink | Comments (9)


Timing the Singularity, v2.0

Exactly 10 years ago, I wrote an article presenting my own proprietary method for estimating the timeframe of the Technological Singularity. Since then, the article has been cited widely as one of the important contributions to the field, and as a primary source of rebuttal to those who think the event will come far sooner.  The challenge, then as now, is that the mainstream continues to scoff at the very concept, while the most famous proponent of the concept persists with a prediction that will prove too soon, which will inevitably court blowback when it does not come to pass.  The elapsed 10-year period represents 18-20% of the timeline from the original article to the predicted date, albeit only ~3% of the total technological progress expected within that period, on account of the accelerating rate of change.  Now that we are considerably nearer to the predicted date, perhaps we can narrow the range of estimation somewhat, and add other attributes of precision.  

In order to see whether I have to update my prediction, let us go through updates to each of the four methodologies one by one, of which mine is the final entry.  

1) Ray Kurzweil, the most famous evangelist for this concept, has estimated the Technological Singularity for 2045, and, as far as I know, is sticking with this date.  Refer to the original article for the reasons why this appeared incorrect in 2009, and for the biases that may have led to his selection of this date.  As of 2019, it is increasingly obvious that 2045 is far too soon a date for a Technological Singularity (which is distinct from the 'pre-Singularity' period I will define later).  In reality, by 2045, while many aspects of technology and society will be vastly more advanced than today, several aspects will remain relatively unchanged and underwhelming to technology enthusiasts.  Mr. Kurzweil is currently writing a new book, so we shall see if he changes the date or introduces other details around his prediction.  

2) John Smart's 2003 prediction of 2060 ± 20 years is consistent with mine.  John is a brilliant, conscientious person, less prone to letting biases creep into his predictions than almost any other futurist.  Hence, his 2003 assessment appears to be standing the test of time.  See his 2003 publication here for details.  

3) The 2063 date in the 1996 film Star Trek: First Contact portrays a form of technological singularity, triggered by the effect that first contact with a benign, more advanced extraterrestrial civilization had on the direction of human society within the canon of the Star Trek franchise.  For whatever reason, they chose 2063 rather than an earlier or later date, answering what had been the biggest open question in the Star Trek timeline up to that point.  This franchise, incidentally, has a good track record of predictions for events 20-60 years after a particular film or television episode is released.  Interestingly, there has been exactly zero evidence of extraterrestrial intelligence in the last 10 years, despite an 11x increase in the number of confirmed exoplanets.  This happens to be consistent with my separate prediction on that topic and its relation to the Technological Singularity.  

4) My own methodology, which also gave rise to the entire 'ATOM' set of ideas, is due for an evaluation and update.  Refer back to the concept of the 'prediction wall': in the 1860s, the horizon limit of visible trends was a century away, whereas in 2009 it was perhaps 2040, or 31 years away.  This 'wall' is the strongest evidence of accelerating change, and in 2019, it appears that the prediction wall has not moved 10 years further out over the elapsed interval.  It is still no further than 2045, now just 26 years away.  So in the last 10 years, the prediction wall has shrunk from 31 years to 26 years, or approximately 16%.  By 2045 itself, the prediction wall might be just 10 years, and by 2050, perhaps just 5 years.  Since the definition of a Technological Singularity is the point when the prediction wall is almost zero, this provides another metric through which to arrive at a range of dates.  These are estimations, but the prediction wall's distance has only ever shrunk; it has never risen or stayed the same.  The period during which the prediction wall is under 10 years, particularly when Artificial Intelligence has an increasing role in prediction, might be termed the 'pre-Singularity', which many people will mistake for the actual Technological Singularity.  

Through my old article, The Impact of Computing, which was the precursor of the entire ATOM set of ideas, we can estimate the progress made since original publication.  In 2009, I estimated that exponentially advancing (and deflation-causing) technologies were about 1.5% of World GDP, allowing for a range between 1% and 2%.  Ten years later, I estimate that number to be somewhere between 2% and 3.5%.  If we use an updated range of 2.0-3.5% in the same table, and keep the estimate of the net growth of this diffusion relative to the entire economy (nominal GDP) at the same 6-8% range (the revenue growth of the technology sector above NGDP), we get an updated table of when 50% of the World economy will consist of technologies advancing at Moore's Law-type rates.  
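The arithmetic behind the updated table can be sketched in a few lines (a rough sketch only; the 2.0-3.5% starting share and 6-8% net growth ranges are the figures stated above, while using 2019 as the base year is my assumption):

```python
import math

def crossover_year(share, net_growth, base_year=2019, target=0.50):
    """Year when an initial GDP share, compounding at `net_growth` per year
    above nominal GDP, first reaches the target (50%) share."""
    years = math.ceil(math.log(target / share) / math.log(1 + net_growth))
    return base_year + years

# Median of the updated ranges: a 2.75% share growing 7%/year net.
print(crossover_year(0.0275, 0.07))  # → 2062
```

The median of the parameter grid lands on 2062, consistent with the prediction stated below.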

We once again see these parameters deliver a series of years, with the median values arriving at around the same dates as the aforementioned estimates.  Taking all of these points in combination, we can predict the timing of the Singularity.  I hereby predict that the Technological Singularity will occur in:

 

2062 ± 8 years

 

This is a much tighter range than we estimated in the original article 10 years ago, even as the median value is almost exactly the same.  We have effectively narrowed the previous 25-year window to just 16 years.  It is also apparent that by Mr. Kurzweil's 2045 date, only 14-17% of World GDP will be infused with exponential technologies, which is nowhere close to a true Technological Singularity.     

So now we know the 'when' of the Singularity.  We just don't know what happens immediately after it, nor can anyone predict that with any certainty. 

 

Related :

Timing the Singularity, v1.0

The Impact of Computing

Are You Acceleration Aware?

Pre-Singularity Abundance Milestones

SETI and the Singularity

 

Related ATOM Chapters :

2 : The Exponential Trendline of Economic Growth

3 : Technological Disruption is Pervasive and Deepening

4 : The Overlooked Economics of Technology

 

 

August 20, 2019 in Accelerating Change, Artificial Intelligence, Computing, Core Articles, Economics, Technology, The ATOM, The Singularity | Permalink | Comments (66)


ATOM Award of the Month, November 2017

For this month, the ATOM AotM goes outward.  Much like the September ATOM AotM, this is another dimension of imaging.  But this time, we focus on the final frontier.  Few have noticed that the rate of improvement in astronomical discovery is now on an ATOM-worthy trajectory, such that it merited an entire chapter in the ATOM publication.  

Here at The Futurist, we have been examining telescopic progress for over a decade.  In September 2006, I estimated that telescope power was rising at a compounding rate of 26%/year, and that this trend had been ongoing for decades.  26%/year happens to be the square root of Moore's Law, which is precisely what is to be expected: to double resolution by halving the size of a pixel, each pixel has to be divided into four.  This is also why video game and CGI resolution rises at 26%/year.  
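That square-root relationship can be checked numerically (a quick sketch, assuming the usual 18-month doubling period for Moore's Law price-performance):

```python
# Moore's Law: price-performance doubles every 18 months (1.5 years).
annual_moore = 2 ** (1 / 1.5)            # ≈ 1.587, i.e. ~59%/year
# Doubling linear resolution requires 4x the pixels, so resolution
# advances at the square root of the Moore's Law rate.
annual_resolution = annual_moore ** 0.5  # ≈ 1.26
print(round(annual_resolution - 1, 2))   # → 0.26, i.e. 26%/year
```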

Rising telescope resolution enabled the first exoplanet to be discovered in 1995, and then a steady stream after 2005.  This estimated rate led me to correctly predict that the first Earth-like planets would be discovered in 2010-11, and that happened right on schedule.  But as with many such thresholds, after the initial fanfare, the new status quo sets in and people forget what life was like before.  This leads to a continuous underestimation of the rate of change by the average person.

Then, in May 2009, I published one of the most important articles ever written on The Futurist: SETI and the Singularity.  At that time, only 347 exoplanets were known, almost all of which were gas giants much larger than the Earth.  That number has grown to 3693 today, over ten times as many.  Note how we see the familiar exponential curve inherent to every aspect of the ATOM.  Now, even finding Earth-like planets in the 'life zone' is no longer remarkable, which illustrates another aspect of human psychology towards the ATOM: a highly anticipated and wondrous advance quickly becomes a normalized status quo, and most people forget all the previous excitement.   

The rate of discovery may soon accelerate further as key process components collapse in cost.  Recent computer vision algorithms have proven to be millions of times faster than human examiners.  A large part of the cost of exoplanet-discovery instruments like the Kepler Space Observatory is the 12-18 month manual analysis period.  If computer vision can perform this task in seconds, the cost of comparable future projects plummets, and new exoplanets can be confirmed almost immediately rather than every other year.  This is another massive ATOM productivity jump that removes a major bottleneck in an existing process.  A new mission like Kepler would cost dramatically less than the previous one, and would be able to publish results far more rapidly.  

Given the 26%/year trendline, the future of telescopic discovery becomes easier to predict.  In the same article, I made a dramatic prediction about SETI and the prospects of finding extraterrestrial intelligence.  Many 'enlightened' people are certain that there are numerous extraterrestrial civilizations.  While I too believed this for years (from age 6 to about 35), as I studied the accelerating rate of change, I began to notice that, within the context of the Drake equation, any civilization even slightly more advanced than us would be dramatically more advanced.  While such a civilization's current activities might well be indistinguishable from nature to us, its past activities might still be visible as evidence of its existence at that earlier time.  This led me to realize that while there could very well be thousands of planets in our own galaxy that are slightly less advanced than us, it becomes increasingly difficult for there to be one more advanced than us that still manages to avoid detection.  Other galaxies are a different story, simply because the distance between galaxies is itself 10-20 times the diameter of the typical galaxy.  Our telescopic capacity is rising 26%/year, after all, and the final variable of the Drake equation, fL, has risen from just 42 years at the time of Carl Sagan's famous clip in 1980 to 79 years now, or almost twice as long.  

Hence, the prediction I set in 2009 about the 2030 deadline (21 years away at the time) can be re-affirmed, as that deadline is now only 13 years away.  

2030

Despite the enormous size of our galaxy and the wide range of signals that may exist, even this is eventually superseded by exponential detection capabilities.  At least our half of the galaxy will have received a substantial examination of signal traces by 2030.  While a deadline 13 years away seems near, remember that the extent of examination that happens from 2017 to 2030 will exceed that of all the 400+ years since Galileo, for Moore's Law reasons alone.  The jury is out until then.  

(all images from Wikipedia or Wikimedia).  

 

Related Articles :

New Telescopes to Reveal Untold Wonders

SETI and the Singularity

Telescope Power - Yet Another Accelerating Technology

 

Related ATOM Chapters :

12. The ATOM's Effect on the Final Frontier

  

 

November 20, 2017 in Accelerating Change, ATOM AotM, Space Exploration, The Singularity | Permalink | Comments (122)


Recent TV Appearances for The ATOM

I have recently appeared on a couple of television programs.  The first was Reference Point with Dave Kocharhook, a two-part Q&A about The ATOM.


The next was FutureTalk TV with Martin Wasserman, which included a 10-minute Q&A about The ATOM.

Inch-by-inch, we will get there.  The world does not have to settle for our current substandard status quo.

As always, all media coverage is available here.  

 

 

June 05, 2017 in Accelerating Change, Artificial Intelligence, Economics, Technology, The ATOM, The Singularity | Permalink | Comments (24)


Google Talk on the ATOM

Kartik Gada gave a Google Talk about the ATOM:  

 

December 26, 2016 in Accelerating Change, Artificial Intelligence, Economics, Technology, The ATOM, The Singularity | Permalink | Comments (25)


The TechnoSponge

After years of thinking about it, I have come up with a term to describe the new, 'good' type of deflation that is evading the notice of almost all of the top economists in the world today.  It changes many of the most fundamental assumptions of economics, even as most economic thought remains far behind the curve. 

First, let us review some events that transpired over the last two years.  To stave off the prospect of a deflationary spiral that could lead to a depression, the major governments of the world followed 20th-century textbook economics and injected colossal amounts of liquidity into the financial system.  In the US, not only was the Fed Funds rate lowered to nearly zero (for 18 months now and counting), but an additional $1 Trillion was injected as well. 

However, now that a depression has been averted and the recession has ended, we were supposed to experience inflation even amidst high unemployment, just as we did in the 1970s, which would at least minimize debt burdens.  But alas, there is still no inflation, despite a yield curve with more than 3% of steepness and a near-0% FF rate for so long.  How can this be?  What is absorbing all the liquidity?   

In The Impact of Computing, I discussed how 1.5% of World GDP today consists of products where the same functionality can be purchased for a price that halves every 18 months.  'Moore's Law' applies to semiconductors, but storage, software, and some biotech are on a similar exponential curve.  This force makes productivity gains higher, and inflation lower, than traditional 20th-century economics would anticipate.  Furthermore, the second derivative is also positive: the rate of productivity gains is itself accelerating.  1.5% of World GDP may be small, but what about when this percentage grows to 3% of World GDP?  5%?  We may be only a decade away from this, at which point the impact of this technological deflation will be far more obvious. 

Most high-tech companies have a business model that incorporates a sort of 'bizarro force', completely the opposite of what old-economy companies operate under: the price of the products a high-tech company sells decreases over time.  Any other company manages inventory, pricing, and forecasts under an assumption of inflationary price increases, but a technology company exists under the reality that all inventory depreciates very quickly (at over 10% per quarter in many cases), and that price drops will shrink revenues unless unit sales rise enough to offset them (assuming enough unit inventory was even produced).  This results in constant pressure to create new and improved products every few months just to occupy prime price points, without which revenues would plunge within a year.  Yet high-tech companies have built hugely profitable businesses around these peculiar challenges, and at least 8 such US companies have market capitalizations over $100 Billion.  6 of those 8 are headquartered in Silicon Valley. 

Now, here is the point to ponder: we have never had a significant technology sector while also facing fears (warranted or otherwise) of high inflation.  When high inflation vanished in 1982, the technology sector was too tiny to be a significant contributor to macroeconomic statistics.  In an environment of high inflation combined with a large technology industry, however, major consumer retail price points, such as $99.99 or $199.99, become more affordable in real terms.  The same applies to enterprise-class customers.  Thus, demand creeps upward even as the cost to produce the products falls along the same Impact of Computing curve.  This gives a technology company the ability to postpone price drops and expand margins, or to sell more volume at the same nominal dollar price.  Hence, higher inflation causes the revenues and/or margins of technology companies to rise, which means their earnings per share surge.

So what we are seeing is that the gigantic amount of liquidity created by the Federal Reserve is instead cycling through technology companies and increasing their earnings.  The products they sell, in turn, increase productivity and promptly push inflation back down.  Every uptick in inflation merely guarantees its own pushback, and the 1.5% of GDP that mops up all the liquidity and creates this form of 'good' deflation can be termed the 'TechnoSponge'.  So how much liquidity can the TechnoSponge absorb before saturation? 

At this point, if the US prints another $1 Trillion, that will merely halt deflation, with no hint of inflation at all.  It would take a full $2 Trillion to saturate the TechnoSponge and temporarily push consumer inflation even to the less-than-terrifying level of 4%, while also generating substantial jumps in productivity and tech-company earnings.  In fact, the demographics of the US, with baby boomers reaching their geriatric years, are highly deflationary (and this is the bad type of deflation), so the US would have to print another $1 Trillion every year for the next 10 years just to offset demographic deflation and keep the TechnoSponge saturated. 

A TechnoSponge that is 1.5% of GDP might be keeping CPI inflation under 2%, but when the TechnoSponge is 3% of GDP, even trillions of dollars of liquidity won't halt deflation.  Deflation may become normal, even as living standards and productivity rise at ever-increasing rates.  The people who will suffer are holders of debt, particularly mortgage debt.  Inflating away debt will no longer be a tool available to rescue people (and governments) from their errors.  The biggest beneficiaries will be technology companies, and those who are tied to them. 

But to keep prosperity rising, productivity has to rise at the maximum possible rate.  This requires the TechnoSponge to be kept full at all times - the 'new normal'.  Thus, the printing press has to start on the first $1 Trillion now, and printing has to continue until we see inflation.  Economists will be surprised at how much can be printed without producing any inflation, and will be unable to draw the connection as to why the printed money is boosting productivity. 

Related :

The Impact of Computing

Timing the Singularity

 

 

July 01, 2010 in Accelerating Change, Computing, Economics, Technology, The Singularity | Permalink | Comments (104)


Timing the Singularity

(See the 10-year update here.)  The Singularity: the event when the rate of technological change becomes human-surpassing, just as the advent of human civilization a few millennia ago surpassed the comprehension of non-human creatures.  So when will this event happen?

There is a great deal of speculation on the 'what' of the Singularity: whether it will create a utopia for humans, cause the extinction of humans, or some outcome in between.  Versions of optimism (Star Trek) and pessimism (The Matrix, Terminator) each become fashionable at some point.  No one can predict this reliably, because the very definition of the Singularity precludes such prediction.  Given the accelerating nature of technological change, it is just as hard to predict the world of 2050 from 2009 as it would have been to predict 2009 from, say, 1200 AD.  So our topic today is not the 'what', but rather the 'when' of the Singularity. 

Let us take a few independent methods to arrive at estimations on the timing of the Singularity.

1) Ray Kurzweil has constructed this logarithmic chart that combines 15 unrelated lists of key historic events since the Big Bang 15 billion years ago.  The exact selection of events is less important than the undeniable fact that the intervals between such independently selected events are shrinking exponentially.  This, of course, means that the next several major events will occur within single human lifetimes. 

(Chart: 'Paradigm Shifts for 15 Lists of Key Events')

Kurzweil wrote with great confidence, in 2005, that the Singularity would arrive in 2045.  One thing I find about Kurzweil is that he usually predicts the nature of an event very accurately, but overestimates the rate of progress by 50%.  Part of this is because he insists that computer power per dollar doubles every year, when it actually doubles every 18 months, which distorts every other date he predicts as a downstream byproduct of this figure.  Another part is that Kurzweil, born in 1948, is famously taking extreme measures to extend his lifespan, and quite possibly expects to live until 100 but not necessarily beyond that.  A Singularity in 2045 would fall before his century mark, and herein lies a lesson for us all: those who have a positive expectation of what the Singularity will bring tend to have a subconscious bias towards estimating it to happen within their expected lifetimes.  We have to be watchful enough not to let this bias influence us.  So when Kurzweil says the Singularity will be 40 years from 2005, we can apply the discount and estimate that it will be 60 years from 2005, or in 2065. 
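The 50% discount works out as follows (a minimal sketch of the arithmetic in the paragraph above; the 12-month versus 18-month doubling periods are the figures stated there):

```python
# Kurzweil assumes price-performance doubles every 12 months; the observed
# period is ~18 months, so his timelines stretch by a factor of 18/12 = 1.5.
kurzweil_horizon = 2045 - 2005               # his 40-year horizon from 2005
adjusted = 2005 + kurzweil_horizon * 18 // 12
print(adjusted)  # → 2065
```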

2) John Smart is a brilliant futurist with a distinctly different view of accelerating change from Ray Kurzweil, though he has produced very little visible new content in the last 5 years.  In 2003, he predicted the Singularity for 2060, ± 20 years.  Others, like Hans Moravec and Vernor Vinge, have also declared predictions in the mid-to-late 21st century. 

3) Ever since the start of the fictional Star Trek franchise in 1966, it has made a number of predictions about the decades since, with impressive accuracy.  In Star Trek canon, humanity experiences a major acceleration of progress starting in 2063, upon first contact with an extraterrestrial civilization.  While my views on first contact are somewhat different from the Star Trek prediction, it is interesting to note that their version of a 'Singularity' happens to occur in 2063 (as per the 1996 film Star Trek: First Contact). 

4) Now for my own methodology.  We shall first take a look at an 1863 novel by Jules Verne, titled "Paris in the 20th Century".  Set about a century in the future from Verne's perspective, the novel predicts innovations such as air conditioning, automobiles, helicopters, fax machines, and skyscrapers in detail.  Such accuracy makes Jules Verne the greatest futurist of the 19th century, but notice how his predictions involve innovations that occurred within 120 years of writing.  Verne did not predict exponential growth in computation, genomics, artificial intelligence, cellular phones, or other innovations that emerged more than 120 years after 1863.  Thus, Jules Verne was up against a 'prediction wall' of 120 years, which was much longer than a human lifespan in the 19th century. 

But now, the wall is closer.  In the 3.5 years since the inception of The Futurist, I have consistently noticed a 'prediction wall' on all long-term forecasts that makes it very difficult to make specific predictions beyond 2040 or so.  In contrast, it was not very hard to predict the state of technology in 1930 from the year 1900, just 30 years prior.  Despite all the inventions between 1900 and 1930, the diffusion rate was very slow, and it took well over 30 years for many innovations to reach the majority of the population.  The diffusion rate of innovation is much faster today, and the pervasive Impact of Computing is impossible to ignore.  This 'event horizon' does not mean the Singularity will arrive as soon as 2040, as the final couple of decades before the Singularity may still be too fast to predict until we get much closer.  But the compression of this wall from 120 years in Jules Verne's time to 30 years today gives us some idea of the second derivative in the rate of change, and many other top futurists have observed the same phenomenon.  By 2030, the prediction wall may be only 15 years away.  By the time of the Singularity, the wall would be almost immediately ahead from a human perspective. 

So we return to the Impact of Computing as a driver of the 21st-century economy.  In that article, I wrote about how roughly $700 Billion per year as of 2008, or 1.5% of World GDP, consists of products that improve at an average of 59% per year per dollar spent.  Moore's Law is a subset of this, but the same cost deflation applies to storage, software, biotechnology, and a few other industries as well. 
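The 59%/year figure is simply the familiar 18-month doubling of price-performance restated as an annual rate, which is easy to check (no assumptions beyond the figures in the paragraph above):

```python
# Doubling every 18 months (1.5 years), expressed as an annual gain:
annual_gain = 2 ** (1 / 1.5) - 1
print(round(annual_gain * 100))  # → 59 (percent per year per dollar spent)

# Equivalently, compounding 59%/year over 1.5 years yields a doubling:
print(round(1.59 ** 1.5, 2))     # → 2.0
```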

If products tied to the Impact of Computing are 1.5% of the global economy today, what happens when they are 3%? 5%?  Perhaps we reach a Singularity when such products are 50% of the global economy, because from that point forward, the other 50% would very quickly diminish into a tiny percentage of the economy, particularly if that first 50% were occupied by human-surpassing artificial intelligence.   

We can thus calculate a range of dates by which products tied to the Impact of Computing become more than half of the world economy.  In the table, the columns signify whether one assumes that 1%, 1.5%, or 2% of the world economy is currently tied to it, and the rows signify the rate at which this share of the economy is increasing: 6%, 7%, or 8% a year.  This range is derived from the fact that the semiconductor industry has a 12-14% nominal growth trend, while nominal world GDP grows at 6-7% (some of which is inflation).  Another way of reading the table is that if you consider the Impact of Computing to affect 1% of world GDP, but that share grows by 8% a year, then that 1% will cross the 50% threshold in 2059.  Note how a substantial downward revision in the assumptions moves the date outward only by years, rather than decades or centuries. 
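The arithmetic behind such a table is simple compound growth: solve for the year in which the assumed starting share, compounding at the assumed rate, exceeds 50%.  A minimal sketch (the function name is mine, and it assumes the share compounds from the 2008 baseline):

```python
import math

def crossing_year(initial_share, share_growth, start_year=2008, threshold=0.5):
    """First year in which a sector whose GDP share compounds at a fixed
    annual rate exceeds the threshold: solves share*(1+g)**t >= threshold."""
    t = math.log(threshold / initial_share) / math.log(1 + share_growth)
    return start_year + math.ceil(t)

# Columns: assumed current share (1%, 1.5%, 2%).
# Rows: annual growth rate of that share (6%, 7%, 8%).
for g in (0.06, 0.07, 0.08):
    print(f"{g:.0%}:", [crossing_year(s, g) for s in (0.01, 0.015, 0.02)])
```

With a 1% share growing 8% a year, the crossing comes in 2059, matching the reading given above; even doubling or halving the assumed starting share shifts the date by only about a decade.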

We see these parameters deliver a series of years, with the median values arriving at around the same dates as the aforementioned estimates.  Taking all of these points in combination, we can predict the timing of the Singularity.  I hereby predict that the Technological Singularity will occur in :

 

2060-65 ± 10 years

 

Thus, the earliest it can occur is 2050 (hence the URL of this site), and the latest is 2075, with the highest probability of occurrence in 2060-65.  There is virtually no statistical probability that it can occur outside of the 2050-75 range. 

So now we know the 'when' of the Singularity.  We just don't know the 'what', nor can we with any certainty. 

Related :

The Impact of Computing

Are You Acceleration Aware?

Pre-Singularity Abundance Milestones

SETI and the Singularity

August 20, 2009 in Accelerating Change, Core Articles, The Singularity | Permalink | Comments (69)


The Next Big Thing in Entertainment, A Half-Time Update

On April 1, 2006, I wrote a detailed article on the revolutionary changes that were to occur in the concept of home entertainment by 2012 (see Part I and Part II of the article).  Now, in 2009, half of the time within the six-year span between the original article and the prediction has elapsed.  Of course, given the exponential nature of progress, much more happens within the second half of any prediction horizon relative to the first half. 

The prediction issued in 2006 was:

Video Gaming (which will no longer be called this) will become a form of entertainment so widely and deeply partaken in that it will reduce the time spent on watching network television to half of what it is (in 2006), by 2012.

The basis of the prediction was detailed in various points from the original article, which in combination would lead to the outcome of the prediction.  The progress as of 2009 around these points is as follows :

1) Video game graphics continue to improve : Note the progress of graphics at 10-year intervals starting from 1976.  Projecting the same trend, 2012 will feature many games with graphics that rival those of CGI films, whose own progress can be charted by comparing Pixar's 'Toy Story' from 1995 to 'Up' from 2009.  See this demonstration from the 2009 game 'Heavy Rain', which arguably exceeds the graphical quality of many CGI films from the 1990s.   

The number of polygons per square inch on the screen is a metric closely tied to The Impact of Computing, and can only rise steadily.  The 'uncanny valley' is a hurdle that designers and animators will take a couple of years to overcome, but overcoming this barrier is inevitable as well. 

2) Flat-screen HDTVs reach commodity prices : This has already happened, and prices will continue to drop so that by 2012, 50-inch sets with high resolution will be under $1000.  A thin television is important, as it clears the room to allow more space for the movement of the player.  A large size and high resolution are equally important, in order to create an immersive visual experience. 

We are rapidly trending towards LED and Organic LED (OLED) technologies that will enable TVs to be less than one centimeter thick, with ultra-high resolution. 

3) Speech and motion recognition as control technologies : When the original article was written on April 1, 2006, the Nintendo Wii was not yet available in the market.  But as of June 2009, 50 million units of the Wii have sold, and many of these customers did not own any game console prior to the Wii. 

Traditional handheld controllers are very limited as control interfaces, despite having been used by hundreds of millions of people for three decades.  If the interaction that a user can have with a game is more natural, the game becomes more immersive to the human senses.  See this demonstration from Microsoft for their 'Project Natal' interface technology, due for release in 2010. 

Furthermore, haptic technologies have made great strides, as seen in the demonstration videos over here.  Needless to say, the possibilities are vast. 

4) More people are migrating away from television, and towards games :  Television viewership is plummeting, particularly among the under-50 audience, as projected in the original 2006 article.  Fewer and fewer television programs of any quality are being produced, as creative talent continues to leak out of television network studios.  At the same time, World of Warcraft has 11 million subscribers, and as previously mentioned, the Wii has 50 million units in circulation. 

There are only so many hours of leisure available in a day, and Internet surfing, movies, and video games are all more compelling than the ever-declining quality of television offerings.  Children have already moved away from television, and the trend will creep up the age scale.

5) Some people can earn money through games : There is an increasing number of ways in which avid players can earn real money from activities within a game.  From trading of items to selling of characters, this market was estimated at over $1 billion in 2008, and is growing.  Highly skilled players already earn thousands of dollars per year this way, and with more participants joining through the more advanced VR experiences described above, this will attract a group of people who are able to earn a full-time living through these VR worlds.  This will become a viable form of entrepreneurship, just as eBay and Google Ads support entrepreneurial ecosystems today. 

Taking all 5 of these points in combination, the original 2006 prediction appears to be on track.  By 2012, hours spent on television will be half of what they were in 2006, with sports and major live events being the only forms of programming that retain their audience. 

Overall, the prediction seems to be well on track.  Disruptive technologies are in the pipeline, and there is plenty of time for each of these technologies to combine into unprecedented new applications.  Let us see what the second half of the time interval, between now and 2012, delivers. 

July 19, 2009 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (20)


SETI and the Singularity

The Search for Extra-Terrestrial Intelligence (SETI) seeks to answer one of the most basic questions of human identity - whether we are alone in the universe, or merely one civilization among many.  It is perhaps the biggest question that any human can ponder. 

The Drake Equation, created by astronomer Frank Drake in 1960, calculates the number of advanced extra-terrestrial civilizations in the Milky Way galaxy in existence at this time.  Watch this 8-minute clip of Carl Sagan in 1980 walking the audience through the parameters of the Drake Equation.  The Drake equation manages to educate people on the deductive steps needed to understand the basic probability of finding another civilization in the galaxy, but as the final result varies so greatly based on even slight adjustments to the parameters, it is hard to make a strong argument for or against the existence of extra-terrestrial intelligence via the Drake equation.  The most speculative parameter is the last one, fL, which is an estimation of the total lifespan of an advanced civilization.  Again, this video clip is from 1980, and thus only 42 years after the advent of radio astronomy in 1938.  Another 29 years, or 70%, have since been added to the age of our radio-astronomy capabilities, and the prospect of nuclear annihilation of our civilization is far lower today than it was in 1980.  No matter how ambitious or conservative a stance you take on the other parameters, the value of fL in terms of our own civilization continues to rise.  This leads us to our first postulate :

The expected lifespan of an intelligent civilization is rising.       
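The Drake equation itself is just a product of seven factors, so its sensitivity to fL is easy to demonstrate.  A minimal sketch; the parameter values below are illustrative placeholders, not figures from the clip:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* . fp . ne . fl . fi . fc . L: number of communicating
    civilizations in the galaxy at this moment."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hold every parameter fixed and vary only fL (civilization lifespan in years):
long_lived = drake(10, 0.5, 2, 0.5, 0.1, 0.1, 1_000_000)   # ~50,000 civilizations
short_lived = drake(10, 0.5, 2, 0.5, 0.1, 0.1, 500)        # ~25 civilizations
```

The answer swings by orders of magnitude on fL alone, which is exactly why the equation argues neither side conclusively.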

Carl Sagan himself believed that in such a vast cosmos, intelligent life would have to emerge in multiple locations, and that the cosmos was thus 'brimming over' with intelligent life.  On the other side are various explanations for why intelligent life will be rare.  The Rare Earth Hypothesis argues that the combination of conditions that enabled life to emerge on Earth is extremely rare.  The Fermi Paradox, originating back in 1950, questions the contradiction between the supposedly high incidence of intelligent life and the continued lack of evidence of it.  The Great Filter theory suggests that many intelligent civilizations self-destruct at some point, explaining their apparent scarcity.  This leads to the conclusion that the easier it is for civilization to advance to our present stage, the bleaker our prospects for long-term survival, since the 'filter' that halted other civilizations may still lie ahead of us.  A contrarian case can thus be made that the longer we go without detecting another civilization, the better. 

But one dimension that is conspicuously absent from all of these theories is an accounting for the accelerating rate of change.  I have previously provided evidence that telescopic power is also an accelerating technology.  After the invention of the telescope by Galileo in 1609, major discoveries used to be several decades apart, but now are only separated by years.  An extrapolation of various discoveries enabled me to crudely estimate that our observational power is currently rising at 26% per year, even though the first 300 years after the invention of the telescope only saw an improvement of 1% a year.  At the time of the 1980 Cosmos television series, it was not remotely possible to confirm the existence of any extrasolar planet or to resolve any star aside from the sun into a disk.  Yet, both were accomplished by the mid-1990s.  As of May 2009, we have now confirmed a total of 347 extrasolar planets, with the rate of discovery rising quickly.  While the first confirmation was not until 1995, we now are discovering new planets at a rate of 1 per week.  With a number of new telescope programs being launched, this rate will rise further still.  Furthermore, most of the planets we have found so far are large.  Soon, we will be able to detect planets much smaller in size, including Earth-sized planets.  This leads us to our second postulate :

Telescopic power is rising quickly, possibly at 26% a year.  
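A 26% annual rate implies roughly a tenfold gain in observational power per decade, whereas the historical 1% rate needed over two centuries for the same gain.  A quick check (the function name is mine):

```python
import math

def years_for_gain(gain, annual_rate):
    """Years needed for a capability compounding at annual_rate to multiply by `gain`."""
    return math.log(gain) / math.log(1 + annual_rate)

modern = years_for_gain(10, 0.26)      # ~10 years for a 10x gain at 26%/year
historical = years_for_gain(10, 0.01)  # ~231 years for the same gain at 1%/year
```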

This Jet Propulsion Laboratory chart of exoplanet discoveries through 2004 is very overdue for an update, but is still instructive.  The x-axis is the distance of the planet from the star, and the y-axis is the mass of the planet.  All blue, red, and yellow dots are exoplanets, while the larger circles with letters in them are our own local planets, with the 'E' being Earth.  Most exoplanet discoveries up to that time were of Jupiter-sized planets that were closer to their stars than Jupiter is to the sun.  The green zone, or 'life zone', is the area within which a planet is a candidate to support life within our current understanding of what life is.  Even then, this chart does not capture the full possibilities for life, as a gas giant like Jupiter or Saturn, at the correct distance from a Sun-type star, might have rocky satellites that would thus also be in the life zone.  In other words, if Saturn were as close to the Sun as Earth is, Titan would also be in the life zone, and thus the green area should extend vertically higher to capture the possibility of such large satellites of gas giants.  The chart shows that telescopes commissioned in the near future will enable the detection of planets in the life zone.  If this chart were updated, a few such planets would already be recorded here.  Some of the missions and telescopes that will soon be sending over a torrent of new discoveries are :

Kepler Mission : Launched in March 2009, the Kepler Mission will continuously monitor a field of 100,000 stars for the transit of planets in front of them.  This method has a far higher chance of detecting Earth-sized planets than prior methods, and we will see many discovered by 2010-11.

COROT : This European mission was launched in December 2006, and uses a similar method as the Kepler Mission, but is not as powerful.  COROT has discovered a handful of planets thus far. 

New Worlds Mission : This 2013 mission will build a large sunflower-shaped occulter in space to block the light of nearby stars to aid the observation of extrasolar planets.  A large number of planets close to their stars will become visible through this method. 

Allen Telescope Array : Funded by Microsoft co-founder Paul Allen, the ATA will survey 1,000,000 stars for radio astronomy evidence of intelligent life.  The ATA is sensitive enough to discover a large radio telescope such as the Arecibo Observatory up to a distance of 1000 light years.  Many of the ATA components are electronics that decline in price in accordance with Moore's Law, which will subsequently lead to the development of the..... 

Square Kilometer Array : Far larger and more powerful than the Allen Telescope Array, the SKA will be in full operation by 2020, and will be the most sensitive radio telescope ever.  The continual decline in the price of processing technology will enable the SKA to scour the sky thousands of times faster than existing radio telescopes. 

These are merely the missions that are already under development or even under operation.  Several others are in the conceptual phase, and could be launched within the next 15 years.  So many methods of observation used at once, combined with the cost improvements of Moore's Law, leads us to our third postulate, which few would have agreed with at the time of 'Cosmos' in 1980 :

Thousands of planets in the 'life zone' will be confirmed by 2025. 

Now, we will revisit the under-discussed factor of accelerating change.  Out of 4.5 billion years of Earth's existence, it has only hosted a civilization capable of radio astronomy for 71 years. But as our own technology is advancing on a multitude of fronts, through the accelerating rate of change and the Impact of Computing, each year, the power of our telescopes increases and the signals of intelligence (radio and TV) emitted from Earth move out one more light year.  Thus, the probability for us to detect someone, and for us to be detected by them, however small, is now rising quickly.  Our civilization gained far more in both detectability, and detection-capability, in the 30 years between 1980 and 2010, relative to the 30 years between 1610 and 1640, when Galileo was persecuted for his discoveries and support of heliocentrism, and certainly relative to the 30 years between 70,000,030 and 70,000,000 BC, when no advanced civilization existed on Earth, and the dominant life form was Tyrannosaurus. 

Nikolai Kardashev has devised a scale to measure the level of advancement that a technological civilization has achieved, based on their energy technology.  This simple scale can be summarized as follows :

Type I : A civilization capable of harnessing all the energy available on their planet.

Type II : A civilization capable of harnessing all the energy available from their star.

Type III : A civilization capable of harnessing all the energy available in their galaxy.

The scale is logarithmic, and our civilization currently would receive a Kardashev score of 0.72.  We could potentially achieve full Type I status by the mid-21st century due to a technological singularity.  Some have estimated that our exponential growth could elevate us to Type II status by the late 22nd century.  
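The fractional score comes from Carl Sagan's logarithmic interpolation of the Kardashev scale, K = (log10 P − 6) / 10, with P the harnessed power in watts.  A minimal sketch (under this formula, a score of 0.72 corresponds to roughly 1.6 × 10^13 watts):

```python
import math

def kardashev(power_watts):
    """Sagan's continuous interpolation of the Kardashev scale:
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

current = kardashev(1.6e13)   # ~0.72, our civilization's approximate score
type_one = kardashev(1e16)    # exactly 1.0, full planetary energy budget
```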

This has given rise to another faction in the speculative debate on extra-terrestrial intelligence, a view held by Ray Kurzweil, among others.  The theory is that it takes such a short time (a few hundred years) for a civilization to go from the earliest mechanical technology to reach a technological singularity where artificial intelligence saturates surrounding matter, relative to the lifetime of the home planet (a few billion years), that we are the first civilization to come this far.  Given the rate of advancement, a civilization would have to be just 100 years ahead of us to be so advanced that they would be easy to detect within 100 light years, despite 100 years being such a short fraction of a planet's life.  In other words, where a 19th century Earth would be undetectable to us today, an Earth of the 22nd century would be extremely conspicuous to us from 100 light years away, emitting countless signals across a variety of mediums. 

A Type I civilization within 100 light years would be readily detected by our instruments today.  A Type II civilization within 1000 light years will be visible to the Allen or the Square Kilometer Array.  A Type III would be the only type of civilization that we probably could not detect, as we might have already been within one all along.  We do not have a way of knowing if the current structure of the Milky Way galaxy is artificially designed by a Type III civilization.  Thus, the fourth and final postulate becomes :

A civilization slightly more advanced than us will soon be easy for us to detect.

The Carl Sagan view of plentiful advanced civilizations is the generally accepted wisdom, and a view that I held for a long time.  On the other hand, the Kurzweil view is understood by very few, for even in the SETI community, not that many participants are truly acceleration aware.  The accelerating nature of progress, which existed long before humans even evolved, as shown in Carl Sagan's cosmic calendar concept, also from the 1980 'Cosmos' series, simply has to be considered as one of the most critical forces in any estimation of extra-terrestrial life.  I have not yet migrated fully to the Kurzweil view, but let us list our four postulates out all at once :

The expected lifespan of an intelligent civilization is rising.  

Telescopic power is rising quickly, possibly at 26% a year. 

Thousands of planets in the 'life zone' will be confirmed by 2025. 

A civilization slightly more advanced than us will soon be easy for us to detect.

As the Impact of Computing will raise computational power roughly 16,000X between 2009 and 2030, and our radio-astronomy experience will be 92 years old by 2030, there are simply too many forces increasing our probability of finding a civilization if one does indeed exist nearby.  It is one thing to know of no extrasolar planets and no civilizations.  It is quite another to know about thousands of planets, yet still not detect any civilizations after years of searching.  This would greatly strengthen the case against the existence of such civilizations, and the case would grow stronger by the year.  Thus, these four postulates in combination lead me to conclude that :

By 2030, we will either have detected an extra-terrestrial civilization, or have effectively ruled out the existence of one near enough to detect.


Most of the 'realistic' science fiction regarding first contact with another extra-terrestrial civilization portrays that civilization being domiciled relatively nearby.  In Carl Sagan's 'Contact', the civilization was from the Vega star system, just 26 light years away.  In the film 'Star Trek : First Contact', humans come in contact with Vulcans in 2063, but the Vulcan homeworld is also just 16 light years from Earth.  The possibility of any civilization this near to us would be effectively ruled out by 2030 if we do not find any favorable evidence.  SETI should still be given the highest priority, of course, as the lack of a discovery is just as important as making a discovery of extra-terrestrial intelligence. 

If we do detect evidence of an extra-terrestrial civilization, everything about life on Earth will change.  Both 'Contact' and 'Star Trek : First Contact' depicted how an unprecedented wave of human unity swept across the globe upon evidence that humans were, after all, one intelligent species among many.  In Star Trek, this led to what essentially became a techno-economic singularity for the human race.  As shown in 'Contact', many of the world's religions were turned upside down upon this discovery, and had to revise their doctrines accordingly.  Various new cults devoted to the worship of the new civilization formed almost immediately. 

If, however, we are alone, then according to many Singularitarians, we will be the ones to determine the destiny of the cosmos.  After a technological singularity in the mid-21st century that merges our biology with our technology, we would proceed to convert all matter into artificial intelligence, make use of all the elementary particles in our vicinity, and expand outward at speeds that eventually exceed the speed of light, ultimately saturating the entire universe with our intelligence in just a few centuries.  That, however, is a topic for another day.   

May 23, 2009 in Accelerating Change, Core Articles, Space Exploration, The Singularity | Permalink | Comments (28) | TrackBack (0)


The Impact of Computing : 78% More per Year, v2.0

Anyone who follows technology is familiar with Moore's Law and its many variations, and has come to expect the price of computing power to halve every 18 months.  But many people don't see the true long-term impact of this beyond the need to upgrade their computer every three or four years.  To not internalize this more deeply is to miss financial opportunities, grossly mispredict the future, and be utterly unprepared for massive, sweeping changes to human society.  Hence, it is time to update the first version of this all-important article that was written on February 21, 2006.

Today, we will introduce another layer to the concept of Moore's Law-type exponential improvement. Consider that on top of the 18-month doubling times of both computational power and storage capacity (an annual improvement rate of 59%), both of these industries have grown by an average of approximately 12% a year for the last fifty years. Individual years have ranged between +30% and -12%, but let us say that the trend growth of both industries is 12% a year for the next couple of decades.

So, we can conclude that a dollar gets 59% more power each year, and 12% more dollars are absorbed by such exponentially growing technology each year. If we combine the two growth rates to estimate the rate of technology diffusion simultaneously with exponential improvement, we get (1.59)(1.12) = 1.78

The Impact of Computing grows at a scorching pace of 78% a year.
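Expressed as code, the compounding is a one-liner, and the long-horizon consequence is the point (the variable names are mine):

```python
power_per_dollar = 1.59   # 59% more computing per dollar each year
dollar_growth = 1.12      # 12% more dollars absorbed by such technology each year
impact = power_per_dollar * dollar_growth   # 1.78, i.e. 78% more impact per year
decade_multiple = impact ** 10              # roughly 320x more impact per decade
```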

Sure, this is a very imperfect method of measuring technology diffusion, but many visible examples of this surging wave present themselves.  Consider the most popular television shows of the 1970s, where the characters had all the household furnishings and electrical appliances that are common today, except for anything with computational capacity. Yet, economic growth has averaged 3.5% a year since that time, nearly doubling the standard of living in the United States since 1970. It is obvious what has changed during this period, to induce the economic gains.

We can take the concept even closer to the present.  Among 1990s sitcoms, how many plot devices would no longer exist in the age of cellphones and Google Maps?  Consider the episode of Seinfeld entirely devoted to the characters not being able to find their car, or each other, in a parking structure (1991).  Or this legendary bit from a 1991 episode in a Chinese restaurant.  These situations are simply obsolete in the era of cellphones.  This situation (1996) would be obsolete in the era of digital cameras, while the 'Breakfast at Tiffany's' situation would be obsolete in an era of Netflix and YouTube. 

In the 1970s, there was virtually no household product with a semiconductor component.  In the 1980s, many people bought basic game consoles like the Atari 2600, had digital calculators, and purchased their first VCR, but only a fraction of the VCR's internals, maybe 20%, consisted of exponentially deflating semiconductors, so VCR prices did not drop that much per year.  In the early 1990s, many people began to have home PCs.  For the first time, a major, essential home device was pegged to the curve of 18-month halvings in cost per unit of power.  In the late 1990s, the PC was joined by the Internet connection and the DVD player. 

Now, I want everyone reading this to tally up all the items in their home that qualify as 'Impact of Computing' devices, which is any hardware device where a much more powerful/capacious version will be available for the same price in 2 years.  You will be surprised at how many devices you now own that did not exist in the 80s or even the 90s.

Include : Actively used PCs, LCD/Plasma TVs and monitors, DVD players, game consoles, digital cameras, digital picture frames, home networking devices, laser printers, webcams, TiVos, Slingboxes, Kindles, robotic toys, every mobile phone, every iPod, and every USB flash drive.  Count each car as 1 node, even though modern cars may have $4000 of electronics in them.

Do not include : Tube TVs, VCRs, film cameras, individual video games or DVDs, or your washer/dryer/oven/clock radio just for having a digital display, as the product is not improving dramatically each year. 


If this doesn't persuade people of the exponentially accelerating penetration of information technology, then nothing can.

To summarize, the number of devices in an average home that are on this curve, by decade :

1960s and earlier : 0

1970s : 0-1

1980s : 1-2

1990s : 3-4

2000s : 6-12

2010s : 15-30

2020s : 40-80

The average home of 2020 will have multiple ultrathin TVs hung like paintings, robots for a variety of simple chores, VR-ready goggles and gloves for advanced gaming experiences, sensors and microchips embedded into clothing, $100 netbooks more powerful than $10,000 workstations of today, surface computers, 3-D printers, intelligent LED lightbulbs with motion-detecting sensors, cars with features that even luxury models of today don't have, and at least 15 nodes on a home network that manages the entertainment, security, and energy infrastructure of the home simultaneously. 

At the industrial level, the changes are even greater.  Just as telephony, photography, video, and audio before them, we will see medicine, energy, and manufacturing industries become information technology industries, and thus set to advance at the rate of the Impact of Computing.  The economic impact of this is staggering.  Refer to the Future Timeline for Economics, particularly the 2014, 2024, and 2034 entries.  Deflation has traditionally been a bad thing, but the Impact of Computing has introduced a second form of deflation.  A good one. 

It is true that from 2001 to 2009, the US economy has actually shrunk in size, if measured in oil, gold, or Euros.  To that, I counter that every major economy in the world, including the US, has grown tremendously if measured in Gigabytes of RAM, TeraBytes of storage, or MIPS of processing power, all of which have fallen in price by about 40X during this period.  One merely has to select any suitable product, such as a 42-inch plasma TV in the chart, to see how quickly purchasing power has risen.  What took 500 hours of median wages to purchase in 2002 now takes just 40 hours of median wages in 2009.  Pessimists counter that computing is too small a part of the economy for this to be a significant prosperity elevator.  But let's see how much of the global economy is devoted to computing relative to oil (let alone gold).
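Backing the implied annual rate out of the plasma-TV data points (500 and 40 wage-hours are the figures above; the variable names are mine):

```python
# Same class of product: 500 wage-hours in 2002, 40 wage-hours in 2009.
hours_2002, hours_2009, years = 500, 40, 7
annual_gain = (hours_2002 / hours_2009) ** (1 / years)
# ~1.43x per year: purchasing power in this category grew roughly 43% annually
```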

Oil at $50/barrel amounts to about $1500 Billion per year out of global GDP.  When oil rises, demand falls, and we have not seen oil demand sustain itself to the extent of elevating annual consumption to more than $2000 Billion per year.

Semiconductors are a $250 Billion industry and storage is a $200 Billion industry.  Software, photonics, and biotechnology are deflationary in the same way as semiconductors and storage, and these three industries combined are another $500 Billion in revenue, but their rate of deflation is less clear, so let's take just half of this number ($250 Billion) as suitable for this calculation.

So $250B + $200B + $250B = $700 Billion that is already deflationary under the Impact of Computing.  This is about 1.5% of world GDP, and is a little under half the size of global oil revenues. 

The impact is certainly not small, and since the growth rate of these sectors is higher than that of the broader economy, what about when it becomes 3% of world GDP?  5%?  Will this force of good deflation not exert influence on every set of economic data?  At the moment, it is all but impossible to get major economics bloggers to even acknowledge this growing force.  But over time, it will be accepted as a limitless well of rising prosperity. 

12% more dollars spent each year, and each dollar buys 59% more power each year.  Combine the two and the impact is 78% more every year. 

Related :

A Future Timeline for Economics

Economic Growth is Exponential and Accelerating

Are You Acceleration Aware?

Pre-Singularity Abundance Milestones

The Technological Progression of Video Games

 

April 20, 2009 in Accelerating Change, Computing, Core Articles, Technology, The Singularity | Permalink | Comments (41) | TrackBack (0)

Tags: computing, future, Moore's Law


Nanotechnology : Bubble, Bust, ....Boom?

All of us remember the dot-com bubble, the crippling bust that eventually was a correction of 80% from the peak, and the subsequent moderated recovery.  This was easy to notice as there were many publicly traded companies that could be tracked daily.

I believe that nanotechnology underwent a similar bubble, peaking in early 2005, and has been in a bust for the subsequent four years.  Allow me to elaborate.

By 2004, major publications were talking about nanotech as if it was about to surge.  Lux Capital was publishing a much-anticipated annual 'Nanotech Report'.  There was even a company by the name of Nanosys that was preparing for an IPO in 2004.  BusinessWeek devoted an entire issue to all things nanotech in February 2005.  We were supposed to get excited. 

But immediately after the BusinessWeek cover, everything seemed to go downhill.  Nanosys did not conduct an IPO, nor did any other nanotech company.  Lux Capital published only a much shorter report in 2006, and stopped altogether in 2007 and 2008.  No other major publication devoted an entire issue to the topic of nanotechnology.  Venture capital flowing to nanotech ventures dried up.  Most importantly, people stopped talking about nanotechnology altogether.  Not many people noticed this because they were too giddy about their home prices rising, but to me, this shriveling of nano-activity had uncanny parallels to prior technology slumps. 

The rock bottom was reached at the very end of 2008.  Regular readers will recall that on January 3, 2009, I noticed that MIT Technology Review conspicuously omitted a section titled 'The Year in Nanotech' among their year-end roundup of innovations for the outgoing year.  I could not help but wonder why they stopped producing a nanotech roundup altogether, and I subsequently concluded that we were in a multi-year nanotech winter, and that the MIT Technology Review omission marked the lowest point.

But there are signs that nanotech is on the brink of emerging from its chrysalis.  The university laboratories are humming again, promising to draw the genie out of its magic lamp.  In just the first 12 weeks of 2009, carbon nanotubes, after staying out of the news for years, have suddenly been making headlines.  Entire 'forests' of nanotubes are now being grown (image from MIT Tech Review) and can be used for a variety of previously unrelated applications.  Beyond this, there is suddenly activity in nanotube electronics, light-sensitive nanotubes, nanotube superbatteries, and even nanotube muscles that are as light as air, as flexible as rubber, yet stronger than steel.  And all this is just nanotubes.  Nanomedicine, nanoparticle glue, and nanosensors are also joining the party.  All this bodes well for the prospect of catching up to where we should currently be on the trendline of molecular engineering, enabling us to build what was previously impossible. 

The recovery out of the four-year nanotech winter could not be happening at a better time.  Nanotech is thus set to be one of the four sectors of technology (the others being solar energy, surface computing, and wireless data) that pull the global economy into its next expansion starting in late 2009. 

Related :

Milli, Micro, Nano, Pico

March 23, 2009 in Accelerating Change, Nanotechnology, Technology, The Singularity | Permalink | Comments (21) | TrackBack (0)

Tags: nanotech

Tweet This! |

A Future Timeline for Economics

The accelerating rate of change in many fields of technology manifests itself in terms of human development, some of which can be accurately tracked within economic data.  Contrary to what the media may peddle, and despite periodic setbacks, average human prosperity is rising faster than at any other time in human history.  I have described this in great detail in prior articles, and I continue to be amazed at how little attention is devoted to the important subject of accelerating economic growth, even by other futurists.

The time has thus come for making specific predictions about the details of future economic advancement.  I hereby present a speculative future timeline of economic events and milestones, which is a sibling article to Economic Growth is Exponential and Accelerating, v2.0. 

2008-09 : A severe US recession and global slowdown still results in global PPP economic growth staying positive in calendar 2008 and 2009.  Negative growth for world GDP, which has not happened since 1973, is not a serious possibility, even though the US and Europe experience GDP contraction in this period.  The world GDP growth rate trendline resides at growth of 4.5% a year.

2010 : World GDP growth rebounds strongly to 5% a year.  More than 3 billion people now live in emerging economies growing at over 6% a year.  More than 80 countries, including China, have achieved a Human Development Index of 0.800 or higher, classifying them as developed countries. 

2011 : Economic mismanagement in the US leads to a tax increase at the start of 2011, combined with higher interest rates on account of the budget deficit.  This leads to a near-recession or even a full recession in the US, despite the recovery out of the 2008-09 recession still being young. 

2012 : Over 2 billion people have access to unlimited broadband Internet service at speeds greater than 1 mbps, a majority of them receiving it through their wireless phone/handheld device. 

2013 : Many single-family homes in the US, particularly in California, are still priced below the levels they reached at the peak in 2006, as predicted in early 2006 on The Futurist.  If one adjusts for cost of capital over this period, many California homes have corrected their valuations by as much as 50%. 

2014 : The positive deflationary economic forces introduced by the Impact of Computing are now large and pervasive enough to generate mainstream attention.  The semiconductor and storage industries combined exceed $800 Billion in size, up from $450 Billion in 2008.  The typical US household is now spending $2500 a year on semiconductors, storage, and other items with rapidly deflating prices per fixed performance.  Of course, the items purchased for $2500 in 2014 can be purchased for $1600 in 2015, $1000 in 2016, $600 in 2017, etc. 
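To make the deflation arithmetic concrete, here is a minimal sketch.  The constant price ratio of roughly 0.64 per year is my simplifying assumption; it approximates the rounded figures above (and the analogous $5000 and $10,000 sequences in the 2024 and 2034 entries below).

```python
def deflated_prices(start_price, years, annual_ratio=0.64):
    """Price of a fixed bundle of computing performance over successive
    years, under a constant annual deflation ratio (an assumed ~36%/yr
    price decline that approximates the article's rounded figures)."""
    return [round(start_price * annual_ratio ** n) for n in range(years)]

# The $2500 bundle of 2014, repriced through 2017:
print(deflated_prices(2500, 4))  # [2500, 1600, 1024, 655]
```

The same ratio, compounded over a decade, also roughly reproduces the doubling of household spend on such items from $2500 to $5000 to $10,000 per decade, since performance per dollar rises even faster than spending.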

2015 : As predicted in early 2006 on The Futurist, a 4-door sedan with a 240 hp engine, yet costing only 5 cents/mile to operate (the equivalent of 60 mpg of gasoline), is widely available for $35,000 (which is within the middle-class price band by 2015). This is the result of combined advances in energy, lighter nanomaterials, and computerized systems.

2016 : Medical Tourism introduces $100B/year of net deflationary benefit to healthcare costs in the US economy.  Healthcare inflation is slowed, except for the most advanced technologies for life extension. 

2017 : China's per-capita GDP on a PPP basis converges with the world average, resulting in a rise in the Yuan exchange rate.  This is neither good nor bad, but very confusing for trade calculations.  A recession ensues while all the adjustments are sorted out. 

2018 : Among new cars sold, gasoline-only vehicles are now a minority.  Millions of vehicles are electrically charged through solar panels on a daily basis, relieving those consumers of a fuel expenditure that was as high as $3000 a year in 2008.  Some electrical vehicles cost as little as 1 cent/mile to operate. 

2019 : The Dow Jones Industrial Average surpasses 25,000.  The Nasdaq exceeds 5000, finally surpassing the record set 19 years prior in early 2000. 

2020 : World GDP per capita surpasses $15,000 in 2008 dollars (up from $8000 in 2008).  Over 100 of the world's nations have achieved a Human Development Index of 0.800 or higher, with the only major concentrations of poverty being in Africa and South Asia.  The basic necessities of food, clothing, literacy, electricity, and shelter are available to over 90% of the human race. 

Trade between India and the US touches $400 Billion a year, up from only $32 Billion in 2006. 

2022 : Several million people worldwide are each earning over $50,000 a year through web-based activities.  These activities include blogging, barter trading, video production, web-based retail ventures, and economic activities within virtual worlds.  Some of these people are under the age of 16.  Headlines will be made when a child known to be perpetually glued to his video game one day surprises his parents by disclosing that he has accumulated a legitimate fortune of more than $1 million. 

2024 : The typical US household is now spending over $5000 a year on products and services that are affected by the Impact of Computing, where value received per dollar spent rises dramatically each year.  These include electronic, biotechnology, software, and nanotechnology products.  Even cars are sometimes 'upgraded' in a PC-like manner in order to receive better technology, long before they experience mechanical failure.  Of course, the products and services purchased for this $5000 in 2024 can be obtained for $3200 in 2025, $2000 in 2026, $1300 in 2027, etc. 

2025 : The printing of solid objects through 3-D printers is inexpensive enough for such printers to be common in upper-middle-class homes.  This disrupts the economics of manufacturing, and revamps most manufacturing business models. 

2027 : 90% of humans are now living in nations with a UN Human Development Index greater than 0.800 (the 2008 definition of a 'developed country', approximately that of the US in 1960).  Many Asian nations have achieved per capita income parity with Europe.  Only Africa contains a major concentration of poverty. 

2030 : The United States still has the largest nominal GDP among the world's nations, in excess of $50 Trillion in 2030 dollars.  China's economy is a close second to the US in size.  No other country surpasses even half the size of either of the two twin giants. 

The world GDP growth rate trendline has now surpassed 5% a year.  As the per-capita gap has narrowed since 2000, the US now grows at 4% a year, while China grows at 6% a year. 

10,000 billionaires now exist worldwide, causing the term to lose some exclusivity. 

2032 : At least 2 TeraWatts of photovoltaic capacity is in operation worldwide, generating 8% of all energy consumed by society.  Vast solar farms covering several square miles are in operation in North Africa, the Middle East, India, and Australia.  These farms are visible from space. 

2034 : The typical US household is now spending over $10,000 a year on products and services that are affected by the Impact of Computing.  These include electronic, biotech, software, and nanotechnology products.  Of course, the products and services purchased for this $10,000 in 2034 can be obtained for $6400 in 2035, $4000 in 2036, $2500 in 2037, etc. 

2040 : Rapidly accelerating GDP growth is creating astonishing abundance that was unimaginable at the start of the 21st century.  Inequality continues to be high, but this is balanced by the fact that many individual fortunes are created in extremely short times.  The basic tools to produce wealth are available to at least 80% of all humans. 

Greatly increased lifespans are distorting economics, mostly for the better, as active careers last well past the age of 80. 

Tourism into space is affordable for upper middle class people, and is widely undertaken. 

________________________________________________________

I believe that this timeline represents a median forecast for economic growth from many major sources, and will be perceived as too optimistic or too pessimistic by an equal number of readers.  Let's see how closely reality tracks this timeline.

September 28, 2008 in Accelerating Change, China, Computing, Core Articles, Economics, Energy, India, The Singularity | Permalink | Comments (56)

Tags: Accelerating, China, Economics, Economy, Event Horizon, Future, GDP, Moore's Law, Singularity

Tweet This! |

Pre-Singularity Abundance Milestones

I am of the belief that we will experience a Technological Singularity around 2050 or shortly thereafter.  Many top futurists arrive at prediction dates between 2045 and 2075.  The bulk of Singularity debate revolves not so much around 'if' or even 'when', but rather 'what' the Singularity will appear like, and whether it will be positive or negative for humanity.

To be clear, some singularities have already happened.  To non-human creatures, a technological singularity that overhauled their ecosystem already occurred over the course of the 20th century.  Domestic dogs and cats are immersed in a singularity where most of their surroundings surpass their comprehension.  Even many humans have experienced a singularity - elderly people in poorer nations make no use of any of the major technologies of the last 20 years, except possibly the cellular phone.  However, the Singularity that I am talking about has to be one that affects all humans and the entire global economy, rather than just humans who are marginal participants in the economy.  By definition, the real Technological Singularity has to be a 'disruption in the fabric of humanity'. 

In the period between 2008 and 2050, there are several milestones one can watch for in order to see if the path to a possible Singularity is still being followed.  Each of these signifies a previously scarce resource becoming almost infinitely abundant (much like paper today, which was a rare and precious treasure centuries ago), or a dramatic expansion in human experience (as the telephone, airplane, and Internet have been), to the extent that it can even be called a transhuman experience.  The following is a selection of milestones with their anticipated dates. 

Technological :

Hours spent in videoconferencing surpass hours spent in air travel/airports : 2015

Video games with interactive, human-level AI : 2018

Semi-realistic fully immersive virtual reality : 2020

Over 5 billion people connected to the Internet (mostly wirelessly) at speeds greater than 10 Mbps : 2022

Over 30 network-connected devices in the average household worldwide : 2025

1 TeraFLOPS of computing power costs $1 : 2026

1 TeraWatt of worldwide photovoltaic power capacity : 2027

1 Petabyte of storage costs $1 : 2028

1 Terabyte of RAM costs $1 : 2031

An artificial intelligence can pass the Turing Test : 2040

Biological :

Complete personal genome sequencing costs $1000 : 2020

Cancer is no longer one of the top 5 causes of death : 2025

Complete personal genome sequencing costs $10 : 2030

Human life expectancy achieves Actuarial Escape Velocity for wealthy individuals : 50% chance by 2040

Economic :

Average US household net worth crosses $2 million in nominal dollars : 2024

90% of humans living in nations with a UN Human Development Index greater than 0.800 (the 2008 definition of a 'developed country', approximately that of the US in 1960) : 2025

10,000 billionaires worldwide (nominal dollars) : 2030

World GDP per Capita crosses $50,000 in 2008 dollars : 2045

_________________________________________________________________

Each of these milestones, while not sufficient to cause a Singularity by itself, increases the probability of a true Technological Singularity, pulling the event horizon closer to that date.  Or, the path taken to each of these milestones may give rise to new questions and metrics altogether.  We must watch for each of these events, and update our predictions for the 'when' and 'what' of the Singularity accordingly. 

Related : The Top 10 Transhumanist Technologies

September 11, 2008 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (24) | TrackBack (0)

Tags: Acceleration, Future, Futurist, Moore's Law, Prosperity, Singularity

Tweet This! |

Can Buildings be 'Printed'?

I have discussed the possibility of 3-D printing of solid objects before, in this article where company #5, Desktop Factory, is detailed.  However, the Desktop Factory product can only produce objects that have a maximum size of 5 X 5 X 5 inches, and it can only use one type of material. 

On the Next Big Future blog, the author quite frequently profiles a future product capable of 'printing' entire buildings.  This technology, known as 'Contour Crafting', can supposedly construct buildings at greater than 10 times the speed, yet at just one-fifth the cost, of traditional construction processes.  It is claimed that the first commercial machines will be available as early as 2008. 

Despite my general optimism, this particular machine does not pass my 'too good to be true' test, at least before 2020.  A machine that could construct homes and commercial buildings at such a speed and cost would cause an unprecedented economic disruption across the world.  There would be a steep but brief depression, as existing real estate loses 90% or more of its value, followed by a huge boom as home ownership becomes affordable to several times as many people as today.  I don't think that we are on the brink of such a revolution.

For me to be convinced, I would have to see :

1) Articles on this device in mainstream publications like The Economist, BusinessWeek, MIT Technology Review, or Popular Mechanics.

2) The ability to at least print simple constructs like concrete perimeter walls or sidewalks at a rate and cost several times superior to current methods.  Only then can more complex structures be on the horizon. 

I will revisit this technology if either of these two conditions is solidly met. 

(crossposted on TechSector). 

September 02, 2008 in Accelerating Change, Economics, Technology, The Singularity | Permalink | Comments (21) | TrackBack (0)

Tags: Construction, Contour Crafting, Printing, Real Estate

Tweet This! |

Surfaces : The Next Killer App in Computing

Computing, once synonymous with technological progress, has not grabbed headlines in recent memory.  We have not had a 'killer app' in computing in the last few years.  Perhaps Wi-Fi access for laptops in 2002-03 counts as the most recent one, but if that is not a sufficiently important innovation, we have to go all the way back to the graphical World Wide Web browser in 1995.  Before that, the killer app was Microsoft Office for Windows in 1990.  Clearly, such shifts appear to occur at intervals of 5-8 years. 

I can, without hesitation, nominate surface computing as the next great generational augmentation in the computing experience.  This is because surface computing entirely transforms human-computer interaction in a manner that is more suitable for the human body than the mouse/keyboard model is.  In accordance with the Impact of Computing, rapid drops in the costs of both high-definition displays and tactile sensors are set to bring this experience to consumers by the end of this decade.

[Image: Microsoft Surface]

BusinessWeek has a slideshow featuring several different products for surface computing. Over ten major electronics companies have surface computing products available. The most visible is the Microsoft Surface, which sells for about $10,000, but will probably drop to $3000 or less within 3-4 years, enabling household adoption.

As far as early applications of surface computing, a fertile imagination can yield many prospects. For example, a restaurant table may feature a surface that displays the menu, enabling patrons to order simply by touching the picture of the item they choose.  The information is sent to the kitchen, and this saves time and reduces the number of waiters needed by the restaurant (as waiters would only be needed to deliver the completed orders).  Applications for classroom and video game settings also readily present themselves. 

Watch for demonstrations of various surface computers at your local electronics store, and keep an eye on the price drops.  After seeing a demonstration, do share at what price point you might purchase one.  The next generation of computing beckons. 

Related :

The Impact of Computing

(Crossposted on TechSector)

July 11, 2008 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (7) | TrackBack (0)

Tweet This! |

The Solar Revolution is Near

I have long been optimistic about Solar Energy (whether photovoltaic or thermal) becoming our largest energy source within a few decades.  Earlier articles on the subject include :

A Future Timeline for Energy

Solar Energy Cost Curve

Several recent events and developments have led me to reinforce this view.  First of all, consider this article from Scientific American, detailing a Solar timeline to 2050. The article is not even Singularity-aware, yet details many steps that will enable Solar energy to expand by orders of magnitude above the level that it is today.  Secondly, two of the most uniquely brilliant people alive today, Ray Kurzweil and Elon Musk (who I recently chatted with), have both provided compelling cases on why Solar will be our largest energy source by 2030.  Both Kurzweil and Musk reside in significantly different spheres, yet have arrived at the same prediction.

However, the third point is the one that I find to be the most compelling. There are a number of publicly traded companies selling solar energy products, many of which had IPOs in just the last three years.  Some of these companies, and their market capitalizations, are :

[Table: market capitalizations of publicly traded solar companies]

Now consider that the companies on this list alone amount to about $50 Billion in capitalization.  There are, additionally, many smaller companies not included on this list, many companies like Applied Materials (AMAT) and Cypress Semiconductor (CY) for which solar products comprise only a portion of their business, and large private companies like NanoSolar (which I have heavily profiled here) and SolFocus that may have valuations in the billions.  Thus, the market cap of the 'solar sector' is already between $60B and $100B, depending on what you include within the total.  This immense valuation has accumulated at a pace that has taken many casual observers by surprise.  A 2-year chart of some of the stocks listed above tells the story. 

[Chart: 2-year stock performance of solar companies vs. the S&P 500]

While FirstSolar (FSLR) has been the brightest star, all the others have trounced the S&P500 to a degree that would put even Google or Apple to shame over this period.  Clearly, a dramatic ramp in Solar energy is about to make mainstream headlines very soon, even if the present valuations are too high. 

Is this a dot-com-like bubble?  Yes, in the near term, it is.  However, after a sharp correction, long-term growth will resume for the companies that emerge as leaders.  I won't recommend a specific stock among this cluster just yet, as there is a wave of private companies with new technologies that could render any of these incumbents obsolete.  Specific company profiles will follow soon, but in the meantime, for more detail on the long-term trends in favor of Solar, refer to these additional articles of mine :

Why I Want Oil to Hit $120 per Barrel

Terrorism, Oil, Globalization, and the Impact of Computing

(crossposted on TechSector)

May 20, 2008 in Energy, Nanotechnology, Stock Market, Technology, The Singularity | Permalink | Comments (13) | TrackBack (0)

Tweet This! |

Ten Biotechnology Breakthroughs Soon to be Available

Popular Mechanics has assembled one of those captivating lists of new technologies that will improve our lives, this time on healthcare technologies (via Instapundit).  Just a few years ago, these would have appeared to be works of science fiction.  Go to the article to read about the ten technologies shown below. 

[Image: ten biotechnology breakthroughs from Popular Mechanics]

Most of these will be available to average consumers within the next 7-10 years, and will extend lifespans while dramatically lowering healthcare costs (mostly through enhanced capabilities of early detection and prevention, as well as shorter recovery times for patients).  This is consistent with my expectation that bionanotechnology is quietly moving along established trendlines despite escaping the notice of most people.  These technologies will also move us closer to Actuarial Escape Velocity, where the rate of lifespan increase exceeds that of real time. 

Another area that these technologies affect is the globalization of healthcare.  We have previously noted the success of 'medical tourism' among US and European patients seeking massive discounts on expensive procedures.  These technologies, given their potential to lower costs and recovery times, are even more suitable for medical offshoring than their predecessors, and thus could further enhance the competitive position of the countries that are quicker to adopt them.  If the US is at the forefront of using the 'bloodstream bot' to unclog arteries, the US once again becomes more attractive than getting a traditional procedure done in India or Thailand.  But if the lower-cost destinations also adopt these technologies faster than the heavily regulated US, then even more revenue migrates overseas, the US healthcare sector suffers further deserved blows, and it comes under even greater pressure to conform to market forces.  As technology once again acts as the great leveler, another spark of hope for reforming the dysfunctional US healthcare sector has emerged. 

These technologies are near enough to availability that you may even consider showing this article to your doctor, or writing a letter to your HMO.  Plant the seed into their minds...

Related :

Actuarial Escape Velocity

How Far Can 'Medical Tourism' Go?

Milli, Micro, Nano, Pico

May 09, 2008 in Accelerating Change, Biotechnology, Computing, Nanotechnology, Technology, The Singularity | Permalink | Comments (11) | TrackBack (0)

Tweet This! |

Actuarial Escape Velocity

Every now and then, an obscure concept is so brilliantly encapsulated in a compact yet sublime term that it leaves the audience inspired enough to evangelize it. 

I have felt that way ever since I heard the words 'Actuarial Escape Velocity'.

For some background, please refer to an older article from early 2006, 'Are You Prepared to Live to 100?'.  Notice the historical uptrend in human life expectancy, and the accelerating rate of increases.  For more, do also read the article 'Are You Acceleration Aware?'.

In analyzing the rate at which life expectancy is increasing in the wealthiest nations, we see that US life expectancy is now increasing by 0.2 years, every year.  Notably, the death rates from heart disease and cancer have been dropping by a rapid 2-4% each year, and these two leading causes of death are quickly falling off, despite rising obesity and a worsening American diet over the same period.  Just a few decades ago, the rate of increase in life expectancy was slower than 0.2 years per year.  In the 19th century, even the wealthiest societies were adding well under 0.1 years per year.  But how quickly can the rate of increase continue to rise, and does it eventually saturate as each unit of gain becomes increasingly harder to achieve?

Two of the leading thinkers in the field of life extension, Ray Kurzweil and Aubrey de Grey, believe that by the 2020s, human life expectancy will increase by more than one year every year (in 2002 Kurzweil predicted that this would happen as soon as 2013, but this is just another example of him consistently overestimating the rate of change).  This means that death will approach the average person at a slower rate than the rate of technology-driven lifespan increases.  It does not mean all death suddenly stops, but it does mean that those who are not close to death have a possibility of indefinite lifespan after AEV is reached.  David Gobel, founder of the Methuselah Foundation, has termed this Actuarial Escape Velocity (AEV), essentially comparing the rate of lifespan extension to the speed at which a spacecraft surpasses the gravitational pull of the planet it launches from, breaking free of the gravitational force.  Thus, life expectancy is currently, as of 2007 data, rising at 20% of Actuarial Escape Velocity.
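A toy model makes the 'escape velocity' framing concrete.  Here the extension rate itself is assumed to compound at a hypothetical 3% per year; that compounding figure is my illustrative assumption, not a number from Gobel, Kurzweil, or de Grey.

```python
def years_to_aev(rate=0.2, growth=0.03):
    """Years until the lifespan-extension rate (years of life expectancy
    gained per calendar year) reaches 1.0, i.e. full Actuarial Escape
    Velocity.  Starts from the ~0.2 yr/yr seen in 2007 data; the 3%/yr
    compounding of the rate is a hypothetical assumption."""
    years = 0
    while rate < 1.0:
        rate *= 1 + growth
        years += 1
    return years
```

Under this assumed compounding, AEV arrives in roughly 55 years, i.e. past 2060; doubling the compounding rate to 6% roughly halves that.  The point of the sketch is how sensitive the Kurzweil/de Grey timelines are to the assumed acceleration of the extension rate, which is why I prefer to wait for the 50% milestone before updating.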

I remain unconvinced that such improvements will be reached as soon as Ray Kurzweil and Aubrey de Grey predict.  I will be convinced after we clearly achieve 50% of AEV in developed countries, where six months are added to life expectancy every year.  It is possible that the interval between 50% and 100% of AEV comprises less than a decade, but I'll re-evaluate my assumptions when 50% is achieved. 

Serious research efforts are underway.  The Methuselah Mouse Prize will award a large grant to researchers that can demonstrate substantial increases in the lifespan of a mouse (more from The Economist).  Once credible gains can be demonstrated, funding for the research will increase by orders of magnitude. 

The enormous market demand for lifespan extension technologies is not in dispute.  There are currently 95,000 individuals in the world with a net worth greater than $30 million, including 1125 billionaires.  Accelerating Economic Growth is already growing the ranks of the ultrawealthy at a scorching pace.  If only some percentage of these individuals are willing to pay a large portion of their wealth in order to receive a decade or two more of healthy life, particularly since money can be earned back in the new lease on life, then such treatment already has a market opportunity in the hundreds of billions of dollars.  The reduction in the economic costs of disease, funerals, etc. is an added bonus.  Market demand, however, cannot always supersede the will of nature. 

This is only the second article on life extension that I have written on The Futurist, out of 154 total articles written to date.  While I certainly think aging will be slowed down to the extent that many of us will surpass the century mark, it will take much more for me to join the ranks of those who believe aging can be truly reversed.  To track progress in this field, keep one eye on the rate of decline in cancer and heart disease deaths, and another eye on the Methuselah Mouse Prize.  That such metrics are even advancing on a yearly basis is already remarkable, but monitoring anything more than these two measures, at this time, would be premature. 

So let's find out what the group prediction is, with a poll.  Keep in mind that most people are biased towards believing this date will fall within their own lifetimes (poll closed 7/1/2012) :

[Poll: In what year will Actuarial Escape Velocity be achieved?]

March 25, 2008 in Accelerating Change, Biotechnology, Economics, The Singularity | Permalink | Comments (16) | TrackBack (0)

Tweet This! |

Is Technology Diffusion in a Lull?

There is minor but growing evidence that the rate of technological change has moderated in this decade.  Whether this is a temporary trough that merely precedes a return to the trendline, or whether the trendline itself was greatly overestimated, will not be decisively known for some years.  In this article, I will attempt to examine some datapoints to determine whether we are at, or behind, where we would expect to be in 2008. 

There is overwhelming evidence that many seemingly unrelated technologies are progressing at an accelerating rate.  However, the exact magnitude of the accelerating gradient - the second derivative - is difficult to measure with precision.  Furthermore, there are periods where advancement can be significantly above or below any such trendline. 

This brings us to the chart below from Ray Kurzweil (from Wikipedia) :

[Chart: years from invention to 25% US household penetration, by technology]

This chart appears prominently in many of Kurzweil's writings, and brilliantly conveys the concept of how each major consumer technology reached the mainstream (as defined by a 25% US household penetration rate) in successively shorter times.  The horizontal axis represents the year in which the technology was invented. 

This chart was produced some years ago, and therein lies the problem.  If we were to update the chart to the present day, which technology would be the next addition after 'The Web'? 

Many technologies can claim the next position on the chart.  iPods and other portable MP3 players, various Web 2.0 applications like social networking, and flat-panel TVs all reached the 25% level of mainstream adoption in under 6 years, in accordance with an extrapolation of the chart through 2008.  However, it is debatable whether any of these are 'revolutionary' technologies like the ones on the chart, rather than mere increments above incumbent predecessors.  The iPod merely improved upon the capacity and flexibility of the Walkman, the plasma TV merely consumed less space than the tube TV, etc.  The technologies on the chart are all infrastructures of some sort, and it is clear that after 'The Web', we are challenged to find a suitable candidate for the next entry. 

Thus, either some overdue technology is on the brink of reaching 25% penetration of US households in 6 years or less, or the rapid diffusion of the Internet truly was a historical anomaly, and from 2001 to 2008 we were merely correcting back to a trendline of much slower diffusion (where it takes 10-15 years for a technology to reach 25% penetration in the US).  One of the two has to be true, at least for an affluent society like the US.

This brings us to the third and final dimension of possibility.  This being the decade of globalization, with globalization itself being an expected natural progression of technological change, perhaps a US-centric chart itself was inappropriate to begin with.  Landline telephones and television sets still do not have 25% penetration in countries like India, but mobile phones jumped from zero to 10% penetration in under 7 years.  The oft-cited 'leapfrogging' of technologies that developing nations can benefit from is a crucial piece of technological diffusion, which would thus show a much smaller interval between 'telephones' and 'mobile phones' than in the US-based chart above.  Perhaps '10% Worldwide Household Penetration' is a more suitable measure than '25% US Household Penetration', which would then possibly show that there is no lull in worldwide technological adoption at all. 

I may try to put together this new worldwide chart.  The horizontal axis would not change, but the placement of datapoints along the vertical axis would.  Perhaps Kurzweil merely has to break out of US-centricity in order to strengthen his case and rebut most of his critics. 

The future will disclose the results to us soon enough.

(crossposted on TechSector)

Related :

Are You Acceleration Aware?

The Impact of Computing

These are the Best of Times

February 19, 2008 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (28) | TrackBack (0)


Nine Tantalizing Small Companies

In scouring the startup universe for the companies and technologies that can reshape human society and create entirely new industries, one has to play the role of a prospective Venture Capitalist, yet not be constrained by the need for a financial exit 3-6 years hence. 

Therefore, I have assembled a list of nine small companies, each with technologies that have the potential to create trillion-dollar economic disruptions by 2020, disruptions that most people have scarcely begun to imagine today.  Note that the emphasis is on the technologies rather than the companies themselves, as a startup requires much more than a revolutionary technology in order to prosper.  Management skills, team synergy, and execution efficiency are all equally important.  I predict that out of this list of nine companies, perhaps one or two will become titans, while the others will be acquired by larger companies for modest sums, enabling the technology to reach the market through the acquiring company. 

1) Nanosolar : Nanosolar produces low-cost solar cells that are manufactured by a process analogous to 'printing'.  The company's technology was selected by Popular Mechanics as the 'Innovation of the Year' for 2007, and Nanosolar's solar cells are significantly ahead of the Solar Energy Cost Curve.  The flexible, thin nature of Nanosolar's cells may enable them to be quickly incorporated onto the surfaces of many types of commercial buildings.  Nanosolar's first shipments have already occurred, and if we see several large deployments in the near future, this might just be the company that finally makes solar energy a mass-adopted consumer technology.  Nanosolar itself calls this the 'third wave' of solar power technology. 

2) Tesla Motors : I wrote about Tesla Motors in late 2006.  Tesla produces fully electric cars that can consume as little as 1 cent of electricity per mile.  They are about to deliver the first few hundred units of the $98,000 Tesla Roadster to customers, and while the Roadster is not a car that can be marketed to average consumers, Tesla intends to release a 4-door $50,000 sedan named 'WhiteStar' in 2010, and a $30,000 sedan by 2013.  The press coverage devoted to Tesla Motors has been impressive, but until the WhiteStar sedan successfully sells at least 10,000 units, Tesla will not have silenced critics who say the technology cannot be brought down to mass-market costs. 

3) Aptera Motors : When I first wrote about Tesla Motors, it was before I had heard of Aptera Motors.  While Tesla is aiming to produce a $30,000 sedan for 2013, Aptera already has an all-electric car due in late 2008 that is priced at just $27,000, while delivering the equivalent of between 200 and 330 mpg.  The fact that the vehicle has just three wheels may reduce mainstream appeal to some degree, but the futuristic appearance of the car will attract others.  Aptera Motors is a top candidate to win the Automotive X-Prize in 2010. 

The simultaneous use of Nanosolar's solar panels with the all-electric cars from Tesla and Aptera may enable automotive driving to be powered by solar-generated electricity for the average single-family household.  The combination of these two technologies would be the 'killer app' of getting off of oil and onto fully renewable energy for cars. 

Related : Why I Want Oil to Hit $120/Barrel.

4) 23andMe : This company gets some press due to the fact that co-founder Anne Wojcicki is married to Sergey Brin, even as Google has poured $3.9M into 23andMe.  Aside from this, what 23andMe offers is an individual's personal genome for just $1000.  What a personal genome provides is a profile of which health conditions the customer is more or less susceptible to, and thus enables the customer to provide this information to his physician, and make the preventive lifestyle adjustments well in advance.  Proactive consumers will be able to extend their lifespans by systematically reducing their risks of ailments they are genetically predisposed to.  As the service is a function of computational power, the price of a personal genome will, of course, drop, and might become an integral part of the average person's medical records, as well as an expense that insurance covers. 

5) Desktop Factory : In 2008, Desktop Factory will begin to sell a $5000 device that functions as a 3-D printer, printing solid objects one layer at a time.  A user can scan almost any object (including a hand, foot, or head) and reproduce a miniature model of it (up to 5 X 5 X 5 inches).  The material used by the 3-D printer costs about $1 per cubic inch. 

The $5000 printer is a successor to similar $100,000 devices used in mechanical engineering and manufacturing firms.  Due to the Impact of Computing, consumer-targeted devices costing under $1000 will be available no later than 2014.  I envision an ecosystem where people invent their own objects (statuettes, toys, tools, etc.) and share the scanned templates of these objects on social networking sites like MySpace and Facebook.  People can thus 'share' actual objects over the Internet, through printing a downloaded template.  The cost of the printing material will drop over time as well.  A lot of fun is to be had, and expect an impressive array of brilliant ideas to come from people below the age of 16. 
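From the specs above, the material cost of a maximum-size print is easy to bound; a quick sketch:

```python
# Maximum printable volume is 5 x 5 x 5 inches, with material at about $1
# per cubic inch (the figures given above).
max_volume_in3 = 5 * 5 * 5   # cubic inches
cost_per_in3 = 1.00          # dollars per cubic inch

max_material_cost = max_volume_in3 * cost_per_in3
print(f"Material cost for a full-volume print: ${max_material_cost:.0f}")
```

So even the largest possible object would consume about $125 of material, and most printed objects far less.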

6) Zazzle : Welcome to the age of the instapreneur.  Zazzle enables anyone to design their own consumer commodities like T-shirts, mugs, calendars, bumper stickers, etc. on demand.  If you have an idea, you can produce it on Zazzle with no start-up costs, and no inventory risks.  You profit even from the very first unit you sell, with no worries about breakeven thresholds.  You can produce an infinite number of products, limited only by your imagination.  At this point, those of you reading this are probably in the midst of an avalanche of ideas of products you would like to produce. 

While the bulk of Zazzle users today are merely vanity users who sell fewer than ten units of their creations, this new paradigm of low-cost customization will inevitably creep up into major industrial supply chains.  Even more interesting: think of #5 on this list, Desktop Factory, combining with Zazzle's application into an amazing transformation of the very economics of manufacturing and mass-production. 

7) A123 Systems : Read here about how battery technology is finally set to advance after decades of stagnation.  A123 Systems is at the forefront of these advances, and has already received over $148 Million in private funding, as well as an article in the prestigious MIT Technology Review.  A123 is a supplier for GM's upcoming Volt, and has already begun to sell a module that converts a Toyota Prius into a plug-in hybrid.  For choices beyond those offered by the #2 and #3 companies on this list, A123 Systems is poised to enable the creation of many new electric or plug-in hybrid vehicles, greatly increasing the choices available to consumers seeking the equivalent of more than 50 mpg.  A123 may just become the Intel of batteries.  Combine A123's batteries with Nanosolar's cells, and the possibilities become even more interesting. 

8) Luxim : Brightness of light is measured in Lumens, not Watts, which is a measure of power consumption.  Consumers are learning that CFL and LED bulbs offer the same Lumens with just a fifth or a tenth of the Watts consumed by a traditional incandescent bulb, and billions of tons of coal are already being saved by the adoption of CFLs and LEDs.  Luxim, however, aims to take this even further.  Luxim makes tiny bulbs that deliver 8 times as many Lumens per Watt as incandescent bulbs.  The bulbs are too expensive for home use, but are already going into projection TVs.  With $61 Million in funding to date, Luxim's main hurdle will be to reduce the cost of their products enough to penetrate the vast home and office lighting market, which consumes tens of billions of bulbs each year.   
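To make the Lumens-versus-Watts distinction concrete, here is a hypothetical comparison, assuming a standard 60-Watt incandescent bulb emits roughly 800 Lumens:

```python
# Efficacy = Lumens delivered per Watt consumed.
# Assumption: a typical 60 W incandescent bulb emits roughly 800 Lumens.
incandescent_watts = 60.0
incandescent_lumens = 800.0

incandescent_efficacy = incandescent_lumens / incandescent_watts  # ~13.3 lm/W
luxim_efficacy = 8 * incandescent_efficacy  # '8 times as many Lumens per Watt'

# Watts a Luxim-class bulb would need for the same light output
watts_needed = incandescent_lumens / luxim_efficacy
print(f"Watts for the same 800 Lumens: {watts_needed:.1f}")
```

Under those assumptions, the same room-filling 800 Lumens would take only about 7.5 Watts, which is why the efficacy claim matters more than the wattage on the box.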

9) Ugobe : Ugobe sells a robotic dinosaur toy known as the Pleo.  A mere toy, especially a $350 toy, would not normally be on a list of technologies that promise to crease the fabric of human society.  However, a closer look at the Pleo reveals many impressive increments in the march to make inexpensive robots more lifelike.  The skin of the Pleo covers the joints, the Pleo has more advanced 'learning' abilities than $2500 robots from a few years ago, and the Pleo even cries when tortured, to the extent that it is difficult to watch this. 

The reason Ugobe is on this list is that I am curious to see what is the next product on their roadmap, so that I can gauge how quickly the technology is advancing.  The next logical step would be an artificial mammal of some sort, with greater intelligence and realistic fur.  The successful creation of this generation of robot would provide the datapoints to enable us to project the approximate arrival of future humanoid robots, for better or for worse.  Another company may leapfrog Ugobe in the meantime, but they are currently at the forefront of the race to create low-priced robotic toys. 

This concludes the list of nine companies that each could greatly alter our lives within the next several years.  Of these nine, at least three, Nanosolar, Tesla Motors, and 23andMe, have Google or Google's founders as investors.  The next 24 months have important milestones for each of these companies to cross (by which time I might have a new list of new companies).  For those that clear their respective near-term bars, there might just be a chance of attaining the dizzy heights that Google, Microsoft, or Intel has. 

Related :

The Impact of Computing

A Future Timeline for Automobiles

A Future Timeline for Energy

The Imminent Revolution in Lighting

Batteries Set to Advance, Finally

(crossposted on TechSector)

February 17, 2008 in Accelerating Change, Biotechnology, Economics, Energy, Nanotechnology, Technology, The Singularity | Permalink | Comments (6) | TrackBack (0)


The Top Ten Transhumanist Technologies

The Lifeboat Foundation has a special report detailing their view of the top ten transhumanist technologies that have some probability of 25 to 30-year availability.  Transhumanism is a movement devoted to using technologies to transcend biology and enhance human capabilities. 

I am going to list each of the ten technologies described in the report, provide my own assessment of high, medium, or low probability of mass-market availability by a given time horizon, and link to prior articles written on The Futurist about the subject.

10. Cryonics : 2025 - Low, 2050 - Moderate

I can see the value in someone who is severely maimed or crippled opting to freeze themselves until better technologies become available for full restoration.  But outside of that, the problem with cryonics is that very few young people will opt to risk missing their present lives to go into freezing, and elderly people can only benefit after revival when or if age-reversal technologies become available.  Since going into cryonic freezing requires someone else to decide when to revive you, and any cryonic 'will' may not anticipate numerous future variables that could complicate execution of your instructions, this is a bit too risky, even if it were possible.

9. Virtual Reality : 2012 - Moderate, 2020 - High

The Technological Progression of Video Games

The Next Big Thing in Entertainment, Part I, II, and III

The Mainstreaming of Virtual Reality

8. Gene Therapy : 2015 - Moderate, 2025 - High

The good news here is that gene sequencing techniques continue to become faster due to the computers used in the process themselves benefiting from Moore's Law.  In the late 1980s, it was thought that the human genome would take decades to sequence.  It ended up taking only years by the late 1990s, and today, would take only months.  Soon, it will be cost-effective for every middle-class person to get their own personal genome sequenced, and get customized medicines made just for them. 

Are you Prepared to Live to 100?

7. Space Colonization : 2025 - Low, 2050 - Moderate

While this is a staple premise of most science fiction, I do not think that space colonization will ever take the form that is popularly imagined.  Technology #2 on this list, mind uploading, and technology #5, self-replicating robots, will probably appear sooner than any capability to build cities on Mars.  Thus, a large spaceship and human crew become far less efficient than entire human minds loaded into tiny or even microscopic robots that can self-replicate.  A human body may never visit another star system, but copies of human minds could very well do so.

Nonetheless, if other transhumanist technologies do not happen, advances in transportation speed may enable space exploration in upcoming centuries.

6. Cybernetics : 2015 - High

Artificial limbs, ears, and organs are already available, and continue to improve.  Artificial and enhanced muscle, skin, and eyes are not far behind. 

5. Autonomous Self-Replicating Robots : 2030 - Moderate

This is a technology that is frightening, due to the ease with which humans could be quickly driven to extinction through a malfunction that produces rogue, endlessly replicating robots.  Assuming a disaster does not occur, this is the most practical means of space exploration and colonization, particularly if the robots contain uploads of human minds, as per #2.

4. Molecular Manufacturing : 2020 - Moderate, 2030 - High

This is entirely predictable through the Milli, Micro, Nano, Pico curves. 

3. Megascale Engineering (in space) : 2040 - Moderate

From the Great Wall of China in ancient times to Dubai's Palm Islands today, man-made structures are already visible from space.  But to achieve transhumanism, the same must be done in space.  Eventually, elevators extending hundreds of miles into space, space stations much larger than the current ISS (240 feet), and vast orbital solar reflectors will be built.  But, as stated in item #7, I don't think true megascale projects (over 1000 km in width) will happen before other transhumanist technologies render the need for them obsolete.

2. Mind Uploading : 2050 - Moderate

This is what I believe to be the most important technology on this list.  Today, when a person's hardware dies, their software, in the form of their thoughts, memories, and humor, must necessarily die with it.  This is an anachronism in a world where software files in the form of video, music, spreadsheets, documents, etc. can be copied to an indefinite number of hardware objects. 

If human thoughts can reside on a substrate other than human brain matter, then the 'files' can be backed up.  That is all there is to it. 

1. Artificial General Intelligence : 2050 - Moderate

This is too vast a subject to discuss here.  Some evidence of progress appears in unexpected places, such as when, in 1997, IBM's Deep Blue defeated Garry Kasparov in a chess match.  Ray Kurzweil believes that an artificial intelligence will pass the Turing Test (a bellwether test of AI) by 2029.  We will have to wait and see, but expect the unexpected, when you least expect it. 

August 25, 2007 in Accelerating Change, Biotechnology, Computing, Technology, The Singularity | Permalink | Comments (40) | TrackBack (0)

Tags: accelerating change, AI, cybernetics, gene therapy, singularity, transhuman, turing test


A Hornet-sized Robotic Insect Can Now Fly

A robotic insect, similar in size and weight to a wasp or hornet, has successfully taken flight at Harvard University (article and photo at MIT Technology Review).  This is an amazing breakthrough, because just a couple of years ago, such robots were pigeon-sized, and thus far less useful for detailed military and police surveillance. 

At the moment, the flight path is still only vertical, and the power source is external.  Further advances in the carbon polymer materials used in this robot will reduce weight further, enabling greater flight capabilities.  Additional robotics advances will reduce size down to housefly or even mosquito dimensions.  Technological improvements in batteries will provide on-board power with enough flight time to be useful.  All of this will take 5-8 years to accomplish.  After that, it may take another 3 years to achieve the capabilities for mass-production.  Even then, the price may be greater than $10,000 per unit.

Needless to say, by 2017-2020, this may be a very important military technology, where thousands of such insects are released across a country or region known to contain terrorists.  They could land on branches, light fixtures, and window panes, sending information to one another as well as to military intelligence.  Further into the future, if these are ever available for private use, then things could become quite complicated.

July 22, 2007 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (4) | TrackBack (0)


Economic Growth is Exponential and Accelerating, v2.0

If we were to make a list of subjects ranked by the gap between the civilizational importance of the topic and the lack of serious literature devoted to it, historical acceleration of economic growth would be very near the top of the list.  I wrote an article on the subject way back on January 29, 2006 (version 1.0), but now it is time for a much more substantial treatise. 

To whet your appetite, read the article "Are You Acceleration Aware?", which is the critical piece of any attempt at Futurism.

In the modern age, we take for granted that the US will grow at 3.5% a year, and that the world economy grows at 4% to 4.5% a year.  However, these are numbers that were unheard of in the 19th century, during which World GDP grew at under 2% a year.  Prior to the 19th century, annual World GDP growth was so low that changes from one generation to the next were virtually zero.  Brad DeLong has some data on World GDP from prehistoric times until 2000 AD. 

If I put historical per-capita GDP through 2000 in a logarithmic timescale, we see the following :

[Chart: historical per-capita World GDP plotted on a logarithmic timescale]

The theme of acceleration readily presents itself here, and even disruptive events like the Great Depression still do not cause more than a temporary deviation from the long-term trendline.  Another way to represent the data is to notice the shrinking intervals it takes for per-capita World GDP to double.

10000 BC to 1500 : 11500 years without doubling

1500 to 1830 : 330 years

1830 to 1880 : 50 years

1880 to 1915 : 35 years

1915 to 1951 : 36 years (Great Depression and World Wars in this period)

1951 to 1975 : 24 years (recovery to trendline)

1975 to 2003 : 28 years

2003 to 2024-2027? : 21-24 years (on current trends)
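The shrinking intervals above translate directly into rising annualized growth rates, via the relation (1 + g)^years = 2.  A minimal sketch of that arithmetic, using the intervals listed above:

```python
# Implied annualized growth rate from a doubling time:
# (1 + g) ** years = 2  =>  g = 2 ** (1 / years) - 1
def implied_growth(years):
    return 2 ** (1 / years) - 1

doubling_intervals = [
    ("1500 to 1830", 330),
    ("1830 to 1880", 50),
    ("1880 to 1915", 35),
    ("1915 to 1951", 36),
    ("1951 to 1975", 24),
    ("1975 to 2003", 28),
]

for period, years in doubling_intervals:
    print(f"{period}: {implied_growth(years):.2%} per year")
```

For instance, the 330-year doubling from 1500 to 1830 implies roughly 0.2% annual growth in per-capita World GDP, while the 24-year doubling from 1951 to 1975 implies close to 3%.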

This not only further reveals acceleration, but also indicates that massively disruptive world events still result in merely temporary deviations from the long-term trendline. 

Additionally, we can take the more granular IMF data of recent World GDP growth, and plot a trendline on it.  Both nominal and PPP growth rates are available, and are diverging due to the increasing size and growth rates of India and China.  Unfortunately, the IMF data only goes back to 1980, and 28 years are not enough to plot an ideal trendline, but nonetheless, the upward slope is distinct, and recessions (which still do not push World GDP growth into negative territory) are invariably followed by steep recoveries. 

[Chart: IMF World GDP growth rates, nominal and PPP, with trendlines, 1980-2007]

It is also important to note that the standard deviation of the IMF data for World GDP growth rates is about 1% a year, for both the nominal and PPP series (1.07% and 1.14% respectively, to be exact).  The rules of standard deviations dictate that 68% of the time a data point will fall within one standard deviation of the mean, 95% of the time within two standard deviations, and 99.7% of the time within three. 

Thus, in a simple example, if the World GDP growth trendline is currently at 4% a year, there is a 68% chance that the next year will be between 3% and 5%, and there is only a 0.3% chance that the next year will be below 1% or above 7% growth.  This means that a worldwide recession with a year of negative growth is extremely improbable, just as improbable as a year with stupendous 8% growth.  There is not a single year in the 1980-2007 IMF data with negative GDP growth, and virtually none under 1% growth. 
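These probabilities can be computed directly from the normal model; a quick sketch using Python's statistics.NormalDist, assuming the trendline mean of 4% and the 1.07% standard deviation quoted above:

```python
from statistics import NormalDist

# World GDP growth modeled as a normal distribution:
# trendline mean of 4% a year, standard deviation of 1.07% a year.
growth = NormalDist(mu=4.0, sigma=1.07)

p_negative = growth.cdf(0.0)   # probability of a year of negative world growth
p_below_1 = growth.cdf(1.0)    # probability of a year under 1% growth
p_within_1sd = growth.cdf(4.0 + 1.07) - growth.cdf(4.0 - 1.07)

print(f"P(growth < 0%): {p_negative:.5f}")
print(f"P(growth < 1%): {p_below_1:.5f}")
print(f"P(within one standard deviation): {p_within_1sd:.3f}")
```

Under this model, a year of negative world growth is a roughly 1-in-10,000 event, and a year under 1% growth is well under a 1% chance, which is consistent with its absence from the 1980-2007 record.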

Pessimists like to say that "the Great Depression will happen again", but not only did the Great Depression occur at a time when the trendline was at a lower annual growth rate than today, the Great Depression also consisted of 6 years of GDP falling below the trendline, simply because it followed a period of many years in which GDP was substantially above the trendline.  Furthermore, this was for US GDP.  World GDP's deviations may have been even less severe, as some nations, such as France, Japan, and China, were left relatively unscathed by the Great Depression. 

Now, what happens if we project these trendlines through the 21st century?   The dotted red line represents the median trend assuming that nominal and PPP growth rates converge at some intermediate level. 

[Chart: World GDP growth trendlines projected through the 21st century]

I can apply this trendline for World GDP growth, make assumptions of total world population to arrive at per capita World GDP growth, and add it back to the first graph.  The assumed growth rates, by decade, in per capita income are :

2007-2020 : 3.5%

2020-2030 : 3.5-4.0%

2030-2040 : 4.0-5.0%

2040-2050 : 5.0-6.0%

This leads to estimates for per-capita GDP at PPP, in 2007 dollars, to be :

2007 : $10,000

2020 : $15,155

2030 : $22,400

2040 : $32,600 - $36,000

2050 : $53,200 - $64,500

Which, when plotted, provides the following :

[Chart: projected per-capita World GDP through 2050, linear scale]

Or, when a longer view is taken, in terms of logarithmic periods going back from the year 2050, we see :

[Chart: per-capita World GDP in logarithmic periods going back from 2050]
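As a rough check, the decade growth bands listed above can be compounded from the $10,000 base; small differences from the figures in the table come from rounding of the intermediate decade estimates:

```python
# Compound the assumed per-capita growth bands from the 2007 base of $10,000.
# Each entry: (number of years, low annual rate, high annual rate), taken
# from the decade assumptions listed above.
bands = [
    (13, 0.035, 0.035),  # 2007-2020
    (10, 0.035, 0.040),  # 2020-2030
    (10, 0.040, 0.050),  # 2030-2040
    (10, 0.050, 0.060),  # 2040-2050
]

low = high = 10_000.0
for years, r_low, r_high in bands:
    low *= (1 + r_low) ** years
    high *= (1 + r_high) ** years

print(f"2050 per-capita World GDP (PPP, 2007 dollars): ${low:,.0f} to ${high:,.0f}")
```

The low end of the compounded band lands very close to the $53,200 figure above, while taking the top of every band overshoots the $64,500 figure slightly.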

Needless to say, this degree of acceleration in economic growth affects nearly every facet of the world in the 21st century.  From a continually rising stock market to the proliferation of millionaires to the rapid upliftment of all metrics of human development, massive abundance is a certainty.  The inevitable derivatives of wealth, such as the spread of democracy, the rising sophistication of human psychology, and a corresponding drop in warfare, will soon follow.  Resolving current problems, from reducing poverty in developing regions, to funding sophisticated healthcare technologies, to increasing literacy, to enabling ambitious space exploration, is merely a matter of time. 

Inevitably, even the average citizen in the mid-21st century will have access to many material and psychological opportunities that even the wealthiest of today do not have.  Turn that frown upside down, for you are in for an exciting time as you ride the tsunami of prosperity that is about to immerse you.

This article is the inaugural entry into a new category here at The Futurist titled "Core Articles".  These are the articles which are designed to form the cornerstone of a comprehensive understanding of the future, and are suggested reading for anyone interested in the subject.  Additional articles will be upgraded to "Core" status as augmentations to them accumulate. 

Related :

These Are the Best of Times

The Stock Market is Exponentially Accelerating

The Psychology of Economic Progress

The Age of Democracy

The Winds of War, the Sands of Time

Are You Acceleration Aware?

July 12, 2007 in Accelerating Change, Core Articles, Economics, The Singularity | Permalink | Comments (28) | TrackBack (0)


The Semantic Web

The World Wide Web, after just 12 years in mainstream use, has become an infrastructure accessed by hundreds of millions of people every day, and the medium through which trillions of dollars a year are transacted.  In this short period, the Web has already been through a boom, a crippling bust, and a renewal to full grandeur in the modern era of 'Web 2.0'.

But imagine, if you could, a Web in which web sites are not just readable in human languages, but in which information is understandable by software to the extent that computers themselves would be able to perform the task of sharing and combining information.  In other words, a Web in which machines can interpret the Web more readily, in order to make it more useful for humans.  This vision for a future Internet is known as the Semantic Web. 

Why is this useful?  Suppose that scientific research papers were published in a Semantic Web language that enabled their content to be integrated with other research publications across the world, making research collaboration vastly more efficient.  For example, a scientist running an experiment can publish his data in Semantic format, and another scientist not acquainted with the first one could search for the data and build off of it in real time.  Tim Berners-Lee, as far back as 2001, said that this "will likely profoundly change the very nature of how scientific knowledge is produced and shared, in ways that we can now barely imagine." 

Some are already referring to the Semantic Web as 'Web 3.0'.  This type of labeling is a reliable litmus test of a technology falling into the clutches of emotional hype, and thus caution is warranted in assessing the true impact of it.  I believe that the true impact of the Semantic Web will not manifest itself until 2012 or later.  Nonetheless, the Semantic Web could do for scientific research what email did for postal correspondence and what MapQuest did for finding directions - eliminate almost all of the time wasted in the exchange of information. 

June 11, 2007 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (3) | TrackBack (0)


New Gadgets for the Digital Home - A BusinessWeek Slideshow

BusinessWeek has a slideshow revealing new electronic devices that a consumer could use to enhance (or complicate) certain aspects of daily life.  Among these is the very promising Sunlight Direct System, which I discussed back on September 5, 2006.  Others, such as the Lawnbott ($2500), cost far more than the low-tech solution of hiring people to mow your lawn for the entire expected life of the device, ensuring that mass-market adoption is at least 4-5 years away. 

All of this is a very strong and predictable manifestation of The Impact of Computing, which mandates that entirely new categories of consumer electronics appear at regular intervals, and that they subsequently become cheaper yet more powerful at a consistent rate each year.  Let us observe each of these functional categories, and the rate of price declines/feature enhancements that they experience.

May 28, 2007 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (7) | TrackBack (0)


A Future Timeline for Automobiles

Many streams of accelerating technological change, from energy to The Impact of Computing, will find themselves intersecting in one of the largest consumer product industries of all.  Over 70 million automobiles were produced worldwide in 2006, with rapid market penetration underway in India and China.  Indisputably, cars greatly affect the lives of consumers, the economies of nations, and the market forces of technological change. 

I thus present a speculative timeline of technological and economic events that will happen for automobiles.  This has numerous points of intersection with the Future Timeline for Energy. 

2007 : The Tesla Roadster emerges to not only bring Silicon Valley change agents together to sow the seeds of disruption in the automotive industry, but also to immediately transform the image of electric vehicles from 'punishment cars' into status symbols of dramatic sex appeal.  Even at the price of $92,000, demand outstrips supply by an impressive margin. 

2009 : The Automotive X-Prize of $25 Million (or more) is successfully claimed by a car designed to meet the 100 mpg, mass-producible goal set by the X Prize Foundation.  Numerous companies spring forth from prototypes tested in the contest. 

2010 : With gasoline at $4/gallon, established automobile companies simultaneously release plug-in hybrid vehicles.  Hybrid, plug-in hybrid, and fully electrical cars represent 5% of total new automobiles sold in the US, even if tax incentives have been a large stimulus.  The habit of plugging in a car overnight to charge it starts to become routine for homeowners with such cars, but apartment dwellers are at a disadvantage in this regard, not having an outlet near their parking spot. 

2011 : Two or more iPod ports, 10-inch flat-screen displays for back-seat passengers, parking space detection technology, and embedded Wi-Fi adapters that can wirelessly transfer files onto the vehicle's hard drive from up to 500 feet away are standard features for many new cars in the $40,000+ price tier. 

2012 : Over 100 million new automobiles are produced in 2012, up from 70 million in 2006.  All major auto manufacturers are racing to incorporate new nanomaterials that are lighter than aluminium yet stronger and more malleable than steel.  The average weight of cars has dropped by about 5% from what it was for the equivalent style in 2007. 

2013 : Tesla Motors releases a fully electric 4-door sedan that is available for under $40,000, which is only 33% more than the $30,000 that the typical fully-loaded gasoline-only V6 Accord or Camry sells for in 2013. 

2014 : Self-driving cars are now available in the luxury tier (priced $100,000 or higher).  A user simply enters in the destination, and the car charts out a path (similar to Google Maps) and proceeds on it, in compliance with traffic laws.  However, a software malfunction results in a major traffic pile-up that garners national media attention for a week.  Subsequently, self-driving technologies are shunned despite their superior statistical performance relative to human drivers. 

2015 : As predicted in early 2006 on The Futurist, a 4-door sedan with a 240 hp engine, yet costing only 5 cents/mile to operate (the equivalent of 60 mpg of gasoline), is widely available for $35,000 (which is within the middle-class price band by 2015 under moderate assumptions for economic growth).  This is the result of combined advances in energy, lighter nanomaterials, and computerized systems. 

2016 : An odd change has occurred in the economics of car depreciation.  Between 1980 and 2007, annual car depreciation rates decreased due to higher quality materials and better engine design, reaching as little as 12-16% a year for the first 5 years of ownership.  Technology pushed back the forces of depreciation. 

However, by 2016, 40% of a car's initial purchase price consists of electronics (up from under 20% in 2007 and just 5% in 1985), which depreciate at a rate of 25-40% a year.  The entire value of the car is pulled along by the 40% of it that undergoes rapid price declines, and thus total car depreciation now occurs at a faster rate of up to 20% a year for the first 5 years.  This is a natural progression of The Impact of Computing, and wealthier consumers are increasingly buying new cars as 'upgrades' to replace models with obsolete technologies after 5-7 years, much as they would upgrade a game console, rather than waiting until mechanical failure occurs in their current car.  Consumers also conduct their own upgrades of certain easily-replaced components, much as they would upgrade the memory or hard drive of a PC.  Technology has thus accelerated the forces of depreciation. 

2018 : Among new cars sold, gasoline-only vehicles are now a minority.  Millions of electricity-only vehicles are charged through solar panels on a daily basis, relieving those consumers of a fuel expenditure that was as high as $2000/year in 2007.  Even when sunlight is obscured and the grid is used, some electrical vehicles cost as little as 1 cent/mile to operate.

2020 : New safety technologies that began to appear in mainstream cars around 2012, such as night vision, lane departure correction, and collision-avoiding cruise control, have replaced the existing fleet of older cars over the decade, and now US annual traffic fatalities have dropped to 25,000 in 2020 from 43,000 in 2005.  Given the larger US population in 2020 (about 350 Million), this is a reduction in traffic deaths by half on a per-capita basis. 
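
The per-capita claim can be checked quickly (the roughly 296 million figure for the 2005 US population is my assumption; the 350 million figure for 2020 comes from the text):

```python
# Per-capita check of the projected traffic-fatality decline.
deaths_2005, pop_2005 = 43_000, 296e6   # 2005 population is an assumed figure
deaths_2020, pop_2020 = 25_000, 350e6   # 2020 population is from the text

rate_2005 = deaths_2005 / pop_2005 * 1e6   # deaths per million people
rate_2020 = deaths_2020 / pop_2020 * 1e6
print(f"{rate_2005:.0f} -> {rate_2020:.0f} deaths per million")  # roughly halved
```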

2024 : Self-driving cars have overcome the stigma of a decade prior, and are now widely used.  But they still have not fully displaced manual driving, due to user preferences in this regard.  Certain highways permit only self-driven cars, with common speed limits of 100 mph or more. 

2025-30 : Electricity (indeed, clean electricity) now fuels nearly all passenger car miles driven in the US.  There is no longer any significant fuel consumption cost associated with driving a car, although battery maintenance is a new aspect of car ownership.  Many car bodies now include solar-energy-absorbent materials that charge a parked car during periods of sunlight.  Leaving such cars out in the sun has supplanted the practice of parking in the shade or in covered parking.

Pervasive use of advanced nanomaterials has ensured that the average car weighs only 60% as much as a 2007 counterpart, yet is over twice as resistant to dents. 

______________________________________________________________

I believe that this timeline represents the combination of median forecasts across all technological and economic trends that influence cars, and will be perceived as too optimistic or too pessimistic by an equal number of readers.  Let's see how closely reality matches this timeline. 

May 05, 2007 in Accelerating Change, Energy, Nanotechnology, Technology, The Singularity | Permalink | Comments (26) | TrackBack (0)


World and Asian Semiconductor Revenue Growth

I stumbled upon something while reading the Asian Development Bank's report on the world economy.  No big surprises here, but one tiny chart stood out.  The column chart of worldwide and Asian semiconductor sales from 2001 to 2006 indicates that while Asia accounted for just one third of semiconductor sales in 2001, it accounts for half of them today. 

This encompasses a number of the main topics I discuss on The Futurist.  From The Impact of Computing (which is thus higher in Asia than in the rest of the world) to the accelerating rate of GDP growth (which requires so many large Asian countries, totaling 3 billion people, to all grow at 6% or more per year just to keep total world GDP at its trendline).  From cellphone dispersion to PC adoption to enterprise server and router usage, semiconductor sales are just about the best indicator of economic and technological progress. 

Let's see how big a share of world semiconductor revenues Asia can ultimately capture before the relative maturity of the US market is emulated. 

Related :

Are You Acceleration Aware?

Economic Growth is Exponential and Accelerating

March 30, 2007 in Accelerating Change, China, Computing, Economics, India, Technology, The Singularity | Permalink | Comments (0) | TrackBack (0)


The Mainstreaming of Virtual Reality

BusinessWeek has an article and slideshow on the rapidly diversifying applications of advanced VR technology. 

This is a subject that has been discussed heavily here on The Futurist, through articles like The Next Big Thing in Entertainment, Parts I, II, and III, as well as Virtual Touch Brings VR Closer.  The coverage of this topic by BusinessWeek is a necessary and tantalizing step towards the creation of mass-market products and technologies that will enhance productivity, defense, healthcare, and entertainment. 

Technologically, these applications and systems are heavily encapsulated within The Impact of Computing with very few components that are not exponentially improving.  Thus, cost-performance improvements of 30-58% a year are guaranteed, and will result in stunningly compelling experiences as soon as 2012. 

To the extent that many people who seek reading material about futurism are primarily driven by the eagerness to experience 'new types of fun', this area, more than any other discussed here, will deliver the majority of new fun that consumers can experience in coming years. 

Update (4/7/07) : HP unveils a variety of new technologies, from screens to sensors, designed to augment the realism of games.   

March 25, 2007 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (8) | TrackBack (0)


2006 Technology Breakthrough Roundup

The MIT Technology Review has compiled a convenient list of the most significant technological advances of 2006.

The Year in Information Technology

The Year in Energy : Plug-in cars, batteries, solar energy.

The Year in Biotechnology : A cure for blindness, and more.

The Year in Nanotechnology : Displays, sensors, and nanotube computers.

Most of the innovations in the articles above are in the laboratory phase, which means that about half will never progress enough to make it to market, and those that do will take 5 to 15 years to directly affect the lives of average people (remember that the laboratory-to-market transition period itself continues to shorten in most fields).  But each one of these breakthroughs has world-changing potential, and the fact that so many fields are advancing simultaneously guarantees a massive new wave of improvement to human lives. 

This scorching pace of innovation is entirely predictable, however.  To internalize the true rate of technological progress, one merely needs to appreciate :

The Milli, Micro, Nano, Pico curves

The Impact of Computing

The Accelerating Rate of Change

We are fortunate to live in an age when a single calendar year will invariably yield multiple technological breakthroughs, the details of which are easily accessible to laypeople.  In the 18th century, entire decades would pass without any observable technological improvements, and people knew that their children would experience a lifestyle identical to their own.  Today, we know with certainty that our lives in 2007 will have slight but distinct and numerous improvements in technological usage over 2006. 

Into the Future we continue, where 2007 awaits..

December 30, 2006 in Accelerating Change, Biotechnology, Computing, Energy, Nanotechnology, Science, Technology, The Singularity | Permalink | Comments (3) | TrackBack (0)


Are You Acceleration Aware?

The single most necessary component of any attempt to make predictions about the future is a deep internalized understanding of the accelerating, exponential rate of change.  So many supposed 'experts' merely project the rate of progress as a linear trend, or even worse, fail to recognize progress at all, and make predictions that end up being embarrassingly wrong.

For example, recall that in the early 1970s, everyone thought that by 2000, all of the Earth's oil would be used up.  It has not been, and the average American spends fewer hours' worth of wages on gasoline each week than in 1970.   

Equally simple-minded predictions are made today.  How often do we read things like :

"By 2080, Social Security will no longer be able pay benefits, leaving many middle Americans with insufficient retirement funds."

2080?!  By 2080, there will be no 'middle Americans'.  There will be no people in their current form, as per their own choice, as we shall see later in this article.

Or how about this one?  I see nonsense like this in Pat Buchanan's books. 

"Immigration to the US from third-world countries will make such people a majority of the US population by 2100, making the US a third-world country."

'Third-world'?  That term is already obsolete now that the Cold War has ended.  Plus, aren't many of these same isolationists worried that India and China are overtaking us economically and benefiting from the 'outsourcing' of US jobs?  Isn't that mutually exclusive with a belief that the same countries will always be 'third-world'? 

In any event, the world of 2100 will be more different from 2006 than 2006 is different from 8000 BC.

Here is why :

The rate of change in many aspects of human society, and even some aspects of all life on Earth, moves on an exponential trend, not a simple linear one. 

Read Ray Kurzweil's essay on this topic first.  About 20% of his article is just too optimistic, but he does a good job of describing the evidence of accelerating change in multiple, unrelated areas.  The Wikipedia article is also useful. 

Additionally, right here on The Futurist, we have identified and discussed multiple accelerating trends across seemingly unrelated areas :

1) The Impact of Computing is a critically important concept, as it surrounds each of you in your homes every day.  Observation of the new gadgets you are adopting into your life reveals many things about what the future may hold, as simple acts such as upgrading your cellphone and buying a new iPod hold much deeper long-term significance.  This will cause major changes in entertainment, travel, autos, and business productivity in the very near future. 

2) Economic growth is exponential and accelerating.  World GDP grows at a trendline of 4.5% a year, as opposed to under 1% a year in the 18th century and under 0.1% a year before the 16th century.  More visible evidence of accelerating economic progress is found in long-term charts of the stock market.  Furthermore, the number of millionaires in the world is rising by several percent each year.  The ranks of the wealthy did not grow this rapidly even just a few decades ago. 

3) Astronomical observation technology is also accelerating exponentially, which will result in the detection of Earth-like planets around other stars as soon as 2011.  Transportation speeds also appear to be on an exponential curve of increase, even if significant jumps are decades apart.  The cost to send a man to the Moon today in relation to US GDP is only 1/30th as much as it was in 1969. 

4) Biotechnology is converging with information technology, and by some estimations medical knowledge is doubling every 8 years at this point.  There is a lot of research underway that could directly or indirectly increase human life expectancy, which does appear to have been increasing at an accelerating rate throughout human history.  But I am a bit more cautious about predicting major gains here just yet, as each successive unit gain in lifespan might require increasingly greater research efforts.  Negatives are also rising, as the dropping cost of small-scale biotech projects increases the ease with which small groups could create bioterror agents.  The probability of a bioterror attack that kills over 1 million people before 2025 is very high. 

5) Even energy has burst forth from what appeared to be a century of stagnation into an area of rapid technological advances.  Beyond simple market forces like the price of oil, the revolutions in computing, biotechnology, and nanotechnology are all converging on the field of energy through multiple avenues to chip away at the seemingly gargantuan obstacles we face.  Energy, too, is in the process of becoming a knowledge-based technology, and hence guaranteed to see accelerating exponential innovation. 

The Milli, Micro, Nano, Pico curves are another dimension from which to view accelerating trends in an all-encompassing view.  Internalize this chart, and much of the technological progress of the last 50 years seems natural, as do all of the future predictions here that may take most people by surprise. 

Of course, not everything is accelerating.  If a cat catches a bird, that action is no different today than it was 30, 3000, or 3 million years ago.  The gestation period for a human is still 9 months, just as it was 30,000 years ago.  The trends we have seen above do not appear to be on a path to change these natural processes.  But don't assume that even these are permanently immune to change, as accelerating forces continue to swallow up more pieces of our world.

The Technological Singularity is defined as the time at which the accelerating rate of change reaches the point of surpassing human capability.  To visualize what this can mean, ponder this chart, and then this chart.  All credible futurists agree that such an event will occur, and differ only on predictions of the timing or nature of the Singularity.  My own prediction is for 2050, but this is a vast subject that we will save for another day. 

In any event, we need not worry just yet about whether the Singularity of 44 +/- 20 years hence will be a positive or a negative.  Much more will be written about that in coming years, as many more people grasp the concept.  For the present, just be observant of the accelerating trends that surround you, for the invisible forces that run the world gain much more clarity through that lens.  There is much to gain by being acceleration aware..

December 03, 2006 in Accelerating Change, The Singularity | Permalink | Comments (16) | TrackBack (0)


nVidia Graphics Technology Advancement

Check out nVidia's homepage for a sample of the graphics that its new graphics processors are capable of.  Yes, that face is entirely computer generated, and can be rendered on a PC with nVidia's products available for a grand total of under $2000.  While this demonstration, of course, is constructed under optimal conditions to display pre-selected visuals for maximum 'Ooooh' effect, this will be the level of graphical detail that mainstream games will contain by 2012.  Our prediction of a radical reshaping of consumer entertainment appears to be on track. 

An important accomplishment of this demo is the apparent surmounting of the Uncanny Valley pitfall.  More demos are needed to confirm that this obstacle has been overcome, however.

Related :

The Technological Progress of Video Games

The Next Big Thing in Entertainment

Next Generation Graphics, a Good Intro

November 26, 2006 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (2) | TrackBack (0)


Telescope Power - Yet Another Accelerating Technology

Earlier, we had an article about how our advancing capability to observe the universe would soon enable the detection of Earth-like planets in distant star systems.  Today, I present a complementary article, in which we will examine the progression in telescopic power, why the rate of improvement is so much faster than it was just a few decades ago, and why amazing astronomical discoveries will be made much sooner than the public is prepared for. 

The first telescope used for astronomical purposes was built by Galileo Galilei in 1609, after which he discovered the 4 large moons of Jupiter.  The rings of Saturn were discovered by Christiaan Huygens in 1655, with a telescope more powerful than Galileo's.  Consider that the planet Uranus was not detected until 1781, and similar-sized Neptune was not detected until 1846.  Pluto was not observed until 1930.  That these discoveries were decades apart indicates what the rate of progress was in the 17th, 18th, 19th, and early 20th centuries. 

The first extrasolar planet was not detected until 1995, but since then, hundreds more with varying characteristics have been found.  In fact, some of the extrasolar planets detected are even the same size as Neptune.  So while an object of Neptune's size in our own solar system (4 light-hours away) could remain undetected from Earth until 1846, we are now finding comparable bodies in star systems 100 light years away.  This wonderful, if slightly outdated chart provides details of extrasolar planet discoveries. 

The same goes for observing stars themselves.  Many would be surprised to know that humanity had never observed a star (other than the sun) as a disc rather than a mere point of light, until the Hubble Space Telescope imaged Betelgeuse in the mid 1990s.  Since then, several other stars have been resolved into discs, with details of their surfaces now apparent.

So is there a way to string these historical examples into a trend that projects the future of what telescopes will be able to observe?  The extrasolar planet chart above seems to suggest that in some cases, the next 5 years will have a 10x improvement in this particular capacity - a rate comparable to Moore's Law.  But is this just a coincidence or is there some genuine influence exerted on modern telescopes by the Impact of Computing? 

Many advanced telescopes, both orbital and ground-based, are in the works as we speak.  Among them are the Kepler Space Observatory, the James Webb Space Telescope, and the Giant Magellan Telescope, which all will greatly exceed the power of current instruments.  Slightly further in the future is the Overwhelmingly Large Telescope (OWL).  The OWL will have the ability to see celestial objects that are 1000 times as dim as what the Hubble Space Telescope (HST) can observe, and 5 trillion times as faint as what the naked eye can see.  The HST launched in 1990, and the OWL is destined for completion around 2020 (for the moment, we shall ignore the fact that the OWL actually costs less than the HST).  This improvement factor of 1000 over 30 years can be crudely annualized into a 26% compound growth rate.  This is much slower than the rate suggested in the extrasolar planet chart, however, indicating that the rate of improvement in one aspect of astronomical observation does not automatically scale to others.  Still, approximately 26% a year is hugely faster than progress was when it took 65 years after the discovery of Uranus to find Neptune, a body with half the brightness.  65 years for a doubling is a little over 1% a year improvement between 1781 and 1846.  We have gone from having one major discovery per century to having multiple new discoveries per decade - that is quite an accelerating curve. 
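
The two annualized rates quoted above can be reproduced directly from the figures in the text:

```python
# Annualizing the telescope-sensitivity improvements described above.
factor_owl = 1000 ** (1 / 30)   # 1000x gain between HST (1990) and OWL (~2020)
factor_neptune = 2 ** (1 / 65)  # 2x brightness gap closed between 1781 and 1846

print(f"HST -> OWL: {factor_owl - 1:.0%} per year")                  # ~26%
print(f"Uranus -> Neptune era: {factor_neptune - 1:.1%} per year")   # ~1.1%
```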

We can thus predict with considerable confidence that the first Earth-like planet will make headlines in 2010 or 2011, and by 2023, we will have discovered thousands of such planets.  This means that by 2025, a very important question will receive considerable fuel on at least one side of the debate...

September 28, 2006 in Accelerating Change, Science, Space Exploration, Technology, The Singularity | Permalink | Comments (14) | TrackBack (0)


Virtual Touch Brings VR Closer

Scientists have created a touch interface that, while smooth, can provide the user with a simulated sensation of a variety of surfaces, including that of a sharp blade or needle (from MIT Technology Review).

By controlling the direction in which pressure is applied to the skin when a user's finger is run across the smooth surface, the brain can be tricked into feeling a variety of pointed or textured objects where there are none.  Essentially, this is the touch equivalent of an optical illusion. 

While this technology is still in the earliest stages of laboratory testing, within 15 years, it will be commercially viable.  By then, it will find many uses in medicine, defense, education, entertainment, and the arts.  Examples of practical applications include training medical students in surgical techniques, or creating robots with hands that can perfectly duplicate human characteristics. 

This will be one of the critical components of creating compelling and immersive virtual reality environments, and the progress of this technology between now and 2020 will enable prediction of the specific details and capabilities of virtual reality systems for consumers.  A fully immersive VR environment available to the average household has already been predicted here, and now one more key component appears to be well on track. 

Update : More from Businessweek

August 27, 2006 in Accelerating Change, Technology, The Singularity | Permalink | Comments (4) | TrackBack (0)


Broadband Speeds of 50 Mbps for $40/month by 2010

In 1999, maybe 50 million US households had dial-up Internet access at 56 kbps speeds.  In 2006, there are 50 million broadband subscribers, with 3-10 Mbps speeds.  This is roughly a 100X improvement in 7 years, causing a massive increase in the utility of the Internet over this period.  The question is, can we get an additional 10X to 30X improvement in the next 4 years, to bring us the next generation of Internet functionality?  Let's examine some new technological deployments in home Internet access.
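
As a back-of-envelope sketch, the 'roughly 100X in 7 years' figure annualizes as follows:

```python
# Annualizing the dial-up -> broadband speed jump described above.
improvement = 100   # rough factor: 56 kbps (1999) to several Mbps (2006)
years = 7

annual = improvement ** (1 / years) - 1
print(f"~{annual:.0%} per year")  # ~93% per year, compounded
```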

Verizon's high-speed broadband service, known as FIOS, is currently available to about 3 million homes across the US, with downstream speeds of 5 Mbps available for $39.95/month and higher speeds available for greater prices.  How many people subscribe to this service out of the 3 million who have the option is not publicly disclosed.

However, Verizon will be upgrading to a more advanced fiber-to-the-home standard that will increase downstream speeds by 4X and upstream speeds by 8X.  Verizon predicts that this upgrade will permit it to offer broadband service at 50 or even 100 Mbps to homes on its FIOS network.  Furthermore, the number of homes with access to FIOS service will rise from the current 3 million to 6 million by the end of 2006. 

Verizon's competitors will, of course, offer similar speeds and prices shortly thereafter.

The reason this is significant is that it falls precisely within the concept of the Impact of Computing.  The speed of the Internet service increases by 4X to 8X, while the number of homes with access to it increases by 2X, for an effective 8X to 16X increase in Impact, and the associated effects on society.  High-definition video streaming, video blogging, video wikis, and advanced gaming will all emerge as rapidly adopted new applications as a result. 
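
The 'effective 8X to 16X increase in Impact' is simply the product of the two gains:

```python
# The "effective Impact" multiplication described above.
speed_gain_low, speed_gain_high = 4, 8   # downstream 4x, upstream up to 8x
homes_gain = 2                            # FIOS footprint: 3M -> 6M homes

impact_low = speed_gain_low * homes_gain
impact_high = speed_gain_high * homes_gain
print(f"Effective Impact: {impact_low}x to {impact_high}x")  # 8x to 16x
```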

We often hear about how Japan and South Korea already have 100 Mbps broadband service while the US languishes at 3-10 Mbps with little apparent progress.  True, but Africa has vast natural resources while Taiwan, Israel, and Switzerland do not.  Which countries make better use of the advantages available to them?  In the same way, South Korea and Japan may have many avid online gamers, but in the last 2 years they have not used their amazing high-speed infrastructure to create businesses like Google AdWords, Zillow, MySpace, or Wikipedia.  The US has spawned these powerful consumer technologies even with low broadband speeds, due to an innovative and fertile entrepreneurial climate that exceeds even that of advanced nations like Japan and South Korea.  Just imagine the innovations that will emerge with the greatly enhanced bandwidth that will soon be available to US innovators. 

Give the top 80 million American households and small businesses access to 50 Mbps Internet connections for $40/month by 2010, and they will produce trillions of dollars of new wealth, guaranteed.

Related :

The Next Big Thing in Entertainment

The Impact of Computing

Why Do The Biggest Technological Changes Take Almost Everyone By Surprise?

The End of Rabbit Ears, a Billion More Broadband Users

July 28, 2006 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (13) | TrackBack (0)


The Nanotech Report 2006 - Key Findings

The 2006 edition of the Nanotech Report from Lux Research was published recently.  This is something I make a point to read every year, even if only a brief summary is available for free. 

Some of the key findings that are noteworthy :

1) Nanotechnology R&D reached $9.6 billion in 2005, up 10% from 2004.  This is unremarkable when one considers that the world economy grew 7-8% in nominal terms in 2005.  Upon closer examination of the subsets of R&D, however, corporate R&D and venture capital together grew 18% in 2005 to hit $5 billion.  This means that many technologies are finally graduating from basic research laboratories and are being turned into products, and that investment in nanotechnology is now possible.  This also confirms my estimation that the inflection point of commercial nanotechnology was in 2005. 

2) Nanotechnology was incorporated in $30 billion of manufactured goods in 2005 (mostly escaping notice).  This is projected to reach $2.6 trillion of manufactured goods by 2014, or a 64% annual growth rate.  Products like inexpensive solar roof shingles, lighter yet stronger cars yielding 60 mpg, stain and crease resistant clothes, and thin high-definition displays will be common. 

But a deeper concept worth internalizing is how an extension of the Impact of Computing will manifest itself.  If the quality of nanotechnology per dollar improves at the same 58% annual rate as Moore's Law (a modest assumption), combining this qualitative improvement with the 64% annual dollar growth yields an effective Impact of Nanotechnology of (1.58)*(1.64) = 2.59, or roughly 160% growth per year.  As the base gets larger, this will become very visible.
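
Both growth figures can be verified from the numbers in the report (the nine-year span 2005-2014 is taken from the text):

```python
# Checking the two growth figures above: the 2005 -> 2014 nanotech revenue CAGR,
# and the combined "Impact of Nanotechnology" rate.
cagr = (2.6e12 / 30e9) ** (1 / 9) - 1   # $30B (2005) -> $2.6T (2014), 9 years
combined = 1.58 * 1.64 - 1              # quality gain x dollar gain

print(f"Revenue CAGR: {cagr:.0%}")                         # ~64%
print(f"Combined Impact growth: {combined:.0%} per year")  # ~159%, i.e. the ~160% quoted
```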

3) Nanotech-enabled products on the market today command a price premium of 11% over traditional equivalents, even if the nanotechnology is not directly noticed. 

The next great technology boom is upon us, and it is beginning now. 

May 30, 2006 in Accelerating Change, Computing, Nanotechnology, Science, Technology, The Singularity | Permalink | Comments (4) | TrackBack (0)


Next-Generation Graphics - A Good Intro

IGN has a good article on what level of sophistication can be expected in video game graphics in the next couple of years.  While their optimism about the period between now and 2008 is cautious, this reaffirms the technological trends and is consistent with my prediction that the descendants of modern video games will become the most popular form of home entertainment by 2012, mostly at the expense of television. 

As always, viewing the pictorial history of video games gives an idea of the rate of progress that one can expect in the next decade. 

May 06, 2006 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (2) | TrackBack (0)


Virtual Worlds - BusinessWeek Catches On

The cover story of BusinessWeek this week is devoted to virtual game worlds and the real economic opportunities that some entrepreneurs are finding in them. 

I spoke of exactly this less than a month earlier, on April 1, about how video games would evolve into an all-encompassing next generation of entertainment to displace television, and also become a huge ecosystem for entrepreneurship.  It seems that we are on the cusp of this vision becoming reality (by 2012, as per my prediction). 

April 25, 2006 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (0) | TrackBack (0)


Milli, Micro, Nano, Pico

What would be the best way to measure, and predict, technological progress?  One good observation has been The Impact of Computing, but why has computing occurred now, rather than a few decades earlier or later?  Why is nanotechnology being talked about now, rather than much earlier or later?

Engineering has two dimensions of progress - the ability to engineer and manufacture designs at exponentially smaller scales, and the ability to engineer projects of exponentially larger complexity.  In other words, progress occurs as we design in increasingly intricate detail, simultaneously scale this intricacy to larger sizes, and learn to mass-produce these designs. 

For thousands of years, the grandest projects involved huge bricks of stone (the Pyramids, medieval castles).  The most intricate carvings by hand were on the scale of millimeters, but scaled only to the size of hand-carried artifacts.  Eventually, devices such as wristwatches were invented, that had moving parts on a millimeter scale.

At the same time, engineering on a molecular level first started with the creation of simple compounds like hydrochloric acid, and over time graduated to complex chemicals, organic molecules, and advanced compounds used in industry and pharmaceuticals.  We are currently able to engineer molecules that have tens of thousands of atoms within them, and this capability continues to get more advanced. 

The chart below is a rough plot of the exponentially shrinking detail of designs which we can mass-produce (the pink line), and the increasingly larger atom-by-atom constructs that we can create (the green line).  Integrated circuits became possible as the pink line got low enough in the 1970s and 80s, and life-saving new pharmaceuticals have emerged as the green line got to where it was in the 1990s and today.  The two converge right about now, which is not some magical inflection point, but rather the true context in which to view the birth of nanotechnology. 


As we move through the next decade, molecular engineering will be capable of producing compounds tens of times more complex than today, creating amazing new drugs, materials, and biotechnologies.  Increasingly fine design and manufacturing capabilities will allow computer chips to accommodate 10 billion transistors in less than one square inch, and for billions of these to be produced.  Nanotechnology will be the domain of all this and more, and while the beginnings may appear too small to notice to the untrained observer, the dual engineering trends of the past century and earlier converge to the conception of this era now.

Further into the future, molecule-sized intelligent robots will be able to gather and assemble into solid objects almost instantly, and move inside our body to monitor our health and fight pathogens without our noticing.  Such nanobots will change our perception of physical form as we know it.  Even later, picotechnology, or engineering on the scale of trillionths of a meter - that of subatomic particles - will be the frontier of mainstream consumer technology, in ways we cannot begin to imagine today.  This may coincide with a Technological Singularity around the middle of the 21st century. 

For now, though, we can sit back and watch the faint trickle of nanotechnology headlines, products, and wealth thicken and grow into a stream, then a river, and finally a massive ocean that deeply submerges our world in its influence. 

April 22, 2006 in Accelerating Change, Biotechnology, Nanotechnology, Technology, The Singularity | Permalink | Comments (3) | TrackBack (0)


The Next Big Thing in Entertainment - Part II

Continuing from Part I, where a case is made that the successor to video games, virtual reality, will draw half of all time currently spent on television viewership by 2012. 

The film industry, on the other hand, has far less of a captive audience than television, and thus evolved to be much closer to a meritocracy.  Independent films with low budgets can occasionally do as well as major studio productions, and substantial entrepreneurship is conducted towards such goals. 

This is also a business model that continually absorbs new technology, and even has a category of films generated entirely through computer animation.  A business such as Pixar could not have existed in the early 1990s, but from Toy Story (1995) onwards, Pixar has produced seven consecutive hits, and continues to generate visible increases in graphical sophistication with each film.  At the same time, the tools that were once accessible only to Pixar-sized budgets are now starting to become available to small indie filmmakers. 

Even as the factors in Part I draw viewers away from mediocre films, video game development software itself can be modified and dubbed to make short films.  Off-the-shelf software is already being used for this purpose, in an artform known as machinima.  While most machinima films today appear amateurish and choppy, in just a few short years the technology will enable the creation of Toy Story-calibre indie films. 

By democratizing filmmaking, machinima may effectively do to the film industry what blogs did to the mainstream media.  In other words, a full-length feature film created by just 3 developers, at a cost of under $30,000, could be quickly distributed over the Internet and gain popularity in direct proportion to its merit.  Essentially, almost anyone with the patience, skill, and creativity can aspire to become a filmmaker, with very little financing required at all.  This too, just like the blogosphere before it, will become a viable form of entrepreneurship, and create a new category of self-made celebrities. 

At the same time, machinima will find a complementary role to play among the big filmmakers as well, just as blogs are used for a similar purpose by news organizations today.  Peter Jackson or Steven Spielberg could use machinima technology to slash special-effects costs from millions to mere thousands of dollars.  Furthermore, since top films have corresponding games developed alongside them, machinima fits nicely in between as an opportunity for the fan community to create 'open source' scenes or side stories of the film.  This helps the promotion and branding of the original film, and thus would be encouraged by the producer and studio. 

Thousands of people will partake in the creation of machinima films by 2010, and by 2012 one of these films will be in the top 10 of all films created that year, in terms of the number of Google search links it generates.  These machinima films will have the same effect on the film industry that the blogosphere has had on the mainstream media. 

There you have it, the two big changes that will fundamentally overturn entertainment as we know it, while making it substantially more fun and participatory, in just 6 short years. 

April 04, 2006 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (0) | TrackBack (0)

The Next Big Thing in Entertainment - Part I

Previously, I had written about why the biggest technological changes take almost everyone by surprise.  Not many people recognize the exponential, accelerating nature of technological change, and fewer still have the vision to foresee how two seemingly unrelated trends could converge to create massive new industries and reconstruct popular culture.

Today, we will attempt to make just such a prediction.

Computer graphics and video games have improved in realism in direct accordance with Moore's Law.  Check out the images of video game progression to absorb the magnitude of this trend.  One can appreciate this further by merely comparing Pixar's Toy Story (1995) to their latest film, Cars (2006).  But merely projecting this one trend, to predict that video games will eventually look as good as the real thing, would be unimaginative.  Instead, let's take it further and predict :

Video Gaming (which will no longer be called this) will become a form of entertainment so widely and deeply enjoyed that it will reduce the time spent on watching network television to half of what it is today, by 2012.

Impossible, you say?  How can this massive change happen in just 6 years?  First, think of it in terms of 'Virtual Reality' (VR), rather than 'games'.  Then, consider that :

1) Flat hi-def television sets that can bring out the full beauty of advanced graphics will become much cheaper and thinner, so hundreds of millions of people will have wall-mounted sets of 50 inches or greater for under $1000 by 2012.

2) The handheld controllers that adults find inconvenient will be replaced by speech and motion recognition technology.  The user experience will involve speaking to characters in the game, and sports simulations will involve playing baseball or tennis by physically moving one's hand.  Eventually, entire bodysuits and goggles will be available for a fully immersive experience. 

3) Creative talent is already migrating out of the television industry and into video games, as is evident from the rise in the story quality of games and the decline in the quality of television programs.  This trend will continue, and result in games available for every genre of film.  Network television has already been reduced to depending on a large proportion of low-budget 'reality shows' to sustain its cost-burdened business model. 

4) Adult-themed entertainment has driven the market demand for, and development of, many technologies, like the television, VCR, DVD player, and Internet.  Gaming has been a notable exception, because the graphics have not been realistic enough to attract this audience, except for a few unusual games.  However, as realism increases through points 1) and 2), this vast new market opens up, which in turn pushes development.  For the first time, there are entire conferences devoted to this application of VR technology.  The catalyst that propelled those other technologies has yet to reach gaming.

5) Older people are averse to games, as they did not have this form of entertainment when they were young.  However, people born after 1970 have grown up with games, and thus still occasionally play them as adults.  As the pre-game generation is replaced by those familiar with games, more VR tailored to older audiences will be developed.  While this demographic shift will not make a huge difference by 2012, it is irreversibly pushing the market in this direction every year. 

6) Online multiplayer role-playing games are highly addictive, and already involve people buying and selling game items for real money, to the tune of $1.1 billion per year.  Highly skilled players already earn thousands of dollars per year this way, and as more participants join through the more advanced VR experiences described above, a sizable group of people will be able to earn a full-time living through these VR worlds.  This will become a viable form of entrepreneurship, just as eBay and Google Ads support entrepreneurial ecosystems today.

There you have it, a convergence of multiple trends bringing a massive shift in how people spend their entertainment time by 2012, with television only watched for sports, documentaries, talk shows, and a few top programs. 

The progress in gaming also affects the film industry, but in a very different way.  The film industry will actually become greatly enhanced and democratized over the same period.  For this, stay tuned for Part II tomorrow. 

April 01, 2006 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (11) | TrackBack (0)

Are You Prepared to Live to 100? - Part II

Refer back to Part I here, where we discuss that despite the many stunning advances in medicine, there is still something within us that doubts that our present lives could be extended to 100 years.

The exponentially progressing advances in genomic and proteomic science will cure many genetic predispositions that an individual may have to certain diseases (medical knowledge, again, is currently doubling every 8 years).  Programmable nanobots that can keep us healthy from the inside, by detecting cancerous cells or biochemical changes very early, are also a near-certainty by the 2020s.  Furthermore, if just half of the world's 8 million millionaires were each willing to pay $500,000 to add 20 healthy, active years to their lives, the market opportunity would be (4 million X $500,000) = $2 trillion.  The technological trend and the market incentive are definitely in place for revolutions in this field. 
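
The market-sizing arithmetic above is easy to check.  A minimal Python sketch, using only the assumptions stated in this post (8 million millionaires, half willing to pay, $500,000 each) :

```python
millionaires = 8_000_000        # the post's estimate of worldwide millionaires
willing = millionaires // 2     # assume half would pay
price = 500_000                 # assumed price of 20 extra healthy years

# Total addressable market under these assumptions.
market = willing * price
print(f"${market:,}")           # prints $2,000,000,000,000
```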

But that is still not quite enough to assure that the internal mechanisms that make cells expire by a certain time, or the continuous damage done by cosmic rays perpetually going through our bodies, can be fully negated. 

Ray Kurzweil, in his essay "The Law of Accelerating Returns", seems confident that additions to human lifespan will grow exponentially.  While I agree with most of his conclusions in other areas, here I am not convinced that this growth is accelerating at the moment.  I expect the new advances to be increasingly complex, such that only the best-informed and most disciplined individuals will be able to capitalize on the technologies available to them to extend their lifespans.  This will benefit a few people, but not enough to lift the broader average by much. 

However, where I do agree with Kurzweil and other Futurists is on the concept of a Technological Singularity and a post-human existence.  Biotechnology and nanotechnology will become so advanced that humans will be able to reverse-engineer their brains and re-engineer their entire bodies down to the molecular level.  In fact, you could effectively transfer your 'software' (your mind) into upgraded hardware.  This is not as crazy as it sounds : even today, many devices are used within or near the body in order to prolong or augment human life, and many of these are fully part of The Impact of Computing, so both their sophistication and their number could rise rapidly. 

This could potentially afford immortality to the human mind, for those fortunate enough to be around in 2050 or so.  Of course, as the years progress, we will have a better idea of how realistic this possibility actually is.

So that is my conclusion.  Average human life expectancy will make moderate but unspectacular gains for the next 50 years, with only those who maintain healthy lifestyles and are deeply aware of the technologies available to them living past the age of 100.  This will be true until the Technological Singularity, where humans *may* be able to separate their minds from their bodies, and reside in different, artificially engineered bodies.  This is a vast subject which I will describe in more detail in future posts.  For some reading, go here. 

Also read about The Longevity Dividend.

March 08, 2006 in Biotechnology, Technology, The Singularity | Permalink | Comments (9) | TrackBack (0)

Are You Prepared to Live to 100? - Part I

There is a lot of speculation about whether new medical science will allow not just newborn babies to live until 100, but even people who are up to 40 years old today.  But how much of it is realistic?

At first glance, human life expectancy appears to have risen greatly from ancient times :

Neolithic Era : 22

Roman Era : 28

Medieval Europe : 33

England, 1800 : 38

USA, 1900 : 48

USA, 2005 : 78

But upon further examination, the low life expectancies of earlier times (and of poorer countries today) are weighed down by high infant mortality rates.  If we compare only people who reached adulthood, life expectancy may have risen from 45 to 80 over the last 2000 years.  That is a much less impressive rate of gain.

But if you index life expectancy against per-capita GDP, the seemingly slow progress looks different.  Life expectancy began to make rapid progress as rising wealth funded more research and better healthcare, and since Economic Growth is Accelerating, an argument can be made that if lifespans jumped from 50 to 80 in the 20th century, they might reach 100 by the 2020s.

But that still seems to be too much to expect.

We hear that if cancer and cardiovascular disease were cured, average lifespans in America would rise into the 90s.  We acknowledge that medical knowledge is doubling every 8 years or so.  We see in the news that a gene that switches off aging has been found in mice.  We even know that the market demand for such biotechnology would be so great - most people would gladly pay half of their net worth to get 20 more healthy, active years of life - that it will attract the best and brightest minds. 

Yet something within us is doubtful.....

(stay tuned tomorrow for Part II)

March 06, 2006 in Accelerating Change, Biotechnology, Technology, The Singularity | Permalink | Comments (10) | TrackBack (0)

The Impact of Computing : 78% More Each Year

Anyone who follows technology is familiar with Moore's Law and its many variations, and has come to expect the price of computing power to halve every 18 months.  But many people don't see the true long-term impact of this beyond the need to upgrade their computer every three or four years.  To not internalize this more deeply is to miss investment opportunities, grossly mispredict the future, and be utterly unprepared for massive, sweeping changes to human society.

Today, we will introduce another layer to the concept of Moore's Law-type exponential improvement.  Consider that on top of the 18-month doubling times of both computational power and storage capacity (an annual improvement rate of 59%), both of these industries have grown by an average of approximately 15% a year for the last fifty years.  Individual years have ranged between +30% and -12%, but let's say these industries have grown large enough that their growth rate slows down to an average of 12% a year for the next couple of decades.

So, we can crudely conclude that a dollar gets 59% more power each year, and 12% more dollars are absorbed by such exponentially growing technology each year.  If we combine the two growth rates to estimate the rate of technology diffusion simultaneously with exponential improvement, we get (1.59)(1.12) = 1.78. 

The Impact of Computing grows at a screaming rate of 78% a year.
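
The arithmetic is simple enough to verify in a few lines of Python.  Note that the 59% figure follows mechanically from an 18-month doubling time, while the 12% industry growth rate is this post's own assumption :

```python
# Per-dollar computing power : doubling every 18 months.
perf_factor = 2 ** (12 / 18)          # ~1.59, i.e. 59% more power per year

# Dollars absorbed by such technology : assumed 12% annual growth.
industry_factor = 1.12

# Combined annual growth in total "impact".
impact_factor = perf_factor * industry_factor
print(f"{perf_factor:.2f} x {industry_factor:.2f} = {impact_factor:.2f}")
# prints 1.59 x 1.12 = 1.78
```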

Sure, this is a very imperfect method of measuring technology diffusion, but many visible examples of this surging wave present themselves.  Consider the most popular television shows of the 1970s, such as The Brady Bunch or The Jeffersons, where the characters had virtually all the household furnishings and electrical appliances that are common today, except for anything with computational capacity.  Yet economic growth has averaged 3.5% a year since that time, nearly doubling the standard of living in the United States since 1970.  It is obvious what changed over this period to drive those economic gains. 

In the 1970s, there was virtually no household product with a semiconductor component.  Even digital calculators were not affordable to the average household until very late in the decade. 

In the 1980s, many people bought basic game consoles like the Atari 2600, had digital calculators, and purchased their first VCR, but only a fraction of the VCR's internals, maybe 20%, consisted of exponentially deflating semiconductors, so VCR prices did not drop that much per year.

In the early 1990s, many people began to have home PCs.  For the first time, a major, essential home device was pegged to the curve of 18-month halvings in cost per unit of power.

In the late 1990s, the PC was joined by the Internet connection and the DVD player, bringing the number of household devices on the Moore's Law-type curve to three. 

Today, many homes also have a wireless router, a cellular phone, an iPod, a flat-panel TV, a digital camera, and a couple more PCs.  In 2006, a typical home may have as many as 8 or 9 devices which are expected to have descendants that are twice as powerful for the same price, in just the next 12 to 24 months. 

To summarize, the number of devices in an average home that are on this curve, by decade :

1960s and earlier : 0

1970s : 0

1980s : 1-2

1990s : 3-4

2000s : 6-12

If this doesn't persuade people of the exponentially accelerating penetration of information technology, then nothing can.

One extraordinary product provides a useful example, the iPod :

First Generation iPod, released October 2001, 5 GB capacity for $399

Fifth Generation iPod, released October 2005, 60 GB capacity for $399, or 12X more capacity in four years, for the same price. 

Total iPods sold in 2002 : 381,000

Total iPods sold in 2005 : 22,497,000, or 59 times more than 2002.

12X the capacity, yet 59X the units, so (12 x 59) = 708 times the impact in just four years.  The rate of iPod sales growth will moderate, of course, but another product will simply take up the baton, and show a similar growth in impact. 
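
The iPod "impact" figure can be reproduced directly from the numbers cited above :

```python
capacity_multiple = 60 / 5              # 5 GB (2001) -> 60 GB (2005)
units_multiple = 22_497_000 / 381_000   # unit sales, 2002 -> 2005

# The post's impact metric : capacity growth times unit-sales growth.
impact = round(capacity_multiple) * round(units_multiple)
print(impact)                           # prints 708
```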

Now, we have a trend to project into the near future.  It is a safe prediction that by 2015, the average home will contain 25-30 such computationally advanced devices, including sophisticated safety and navigation systems in cars, multiple thin HDTVs greater than 60 inches diagonally, networked storage that can house over 1000 HD movies in a tiny volume, virtual-reality-ready goggles and gloves for advanced gaming, microchips and sensors embedded into several articles of clothing, and a few robots to perform simple household chores. 

Not only does Moore's Law ensure that these devices are over 100 times more advanced than their predecessors today, but there are many more of them in number.  This is the true vision of the Impact of Computing, and the shocking, accelerating pace at which our world is being reshaped. 

I will expand on this topic greatly in the near future.  In the meantime, some food for thought :

Visualizing Moore's Law is easy when viewing the history of video games.

The Law of Accelerating Returns is the most important thing a person could read.

How semiconductors are becoming a larger share of the total economy.

Economic Growth is Exponential and Accelerating, primarily due to information technology becoming all-encompassing. 

February 21, 2006 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (34)

Exponential, Accelerating Growth in Transportation Speed

In the modern world, few people truly understand that the world is progressing at an exponential and accelerating rate.  This is the most critical and fundamental aspect of making any attempt to understand and predict the future.  Without a deep appreciation for this, no predictions of the intermediate and distant future are credible.

Read Ray Kurzweil's essay on this topic for an introduction.

Among the many examples of accelerating progress, one of the easiest to historically track and grasp is the rate of advancement in transportation technology.  Consider the chart below :

[Chart : the maximum speed achieved by humans, by year]

For thousands of years, humans could move at no more than the pace of a horse.  Then, the knee of the curve occurred, with the invention of the steam engine locomotive in the early 19th century, enabling sustained speeds of 60 mph or more.  After that came the automobile, airplane, and supersonic jet.  By 1957, humans had launched an unmanned vehicle into space, achieving escape velocity of 25,000 mph.  In 1977, the Voyager 1 and 2 spacecraft were launched on an interplanetary mission, reaching peak speeds of 55,000 mph.  However, in the 29 years since, we have not launched a vehicle that has exceeded this speed. 

Given these datapoints, what trajectory of progress can we extrapolate for the future?  Will we ever reach the speed of light, and if so, under what circumstances?

Depending on how you project the trendline, the speed of light may be reached by Earth-derived life-forms anywhere between 2075 and 2500.  How would this be possible?
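
To make this concrete, here is an illustrative two-point exponential fit through the datapoints above (the steam locomotive and the Voyager probes), solved for the year the trendline crosses the speed of light.  This is a rough sketch, not a rigorous regression; choosing different datapoints shifts the answer around within the 2075-2500 window :

```python
import math

# Two datapoints cited above, in miles per hour.
t0, v0 = 1830, 60            # steam locomotive
t1, v1 = 1977, 55_000        # Voyager probes
C_MPH = 670_600_000          # speed of light, approximately, in mph

# Continuous exponential growth rate implied by the two points.
rate = math.log(v1 / v0) / (t1 - t0)

# Year at which the extrapolated curve reaches the speed of light.
year_c = t1 + math.log(C_MPH / v1) / rate
print(round(year_c))         # prints 2180
```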

Certainly, achieving the speed of light would be extremely difficult, just like a journey to the Moon might have appeared extremely difficult to the Wright brothers.  However, after the 1000-fold increase in maximum speed achieved during the 20th century, a mere repeat of the same magnitude of improvement would get us there.

But what of the various limits on the human body, Einstein's Theory of Relativity, the amount of energy needed to propel a vehicle at such speeds, or a host of other unforeseen problems that could arise as we get closer to light-speed transportation?  Well, why assume that the trip will be made by humans in their current form at all?

Many top futurists believe that the accelerating rate of change will become human-surpassing by the mid-21st century, in an event known as the Singularity.  Among other things, this predicts a merger between biology and technology, to the extent that a human's 'software' can be downloaded and backed up outside of his 'hardware'. 

Such a human mind could be stored in a tiny computer that would not require air or water, and might be smaller than a grain of sand.  This would remove many of the perceived limitations on light-speed travel, and may in fact be precisely the path we are on. 

I will explain this in much more detail in the near future.  In the meantime, read more about why this is possible.

February 07, 2006 in Accelerating Change, Space Exploration, Technology, The Singularity | Permalink | Comments (24) | TrackBack (0)

The Technological Progression of Video Games

As several streams of technological progress, such as semiconductors, storage, and Internet bandwidth, continue to grow exponentially, doubling every 12 to 24 months, one subset of this progress that offers a compelling visual narrative is the evolution of video games.

Video games evolve in graphical sophistication as a direct consequence of Moore's Law.  A doubling in the number of rendered polygons every 18 months translates to an improvement of roughly 100X after 10 years, 10,000X after 20 years, and 1,000,000X after 30 years, with resolution and the number of possible colors improving along similar curves.
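
A short loop makes those multiples concrete.  (The figures in the text are rounded : the exact 10-year factor is closer to 102X.)

```python
DOUBLING_PERIOD = 1.5   # years per doubling (18 months)

# Cumulative improvement factor after 10, 20, and 30 years.
for years in (10, 20, 30):
    factor = 2 ** (years / DOUBLING_PERIOD)
    print(f"after {years} years: {factor:,.0f}x")
# prints 102x, 10,321x, and 1,048,576x respectively
```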

Sometimes, pictures are worth thousands of words :

1976 : [Image : Pong]

1986 : [Image : Enduro Racer]

1996 : [Image : Tomb Raider]

2006 : [Image : Vision GT]

Now, extrapolating this trajectory of exponential progress, what will games bring us in 2010?  Or 2016?

I actually predict that video games will become so realistic and immersive that they will displace other forms of entertainment, such as television.  Details on this to follow.

The future will be fun...

Related : The Next Big Thing in Entertainment

January 28, 2006 in Accelerating Change, Computing, Technology, The Singularity | Permalink | Comments (25)

© The Futurist