The Futurist

"We know what we are, but we know not what we may become"

- William Shakespeare

Timing the Singularity, v2.0

Exactly 10 years ago, I wrote an article presenting my own proprietary method for estimating the timeframe of the Technological Singularity.  Since then, the article has been cited widely as one of the important contributions to the field, and as a primary source of rebuttal to those who think the event will arrive far sooner.  What was, and still is, a challenge is that the mainstream continues to scoff at the very concept, while the most famous proponent of the concept persists with a prediction that is too soon, which will inevitably court blowback when it does not come to pass.  The elapsed 10-year period represents 18-20% of the interval between the original article and the predicted date, albeit only ~3% of the total technological progress expected within that interval, on account of the accelerating rate of change.  Now that we are considerably nearer to the predicted date, perhaps we can narrow the range of estimation somewhat, and add further precision in other respects.  

To see whether my prediction needs updating, let us go through each of the four methodologies one by one; mine is the fourth and final entry.  

1) Ray Kurzweil, the most famous evangelist for this concept, has estimated the Technological Singularity for 2045, and, as far as I know, is sticking with this date.  Refer to the original article for the reasons why this appeared incorrect in 2009, and for the biases that may have led him to select that date.  As of 2019, it is increasingly obvious that 2045 is far too soon a date for a Technological Singularity (which is distinct from the 'pre-Singularity' period I will define later).  In reality, by 2045, while many aspects of technology and society will be vastly more advanced than today, several aspects will still remain relatively unchanged and underwhelming to technology enthusiasts.  Mr. Kurzweil is currently writing a new book, so we shall see if he changes the date or introduces other details around his prediction.  

2) John Smart's prediction of 2060 ± 20 years from 2003 is consistent with mine.  John is a brilliant, conscientious person and is less prone to let biases creep into his predictions than almost any other futurist.  Hence, his 2003 assessment appears to be standing the test of time.  See his 2003 publication here for details.  

3) The 1996 film Star Trek: First Contact places in 2063 a form of technological singularity, triggered by the effect that first contact with a benign, more advanced extraterrestrial civilization has on the direction of human society within the canon of the Star Trek franchise.  For whatever reason, the writers chose 2063 rather than an earlier or later date, answering what had been the biggest open question in the Star Trek timeline up to that point.  The franchise, incidentally, does have a good track record of predictions for events 20-60 years after a particular film or television episode is released.  Interestingly, there has been exactly zero evidence of extraterrestrial intelligence in the last 10 years despite an 11x increase in the number of confirmed exoplanets.  This happens to be consistent with my separate prediction on that topic and its relation to the Technological Singularity.  

4) My own methodology, which also gave rise to the entire 'ATOM' set of ideas, is due for an evaluation and update.  Refer back to the concept of the 'prediction wall', and how in the 1860s the horizon limit of visible trends was a century away, whereas in 2009 it was at perhaps 2040, or 31 years away.  This 'wall' is the strongest evidence of accelerating change, and in 2019, it appears that the prediction wall has not moved 10 years further out over the elapsed interval.  It is still no further than 2045, or just 26 years away.  So in the last 10 years, the prediction wall has shrunk from 31 years to 26 years, or approximately 16%.  By 2045 itself, the prediction wall might be just 10 years, and by 2050, perhaps just 5 years.  These are estimations, but the prediction wall's distance has never risen or stayed the same.  As the definition of a Technological Singularity is the point at which the prediction wall is almost zero, this provides another metric through which to arrive at a range of dates.  The period during which the prediction wall is under 10 years, particularly when Artificial Intelligence has an increasing role in prediction, might be termed the 'pre-Singularity', which many people will mistake for the actual Technological Singularity.  
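One rough way to turn the shrinking wall into a date range is to track the 'horizon year' (the current year plus the wall distance) and ask when it converges on the calendar itself.  The sketch below is my own back-of-the-envelope simplification, not the method of the original article: it fits the two observed points linearly, and treats the 2045 and 2050 wall figures above as forward-looking estimates rather than data.

```python
# Back-of-the-envelope sketch: fit the "horizon year" (current year + prediction
# wall distance) to the two observed points and see when the wall closes to zero.
observed = {2009: 31, 2019: 26}                    # year -> wall distance in years
horizon = {y: y + w for y, w in observed.items()}  # 2009 -> 2040, 2019 -> 2045

slope = (horizon[2019] - horizon[2009]) / (2019 - 2009)       # horizon advances 0.5 yr/yr
t_star = (horizon[2019] - slope * 2019) / (1 - slope)
print(f"A linear extrapolation closes the wall around {t_star:.0f}")   # ~2071, an upper bound

# The accelerating estimates above (a ~10-year wall in 2045, ~5 years in 2050)
# both imply a horizon year near 2055, i.e. convergence well before that bound.
for year, wall in [(2045, 10), (2050, 5)]:
    print(year, "->", year + wall)
```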

Through my old article, The Impact of Computing, which was the precursor of the entire ATOM set of ideas, we can estimate the progress made since the original publication.  In 2009, I estimated that exponentially advancing (and deflation-causing) technologies were about 1.5% of World GDP, allowing for a range between 1% and 2%.  Ten years later, I estimate that number to be somewhere between 2% and 3.5%.  If we enter the newly updated range of 2.0-3.5% into the same table, and keep the estimate of the net growth of this diffusion relative to the growth of the entire economy (Nominal GDP) at the same 6-8% range (the revenue growth of the technology sector above NGDP), we get an updated table of when 50% of the World economy consists of technologies advancing at Moore's Law-type rates.  
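For those who want to see the compounding arithmetic behind that table, here is a minimal sketch.  It assumes a constant relative growth rate applied to the ranges above, with 2019 as the baseline year; the exact table in The Impact of Computing may differ slightly.

```python
import math

START_YEAR = 2019
TARGET = 0.50                     # 50% of World GDP, the threshold discussed above

for s0 in (0.020, 0.035):         # current share of exponential technologies in World GDP
    for g in (0.06, 0.08):        # annual growth of that share relative to Nominal GDP
        year_at_half = START_YEAR + math.log(TARGET / s0) / math.log(1 + g)
        share_2045 = s0 * (1 + g) ** (2045 - START_YEAR)
        print(f"start {s0:.1%}, growth {g:.0%} -> 50% around {year_at_half:.0f}, "
              f"~{share_2045:.0%} of GDP by 2045")

# The four combinations span roughly 2054-2074, with the median in the early 2060s,
# and the middle combinations put the 2045 share in the mid-teens, consistent with
# the 14-17% figure cited below.
```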

We once again see these parameters deliver a series of years, with the median values arriving at around the same dates as the aforementioned estimates.  Taking all of these points in combination, we can predict the timing of the Singularity.  I hereby predict that the Technological Singularity will occur in :

2062 ± 8 years

This is a much tighter range than we had estimated in the original article 10 years ago, even as the median value is almost exactly the same.  We have effectively narrowed the previous 25-year window to just 16 years.  It is also apparent that by Mr. Kurzweil's 2045 date, only 14-17% of World GDP will be infused with exponential technologies, which is nothing close to a true Technological Singularity.     

So now we know the 'when' of the Singularity.  We just don't know what happens immediately after it, nor can anyone know that with any certainty. 

 

Related :

Timing the Singularity, v1.0

The Impact of Computing

Are You Acceleration Aware?

Pre-Singularity Abundance Milestones

SETI and the Singularity

 

Related ATOM Chapters :

2 : The Exponential Trendline of Economic Growth

3 : Technological Disruption is Pervasive and Deepening

4 : The Overlooked Economics of Technology

 

 

August 20, 2019 in About, Accelerating Change, Artificial Intelligence, Computing, Core Articles, Economics, Technology, The ATOM, The Singularity

The Federal Reserve Continues to Get it Wrong

 
The most recent employment report revealed 279,000 new jobs (including revisions to prior months), and an unemployment rate of just 3.6%, which is a 50-year low.  Lest anyone think that this month was an anomaly, the last 12 months have registered about 2.6M new jobs.  
 
Over the last two years, the Federal Reserve, still using economic paradigms from decades ago, assumed that once unemployment fell below 5.0%, inflation would emerge.  With this expectation, they proceeded with two economy-damaging measures : raising the Fed Funds rate, and Quantitative Tightening (i.e. the reversal of Quantitative Easing, to the tune of $50B/month). 
 
Even though the Fed raised the Fed Funds rate all the way from the appropriate 0% to the far-too-high 2.5%, the yield on the 10-year note is still just 2.1%, resulting in an inverted yield curve.  Similarly, inflation continues to remain muted, even after $23 Trillion and counting of worldwide QE, as I have often pointed out.
 
Yet, the Federal Reserve STILL wanted to raise interest rates, in direct violation of their own supposed principles regarding both the yield curve and existing inflation.  They were exposed as looking at only one indicator : the unemployment rate.  Their actions reveal that they think that a low unemployment rate presages inflation, and no other indicator matters.  
 
Now, for the big question : Why do they think any unemployment rate under 5.0% leads to inflation, and why are they getting it so wrong now? 
 
The answer is that back in the 1950-80 period, too many people having jobs led to excess demand for materially heavy items (cars, houses, etc.).  In those days, there was far too little deflationary technology to affect traditional statistics.  
 
Today, people still buy these things, but a certain portion of their consumption (say, 2%) consists of software.  Software consumes vastly less physical matter to deploy and operate, and never 'runs out of supply', particularly now in the download/streaming era.  If Netflix had 10 million new people sign up tomorrow, the cost of servicing them would be very little, and the time spent signing up all of the new customers would also be negligible.  This is not hard to understand at all, except for those who 'know so much that isn't so'.  The Federal Reserve has over 600 PhDs, but if they all just cling to the same outdated models and look at just ONE indicator, having 600 PhDs is no better than having one PhD (and, in this case, worse than having zero PhDs).  
 
But alas, the Federal Reserve (and, by extension, most PhD macroeconomists) just cannot adjust to this 21st-century economic reality, even though they cannot explain the lack of inflation, and remain incurious about why that is.  They are afflicted with a level of 'egghead' groupthink that exceeds what exists in any other major field today.  When this happens, we are often on the brink of a major historical turning point.  Analogous situations in the past were when the majority of mechanical engineers in the 1880s insisted that heavier-than-air flying machines large enough to carry even a single human were not possible, and when pre-Copernican astronomers believed the Sun revolved around the Earth.  
 
The percentage of the total economy that is converging into high-tech (and hence high-deflation) technologies is rising, and is now up to 2.5-3.0% of total world GDP.  This disconnect can only widen.
 
President Trump, seeing what is obvious here, has not just pressured the Federal Reserve to stop raising rates (which they were poised to keep doing in late 2018, a path that would have created the very inverted yield curve they supposedly consider troubling), but has recently said that the Fed should lower the Fed Funds rate by 1%, effectively saying that their last four rate hikes were ill-considered.  He rightfully flipped the script on them.  
 
Now, normally I would be the first to say that a head of state should not pressure a central bank in any way, but in this particular case, the President is correct, and the ivory tower is wrong.  The correct outcome through the wrong channel is not ideal, but the alternative is a needless recession that damages the financial well-being of hundreds of millions of people and destroys millions of jobs.  He is right to push back on this, and anyone who cares about jobs must hope he can halt and reverse the Fed's damage-causing trajectory.  
 
In this vein, I urge everyone who is on board with the ATOM concepts, and who wishes to avoid an entirely needless recession, to send polite emails to the Federal Reserve Board of Governors, with a request that they look at the ATOM publication and correct their outdated grasp of monetary effects from liquidity programs, and the necessity of modernizing the field of macroeconomics for the technological age.  The website via which to contact them is here :
 
https://www.federalreserve.gov/aboutthefed/contact-us-topics.htm
 
We are at a crucial juncture in the history of macroeconomics, the economics of technology, and the entire concept of jobs and employment.  It is only a matter of time before a Presidential candidate stands before a cheering audience and points out that trillions of QE were done, yet none of the people in the audience got a single dime.  Imagine such a candidate simply firing up the audience with queries of "Did you get a QE check?  Did you get a QE check?  ¿Usted recibió un cheque de QE?"  That could be a political meme that gains very rapid momentum.  
 
This is how a version of UBI will eventually happen.  We, of course, call it something better : DUES (Direct Universal Exponential Stipend).  
 
At some point, when least expected, such a leader will emerge (probably not in the US) to transition us to this era of new economic realities.  It will certainly be someone from the tech industry (the greatest concentration of people who 'get it' regarding what I have just elaborated above).  Who will that leader be?  A major juncture of history is on the horizon.  All roads lead to the ATOM.  
 
Related ATOM Chapters :
 
4. The Overlooked Economics of Technology
 
6. Current Government Policy Will Soon be Ineffective
 
10. Implementation of the ATOM Age for Nations
 
 
 

May 18, 2019 in Accelerating Change, Economics, Technology, The ATOM

ATOM Award of the Month, April 2019

For this month's ATOM AotM, we return to the familiar, but in the process, we want to recognize an ATOM trend that has not gotten as much credit as it deserves.  

We all know what Moore's Law is, and what has been enabled by it.  But what has always been amazing to me is how little recognition a similar law has received.  Storage capacity has risen at a rate equal to (or slightly higher than) Moore's Law.  It is not a technological byproduct of Moore's Law, as it has always been worked on by different people in different companies with different technical talents.  

If storage capacity were not improving at the same rate as Moore's Law, most computer-type products would not have continued to produce decades of viable new iterations.  From PCs to Smartphones to Video Game consoles, all have a storage requirement that has to match up to the size and number of files downloaded and processed.  Correspondingly, data transfer speeds have also had to rise (USB 1.0 to 2.0 to 3.0, Ethernet to Fast Ethernet to Gigabit Ethernet, etc.).  A 2019-era PC could not have a 2006-era hard drive and be very useful.  

Like Moore's Law, the exponential doubling has spanned a sequence of technologies that all sustain the underlying megatrend, from internal spinning hard drives, to flash storage, to storage in the cloud.  Greater density has been matched by shrinking weight per unit of storage and lower power consumption.  This also means that as computing decouples from Moore's Law and moves into different (and probably faster) forms of exponential growth, storage will almost certainly follow suit.  DNA-based storage is a prospective technology that has many attributes comparable to expected future computing technologies such as Quantum Computing.  

Unlike Moore's Law, storage has not always advanced at a steady rate.  There are times when it advanced much faster than Moore's Law, and times when it advanced much slower (such as in recent years).  The 40-year average, however, does appear to match Moore's Law's doubling rate rather closely : what one dollar purchases today is the same as what one billion dollars could purchase then, and the hardware itself would have been the size of a house.  
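As a quick check of that comparison, treating both the 40-year span and the billion-fold figure as round numbers :

```python
import math

improvement = 1e9      # approximate dollar-for-dollar gain over the period
span_years = 40        # approximate length of the trend

doublings = math.log2(improvement)                    # ~30 doublings
months_per_doubling = span_years * 12 / doublings     # ~16 months
print(f"{doublings:.0f} doublings, one every ~{months_per_doubling:.0f} months")
# Roughly 16 months per doubling, in the same neighborhood as the 18-24 month
# cadence usually associated with Moore's Law.
```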

Also unlike Moore's Law, there is no universally accepted name associated with this trend.  Mark Kryder's name is sometimes attached to it, but he did not put forth the observation early enough for it to qualify as a prediction by any measure (Kryder formally described it in 2005, whereas Moore made his prediction in 1965), his name is not mentioned in Ray Kurzweil's writings or other publications, and since he was not the founder of a major storage company, he is not analogous to Gordon Moore.  

As rising storage efficiency is crucial to the productization of every other form of computing product (including Smartphones), it deserves recognition for its contribution to the technological age, despite often being overlooked in favor of Moore's Law.  

 

Related ATOM Chapters :

3.  Technological Disruption is Pervasive and Deepening

 

April 01, 2019 in ATOM AotM, Computing, Technology, The ATOM

ATOM Award of the Month, December 2018

It is time for another ATOM AotM.  This month, we return to the energy sector, for it is where the ATOM disruptions of the greatest size and scale are currently underway.

We visited batteries briefly in August 2017's ATOM AotM.  There are two exponentials here : battery cost per unit of energy, and battery energy density per unit of volume.  Hence, despite 40 years of apparent stagnation, interspersed with angst about how electric vehicles failed to arrive in response to the 1973 and 1981 oil spikes, the exponential trend quietly progressed towards the inflection point at which we have now arrived.  True to the exponential nature of technology, more progress happened in 2011-20 than in 1970-2010, and we now have viable electric vehicles that are selling rapidly and are set to displace gasoline consumption in a matter of just a few short years.  Electric vehicles are now 2% of all new vehicle sales in the US, and 3% worldwide, with a high annual growth rate.  Due to the rapid cost improvements in EVs expected in the next three years, a substantial tipping point is perhaps no more than three years away.  

This rapid rise is in the face of two major headwinds : the low price of oil (due to another ATOM disruption), and the high price of EVs (the top seller in units is a $50,000 vehicle).  It is now a certainty that once a high-quality EV becomes available at the $30,000 price point, the speed of displacement will be startling.

A tracker that records monthly sales at both the US and worldwide levels is here.  The speed of advancement merits monthly visits to this tracker at this point.  Note that over time, the US is actually where the total displacement of ICEs by EVs will be the slowest, since other countries are more suited to EVs than the US (they have higher gasoline prices, and often 220V electrical outlets that lead to faster charging).  In fact, a suddenly popular home upgrade in the US is, ironically, the installation of 220V outlets in the garage, specifically for EV charging.  

As an example of a true ATOM disruption, the transformation will be multi-layered.  From oil import/export balances, to gasoline refinement and distribution networks, to the reduction of near-slave labor from the Subcontinent forced to find work in Gulf petrostates, to mechanics dependent on ICE vehicle malfunctions, to surplus used ICEs unable to sell and thus forced to slash prices, to power management and pricing by electric utilities, to prime land occupied by gas stations, a variety of status quos will end. 

Don't underestimate how soon the domino effect will take place.  Once EVs are sufficiently mainstreamed in the luxury car market (which is set to happen in 2019), then the entire range of urban commercial/government vehicles will swiftly transition to electric.  The US has 800,000 police cars, 210,000 USPS vans, and a variety of other government vehicles.  On top of that, private enterprises include 110,000 UPS vehicles, 60,000 FedEx vehicles, and perhaps over 300,000 pizza delivery vehicles.  As these transition, observe how many gasoline stations shutter.  

The much greater lifespan of EVs relative to ICEs will be one of the four factors that lead to the majority of automobile use migrating to an on-demand, autonomous vehicle model by 2032, as discussed before.  

 

Related :

The End of Petrotyranny

Why I Want Oil to be $120/Barrel

 

Related ATOM Chapters :

3.  Technological Disruption is Pervasive and Deepening.

 

December 10, 2018 in ATOM AotM, Energy, Technology, The ATOM

ATOM Award of the Month, October 2018

For this month's ATOM AotM, we visit the medical industry, and examine a technology that seems quite intuitive, but on account of patents and other obstacles, has seen rapid improvement greatly delayed until now. 

Surgery seems like a field to which robotics is ideally suited, since it combines complexity and precision with a great deal of repetition of well-established steps.  The value of smaller incisions, fewer instances of bones being sawed, etc. is indisputable, from qualitative measures such as healing pain, to tangible economic metrics such as the duration of the post-surgery hospital stay.  

Intuitive Surgical released its Da Vinci robot to the market in 2001, but on account of its patents, it sustained a monopoly and did not improve the product much over the subsequent 17 years.  Under ATOM principles, this is a highly objectionable practice, even if technically a company can still earn a high profit margin without any product redesigns.  As a result, only 4000 such robots are currently in use, mostly in the US.  Intuitive has achieved a market capitalization of over $60 Billion, so it has succeeded as a business, but this may soon change.  Now that Intuitive's patents are finally close to expiry, a number of competitors are ready to introduce ATOM-consistent exponential improvements into the competitive landscape.  

The Economist has a detailed article about the new entrants into this market, and the innovations they have created.  In addition to mere cost reduction due to smaller electronics, one obvious extension of the robotic surgery model is for each robot to be connected to the cloud, where the record of each surgery trains an Artificial Intelligence to ensure ever-improving automation for several steps of the surgery.  With AI, greater usage drives improvement, and when thousands of surgeries around the world are all recorded, every machine gets better simultaneously.  As costs fall and unit volume increases, the volume of data generated rises.  As the accumulation of data rises, the valuation of companies capturing this data also rises, as we have seen in most other areas of technology.  

This level of data combined with greater circuitry within the robot itself can also increase the speed of surgery.  When more of it is automated, and the surgeon is doing less of the direct manipulation, then what is to prevent surgeries from being done at twice or thrice the speed?  This enables a much shorter duration of anesthesia, and hence fewer complications from it.  

 

 

October 11, 2018 in Artificial Intelligence, ATOM AotM, Technology, The ATOM

How a Small Country Can Quickly Capture Gains from the Technological Economy

In the ATOM publication, we examine how the only way to address the range of seemingly unrelated economic challenges in a holistic manner is to monetize technological deflation.  For reasons described therein, the countries best suited to do this are small countries with high technological density.  Furthermore, we examine the importance of the first-mover advantage : when a country monetizes the technological deflation in the rest of the world for the benefit of its domestic economy, the first $1 Trillion is practically free money.  

In Chapter 10, I outline a systematic program for how the US could theoretically transition to this modernization of the economy.  But I then identify the four countries that are much more suitable than the US.  These are two Western democracies (Canada and Switzerland) and two Pacific-Rim city states (Singapore and Hong Kong).  But it is possible to create custom solutions for more countries as well.  To determine how to do that, let us go back to a seminal event in the emergence of these ideas.  

What Japan Discovered for the Benefit of Humanity : Few people have any awareness of how important an event took place in April of 2013.  Up to that time, the US was the only country that had embarked on a program to engineer negative interest rates through monetary creation (rather than the punitive and reductive practice of deducting from bank accounts).  Japan decided that after two decades of stagnation and extremely low interest rates, something more drastic and decisive had to be done.  The early success of the US Quantitative Easing (QE) program indicated that a more powerful version of it could be effective against the even worse stagnation that Japan's economy was mired in.  

In April of 2013, the Bank of Japan (BoJ) decided to go big.  They embarked on a program of monetary easing in the amount of 30% of their annual GDP.  This was a huge upgrade over the US QE programs for two reasons : firstly, it was much larger as a proportion of the host nation's GDP, and secondly, it had no end date, enabling long-term decisions.  Since the formal economics profession in the West is burdened by a wide range of outdated assumptions about money printing, inflation, and technology, Western economists yet again predicted high inflation.  And yet again, they were wrong.  There was no inflation at the start of the program, nor in the years since.  Japan had correctly called the bluff of the inflation specter.  The third-largest economy in the world could print 30% of its GDP per year for five years, and still experience no inflation.  When I observed this, I drew the connection between technological deflation (worldwide) and the vanishing QE (also worldwide).  Most of Japan's QE was flowing outside of Japan (and indeed into the US, which had long since stopped QE, and has forestalled a major market correction only by drawing from overseas QE, mainly from Japan).  Hence, the combined QE of the world was merely offsetting the technological deflation worldwide.  Japan's big gambit proved this, and in doing so, showed us how much QE can be done before world inflation even hits 3% (i.e. much more than formal economists thought).  

What is a Small, Prosperous Country to do?  While it is always better to be a prosperous country than an impoverished one, almost every small country (the size of Canada or smaller) faces a major vulnerability in the modern economy.  Its economy invariably depends on one or two major industries, and is hence vulnerable to a technological disruption that arises from somewhere else in the world.  The need to diversify against such external risks is obvious, but most countries are not on the best path to achieve this goal.  

These days, everyone I meet from the government of some foreign country seems to have the same goal for their country - to create an ecosystem of local technology startups.  This goal is not just extremely difficult to attain; it is also misguided.  Technology is becoming increasingly governed by winner-take-all dynamics and capital concentration, which means that even in the US, rival cities are unable to compete with Silicon Valley (which itself has concentrated into a smaller portion of the San Francisco Bay Area than was the case in the late 1990s).  Small countries with technology sectors, such as Israel and Singapore, started decades ago and have a number of unique factors in their favor, including a major Silicon Valley diaspora.  Hence, a country that tries to create a tech startup cluster will almost certainly create a situation where young people receive training at local expense, only to leave for Silicon Valley.  Such initiatives only end up feeding Silicon Valley at the expense of the originating country.  Even if a few tech startups can be forcibly created in the country, it is extremely unlikely that they will achieve any great size within even 15 years.

Take, for example, a country like New Zealand.  It has many favorable characteristics, but certain disadvantages as well in an increasingly globalized economy.  It relies on agricultural and dairy exports, as well as the film industry and tourism.  It is too remote to easily plug into the well-traveled routes of tech executives (fewer than 30M people live within 3000 miles of New Zealand) or major supply chains.  It is too small to be a significant domestic market for tech (particularly when a functional tech ecosystem has to comprise startups in multiple areas of tech in order to achieve rudimentary diversification).  New Zealand's success in getting Hollywood films shot there cannot similarly translate into getting some Silicon Valley business, as an individual film project has a short duration and a distinct ending, with key personnel on site for just a brief period.  Technology, by contrast, is inherently endless, and requires interdependency between many firms that have to be co-located.  Furthermore, no society is capable of placing more than 1-2% of its population into high-tech professions and still having them be competitive at the international level (most tech innovation is done by people in the top 1% of cognitive ability).  For this reason, a tech startup ecosystem does not create broad prosperity (it is no secret that even within Silicon Valley, only a fraction of people are earning almost all of the new wealth; Silicon Valley has among the most extreme inequality found anywhere).  

Now, from the research contained in the ATOM publication, we know that there is a far easier solution that can deliver benefits in a much shorter time.  New Zealand's fiscal budget reveals that as of 2018, it collects about $80 Billion in taxes and spends the same $80B per year.  The world was recently generating $200B/month in QE and is still doing an insufficient $120B/month.  The entire annual budget of New Zealand is well below one month of the world's QE - the QE that is needed just to halt technological deflation.  It would be very easy for New Zealand to waive all income taxes, and merely print the same $80B/year from their central bank.  A brief transition period can be inserted just to soften the temporary downgrades that international rating agencies deliver.  But the waiver of income tax will boost New Zealand's economy with immediate effect.  It can even enter and dominate the lucrative tax-haven industry until other countries adopt the same strategy.

Now, it is difficult for government officials, legislators, and statesmen to take such a drastic step, particularly when the entire Economics profession is still mired in outdated thinking about how QE will someday, somehow cause inflation (despite being wrong about this for 9 years and over the course of $20 Trillion in cumulative world QE).  For this reason, a second, less drastic option is also available to New Zealand.  That involves creating what I describe as a Sovereign Venture Fund, where the New Zealand Central Bank creates a segregated account that is completely partitioned off from the domestic economy, and prints money to place into that account (say, $100 Billion).  It is crucial that this money not circulate domestically at first, as that would cause inflation.  The purpose of this $100B Sovereign Venture Fund is to invest in startups worldwide that might be disrupting New Zealand's domestic industries.  This model is extremely effective and flexible, as :

  • i) The money was not taken from New Zealand taxpayers, but rather generated for free by the New Zealand Central Bank.  Hence, it can invest in speculative startups across the world with far more boldness.
  • ii) The diversification achieved is immediate, and can always be adjusted with equal immediacy as needed.  
  • iii) The Fund is leveraging the rest of the world's technological deflation for New Zealand's domestic benefit.  
  • iv) Tech startups worldwide become extremely vocal advocates for the fund, and even the country itself.  It boosts New Zealand's branding (generating even more tourism).  
  • v) Fund gains can be used to offset government spending by replacement of income tax, or to fund training to enable citizens to modernize their skills.  It can also provide a greater social safety net to cushion industries buffeted by disruption, but without taxing those who are still working.  This is how to repatriate the money without inflation.  
  • vi) Even a larger fund of $800B can earn $80B/year from a 10% return, which exceeds the total taxes collected by the country.  

The Sovereign Venture Fund is an extremely effective, speedy, and versatile method of economic diversification.  It can be customized for any prosperous country (for example, an oil exporter should simply invest in electric vehicle, battery, and photovoltaic technologies to hedge their economic profile).  As a huge amount of worldwide QE has to be done just to offset technological deflation, there is no contribution to inflation even worldwide, let alone domestically.  As the winds of technological change shift, the Fund can respond almost immediately (unlike a multi-decade process of creating a tech startup ecosystem only to worry if the sectors represented are about to be disrupted).  

Since there is a very high and exponentially rising ceiling of how much world QE can be done before world inflation reaches even 3% (about $400B/month in 2018, as per my calculations), there is an immense first-mover advantage that is possible here.  The first $1 Trillion is effectively 'free money' for the country that decides to be Spartacus.  

New Zealand, in particular, has even more factors that make it a great candidate.  The NZ$ is currently too strong, which is crimping New Zealand's exports.  This sort of program may create a bit of currency weakening just from the initial reaction.  For this additional reason, it is a low-risk, high-return strategy for generating a robust and indeed indestructible safety net for New Zealand's citizens, hedging them from the winds of global technological disruption.  

Related ATOM Chapters :

Chapter 4 : The Overlooked Economics of Technology

Chapter 10 : Implementation of the ATOM Age for Nations

 

 

 

April 03, 2018 in Economics, Technology, The ATOM

ATOM Award of the Month, February 2018

For this month's ATOM AotM, we will address the sector that any thought leader in technological disruption recognizes as the primary obstruction to real progress.  When we see that the sectors most overdue for disruption, such as medicine, education, and construction, all happen to be sectors with high government involvement, the logical progression leads us to question why government itself cannot deliver basic services at costs comparable to those of private sector equivalents. 

As just one example out of hundreds, in California and other high-tax states, the annual license plate registration can cost $400/year.  What does the taxpayer truly receive?  The ability to trace license plates to driver's licenses and insurance.  Why should such a simple system cost so much?  It seems that it should only cost $2/year under 2018 technological levels.  By contrast, note how much value you receive from a $96/year Netflix subscription.  By all accounts, many basic government services could easily implement cost reductions of 98-99%.  

While the subject of government inefficiency vs. the ATOM is perhaps the primary topic of this website and the ATOM publication, one small example from across the world demonstrates what a modernized government looks like.  The tiny country of Estonia contains just 1.3 million people.  A desire to catch up after decades of being part of the Soviet Union perhaps spurred it into a unique drive to modernize and digitize government services to a degree that Americans would scarcely believe could exist.  Here are some articles, by publication, about Estonia's successful digitization, where you can read about the specific details :

The New Yorker

The Atlantic

Fortune

Estonia has also taken early steps towards certain ATOM realities.  While it does have high consumption taxes, income tax is a flat 21%, thereby saving the immense costs of complexity and processing (which cost the US over $700 Billion/yr).  If it were also to figure out the ATOM principles around the monetization of technological deflation, it could reduce income taxes to zero.  

Now, for the unfortunate part.  When a country manages to produce a product or service that the rest of the world wants and cannot produce at the same quality and price itself, that country can export the product to the outside world.  From Taiwanese chipsets to South Korean smartphones and television sets to Italian cheeses, the extension of sales into exports is straightforward.  Yet in the governance sector, despite it being a third of the world economy, there is no market in which Estonia can sell its services to hasten the digitization of other governments.  Whether at the Federal, State, City, or County level, the United States has hundreds of governments that could simply hire Estonian consultants and implementation staff to rapidly install new services.  This could be lucrative enough to make Estonia a very wealthy country, and then attract competition from other countries (such as nearby Finland, which is attempting to follow Estonia's path).  Yet, unlike a private sector product or service, governance simply does not value efficiency or productivity to this extent.  The State of California alone could save billions of dollars per year, and either spend the taxes on other things, or (preferably) pass the savings on to the taxpayers.

Before long, the ATOM will force even the largest nation states to improve the productivity of their government services.  But that process will be messy, and government officials may take a scorched-earth approach to defending their own rice bowls.  Let us hope that Estonia inspires at least a few other countries into voluntary modernization.   

 

Related ATOM Chapters :

10. Implementation of the ATOM Age for Nations.

 

 

February 28, 2018 in ATOM AotM, Technology, The ATOM

ATOM Award of the Month, January 2018

With the new year, we have a new ATOM AotM.  This is an award for a trend that ought to be easy to recognize for anyone at all familiar with Moore's Law-type concepts, yet is greatly overlooked despite quite literally being in front of people's faces for hours a day.  

The most crude and uninformed arguments against accelerating technological progress are of either a "Word processing is no better than in 1993, so Moore's Law no longer matters" or a "People can't eat computers, so the progress in their efficiency is useless" nature.  However, the improvements in semiconductor and similar technologies endlessly find their way into previously low-tech products, which is the most fundamental ATOM principle.  

The concept of television has altered cultures across the world more than almost any other technology.  The range of secondary and tertiary economies created around it is vast.  The 1960 set pictured here, at $795, cost 26% of US annual per capita GDP at the time.  The equivalent price today would be about $15,000.  Content was received over the air, and was often subject to poor reception.  The weight and volume of the device relative to the area of the screen was high, and the floorspace consumed was substantial.  There were three network channels in the US (while most other countries had no broadcasts at all).  There was no remote control.  
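The arithmetic behind that comparison, with the per-capita GDP figures treated as rough approximations rather than exact statistics :

```python
# Affordability comparison for the 1960 television set described above.
# The per-capita GDP values are approximations used only for illustration.
tv_price_1960 = 795
gdp_per_capita_1960 = 3_000      # approx. US per capita GDP in 1960
gdp_per_capita_now = 58_000      # approx. US per capita GDP at the time of writing

share = tv_price_1960 / gdp_per_capita_1960        # ~26% of a year's per capita GDP
equivalent_now = share * gdp_per_capita_now        # ~$15,000
print(f"{share:.1%} of per capita GDP then, i.e. roughly ${equivalent_now:,.0f} today")
```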

There were slow, incremental improvements in resolution and screen-size-to-unit-weight ratios from the 1960s until around 2003, when one of the first thin television sets became available at the retail level.  It featured a 42" screen, was only 4 inches thick, and cost $8000.  Such a wall-mountable display, despite the high price, was a substantial improvement over the cathode ray tube sets of the time, most of which were too large and heavy to be moved by one person, and consumed a substantial amount of floor space.

But in true ATOM exemplification, this minimally-improving technology suddenly got pulled into rapid, exponential improvement (part of how deflationary technology increased from 0.5% of World GDP in 1999 to 1% in 2008 to 2% in 2017).  Once the flat screen TV was on the market, plasma and LCD displays eventually gave way to LED displays, which are a form of semiconductor and improve at Moore's Law rates. 

Today, even 60-inch sets, a size considered extravagant in 2005, are very inexpensive.  Like any other old electronic device, slightly out-of-date sets are available on Craigslist in abundance (contributing to the Upgrade Paradox).  A functional used set that cost $8000 in 2003 can hardly be sold at all in 2018; the owner is lucky if someone is willing to come and take it for free.    

Once ATOM-speed improvements assimilate a technology, the improvements never stop; sets of the near future may be thin enough to be flexible, along with resolutions of 4K, 8K, and beyond.  Sets larger than 240" (20 feet) are similarly declining in price and are visible in increasing numbers in commercial use (i.e. Times Square everywhere).  This is hence one of the most visible examples of ATOM disruption, and of how cities of today have altered their appearance relative to the recent past.  

This is a large ATOM disruption, as there are still 225 Million new sets sold each year, amounting to $105 Billion/year in sales.  

 

Related :

The Impact of Computing

 

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening.

 

January 21, 2018 in Accelerating Change, ATOM AotM, Computing, Technology, The ATOM

ATOM-Oriented Class at Stanford

I have been selected to teach a class at Stanford Continuing Studies, titled 'The New Economics of Technological Disruption'.  For Bay Area residents, it would be great to see you there.  There are no assignments or exams for those who are not seeking a letter grade, and by Stanford standards, the price ($525 for an 8-week class) is quite a bargain.  

44 students have already signed up.  See the course description, dates, and more.  

 

 

January 07, 2018 in Accelerating Change, Economics, Technology, The ATOM

Recent TV Appearances

Inch by inch, the ATOM is reaching more people.  

I was invited back to Reference Point a second time to discuss the ATOM :


I was also back on FutureTalk a second time to discuss blockchain and cryptocurrencies :

Remember that older media content for the ATOM is here.

December 24, 2017 in Technology, The ATOM

ATOM Award of the Month, October 2017

For this month, the ATOM AotM goes to an area we have not visited yet.  Enterprise software and associated hardware technologies may appear boring at first, but there is currently a disruption in this area that is generating huge productivity gains.  

Amazon Web Services (AWS) is an ever-growing list of services that replaces computing, storage, and networking expenditures at client companies.  At present, over 90 different services are available.  Here is a slideshow of the various companies and sectors being disrupted by AWS.  

Cloud computing itself is relatively new, but this revolution by Amazon has taken direct slices out of the existing businesses of Microsoft, IBM, and Oracle, which were slow to deploy cloud-based solutions since they wanted to extend the lives of their existing product lines.  Their anti-technology behavior deserved to be punished by the ATOM, and Amazon obliged.  AWS is set to register $14 Billion in revenue for 2017, most of which has replaced a greater sum of revenue at competing companies.  

The biggest value is the lower cost of entry for smaller companies, thanks to the on-demand flexibility enabled by AWS.  Now that IT security and compliance are far more cost-effective through AWS, the barrier to entry for smaller firms is lowered.  This is particularly useful for clients in far-flung locations, enabling a decentralization that facilitates greater technological progress.  Upgrades across computing, storage, software, networking, and security are disseminated seamlessly, and since far less hardware is used, the upgrade process is far more materially efficient.  This removes a variety of smaller bottlenecks to technological progress, mitigating the corporate equivalent of the Upgrade Paradox.         

Another great benefit is elasticity, where a company does not have to worry about estimating hardware capacity needs in the future, which can often lead to overbuying of rapidly deflating technologies, or underbuying, which can cause customer dissatisfaction due to slow speeds.  All of this can now be scaled dynamically through AWS.  

For the productivity gains inherent to the scale and dynamism of AWS, it receives the October 2017 ATOM AotM.  

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening.

 

October 31, 2017 in ATOM AotM, Technology, The ATOM

ATOM Award of the Month, September 2017

For September 2017, the ATOM AotM takes a very visual turn.  With some aspects of the ATOM, seeing is believing.    

Before photography, the only form of image capture was through sketches and paintings.  This was time-consuming, and well under 1% of people were prosperous enough to have even a single hand-painted portrait of themselves.  For most people, after they died, their families had only memories through which to imagine their faces.  If portraits were this scarce, other images were even scarcer.  When image capture was this scarce, people certainly had no chance of seeing places, things, or creatures from far away.  It was impossible to know much about the broader world.    

The very first photograph was taken as far back as 1826, and black & white was the dominant form of the medium for over 135 years.  That it took so long for b&w to transition to color may seem quite surprising, but the virtually non-existent ATOM during this period is consistent with such a glacial rate of progress.  The high cost of cameras meant that the number of photographs taken in the first 100 years of photography (1826-1926) was still extremely small.  Eventually, the progression to color film seemed to be a 'completion' of the technological progression in the minds of most people.  What more could happen after that?  

But the ATOM was just getting started, and it caught up with photography around the turn of the century with relatively little fanfare, even though it was notable that film-based photography and the hassles associated with it were removed from the consumer experience.  The cost of film was suddenly zero, as were the transit time and cost to and from the development center.  Now, everyone could have thousands of photos, and send them over email endlessly.  Yet, standalone cameras still cost $200 as of 2003, and were too large to be carried around everywhere at all times.  

As the ATOM progressed, digital cameras got smaller and cheaper, even as resolution continued to rise.  It was discovered that the human eye does in fact adapt to higher resolution, and finds previously acceptable lower resolutions unacceptable after adapting to higher ones.  Technology hence forces higher visual acuity and the associated growth of the brain's visual cortex.  

With the rise of the cellular phone, the ATOM enabled more and more formerly discrete devices to be assimilated into the phone, and the camera was one of the earliest and most obvious candidates.  The diffusion of this was very rapid, as we can see from the image that contrasts the 2005 vs. 2013 Papal inaugurations in Vatican City.  Before long, the cost of an integrated camera trended towards zero, to the extent that there is no mobile device that does not have one.  As a result, 2 billion people have digital cameras with them at all times, and stand ready to photograph just about anything they think is important.  Suddenly, there are countless cameras at every scene.  

But lest you think the ubiquity of digital cameras is the end of the story, you are making the same mistake as those who thought color photography on film in 1968 was the end of the road.  Remember that the ATOM is never truly done, even after the cost of a technology approaches zero.  Digital imaging itself is just the preview, for now we have it generating an ever-expanding pile of an even more valuable raw material : data.  

Images contain a large volume of data, particularly the data that associates things with each other (the eyes are to be above the nose, for example).  Data is one of the two fuels of Artificial Intelligence (the other being inexpensive parallel processing).  Despite over a decade of digital images being available on the Internet, only now are there enough of them for AI to draw extensive conclusions from them, and for Google's image search to be a major force in the refinement of Google's Search AI.  Most people don't even remember when Google added image search to its capabilities, but now it is hard to imagine life without it.  

Today, we have immediate access to image search that answers questions in the blink of an eye, and fosters even greater curiosity.  In a matter of seconds, you can look up images for mandrill teeth, the rings of Saturn, a transit of Venus over the Sun, the coast of Capri, or the jaws of Carcharocles Megalodon.  More searches lead to more precise recommendations, and more images continue to be added.  In the past, the accessibility of this information was so limited that the invaluable tangents of curiosity just never formed.  Hence, the creation of new knowledge speeds up.  The curious can more easily pull ahead of the incurious.  

Digital imaging is one of the primary transformations that built the Internet age, and is a core pillar of the impending ascent of AI.  For this reason, it receives the September 2017 ATOM AotM.    

 

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening

 

September 30, 2017 in Accelerating Change, Artificial Intelligence, ATOM AotM, Technology, The ATOM

ATOM Award of the Month, August 2017

For the August 2017 ATOM AotM, we will bend one of the rules.  The rule is that a disruption already has to have begun, and be presently underway.  

But this time, a conversation in last month's comments brought forth a vision of a quad-layer disruption that is already in its early stages and will fully manifest in no more than 15 years' time.  When fully underway, this disruption will further tighten the screws on government bodies that are far too sclerotic to adapt to the speed of the ATOM.  

To start, we will list out the progression of each of the four disruptions separately.

1) Batteries are improving quickly, even though electric vehicles are not yet competitive in terms of cost and charging speed (partly because the true cost of imported oil is not directly visible to consumers).  At the same time, an electric car has far fewer moving parts, and fewer liquids to deal with.  By many estimates, an electric car can last 300,000 miles before significant deterioration occurs, vs. 150,000 for an internal combustion engine car.  Under the current ownership model, however, a car is driven only about 12,000 miles/year and is parked 90% of the time or more.  The second half of an electric vehicle's lifetime (miles 150,001-300,000) would only begin in year 13 of ownership and extend until year 25, which is not practical.  If only there were a way to avoid having the car remain idle 90% of the time, occupying parking spaces.  It may take until 2032 for electric cars to compress the cost delta to the extent of being superior to ICE cars in total ownership costs for the early years, which then leads to the dividend available in the later years of the electric car's life.  

2) Autonomous vehicles are a very overhyped technology.  Stanford University demonstrated an early prototype in 2007.  Yet even a decade later, a fully autonomous car that operates without any human involvement, let alone the benefit of having a network of such cars, seems scarcely any closer.

Eventually, by about 2032, cars will be fully autonomous, widely adopted, and able to communicate with each other, greatly increasing driving efficiency through higher speeds and far less distance between cars than human drivers can manage.  Uber-like services will cost 60-80% less than they do now, since the earnings of the human driver are no longer an element of cost, and Uber itself charges just 20-30% of the fare.  It will be cheaper for almost everyone to take the on-demand service all the time than to own a car outright or even take the bus.  If such a car is driven 20 hours a day, it can in fact accrue 300,000 miles in just 5 years of use.  This is effectively the only way that electric cars can be driven all the way up to the 300,000-mile limit.  
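A quick sanity check on the mileage arithmetic in points 1) and 2), using the figures as stated above :

```python
# Mileage arithmetic for the two usage models described above.
EV_LIFE_MILES = 300_000
ICE_LIFE_MILES = 150_000
OWNED_MILES_PER_YEAR = 12_000          # typical annual mileage under private ownership

# Private ownership: the second half of the EV's life only starts after ~12.5 years.
print("Years to reach 150,000 miles:", ICE_LIFE_MILES / OWNED_MILES_PER_YEAR)   # 12.5
print("Years to exhaust 300,000 miles:", EV_LIFE_MILES / OWNED_MILES_PER_YEAR)  # 25.0

# On-demand autonomous fleet: ~20 hours/day in service, 300,000 miles in 5 years.
years, hours_per_day = 5, 20
miles_per_year = EV_LIFE_MILES / years                        # 60,000 miles/year
avg_speed = EV_LIFE_MILES / (years * 365 * hours_per_day)     # ~8 mph average
print(f"Fleet use: {miles_per_year:,.0f} miles/year at ~{avg_speed:.0f} mph average, "
      f"including pickups and idle time between fares")
```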

3) The displacement of brick-and-mortar retail by e-commerce has far greater implications for the US than for any other country, given the excessive amount of land devoted to retail stores and their parking lots.  The most grotesque example of this is in Silicon Valley itself (and to a lesser extent, Los Angeles), where vast retail strip mall parking lots sit largely empty, yet are within walking distance of tall, narrow townhouses that cost $1.5M despite occupying footprints of barely 600 sqft each.  

As the closure of more retail stores progresses, and on-demand car usage reduces the need for so many parking spaces, these vast tracts of land can be diverted to other purposes.  In major California metros, the economically and morally sound strategy would be to convert the land into multi-story buildings, preferably residential.  But extreme regulatory hurdles and resistance towards the construction of any new housing supply will leave this land as dead capital for as long as the obstructionists can manage.  

But in the vast open suburbs of the American interior, land is about to go from plentiful to extremely plentiful.  If you think yards in the suburbs of interior cities are large, wait until most of their nearby strip malls are available for redevelopment, and the only two choices are either residential land or office buildings (there are more than enough parks and golf courses in those locations already).  Areas where population is already flat or declining will have little choice but to build even more, and hope that ultra-low real estate costs can attract businesses (this will be no problem at all if the ATOM-DUES program is implemented by then).  

This disruption is not nearly as much a factor in any country other than the US and, to a lesser extent, Australia, as other countries did not misallocate so much land to retail (and the associated parking lots) in the first place.   

4) This fourth disruption is not as essential to this future as the first three, but it is highly desirable, simply due to how overdue the disruption is.  It is quite shocking that the least productive industry in the private sector relative to 50 years ago is not education, not medicine, but construction.  US construction productivity has fallen over the last 50 years.  Not merely failed to rise, mind you, but outright declined in absolute terms.  

But remember, under ATOM principles, the more overdue a disruption is, and the more artificial the obstructions thwarting it, the more sudden it is when it eventually happens.  China is not held back by the factors that have led to the abysmally low productivity in US construction, and when there is so much retail land to repurpose, pressure to revive that dead capital will just become too great, even if that means Chinese construction companies have to come in to provide what the US counterparts cannot.  This pressure could be the catalyst of the long overdue construction productivity catchup.  This topic warrants a lengthy article of its own, but that is for another day.  

Hence, the first three factors, and possibly the fourth, combine by 2032 to generate a disruption that will be so comprehensive in the US that the inability of government to change zoning laws and permitting at anything close to the speed of market demand will be greatly exposed.

The first disruption, batteries, alone could be an ATOM AotM, but this time, the cumulative disruption from these multiple factors, even if it will take the next 15 years to accomplish, gets the award.

Related :

The End of Petrotyranny (and Victory)

Why I Want(ed) Oil to Hit $120 per Barrel

A Future Timeline for Automobiles

A Future Timeline for Energy

Why $70/Barrel Oil is (was) Good for America

 

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening

11. Implementation of the ATOM Age for Individuals

 

August 27, 2017 in ATOM AotM, Economics, Energy, Technology, The ATOM | Permalink | Comments (72)


ATOM Award of the Month, July 2017

The ATOM AotM for July 2017 reminds us of the true core principles of the ATOM.  Whenever anything becomes too expensive relative to the value provided, particularly when the overpricing is sustained by artificial government intervention in markets, a technological solution invariably replaces the expensive incumbent.  

Taxi medallions, particularly in New York City, are just the crudest form of city government corruption.  Drunk with its own greed, the city ratcheted up the price of taxi medallions from $200,000 in 2003 to $1M in 2013, an appreciation far faster than even the S&P 500, let alone inflation.  Note how there was no decline at all during the 2008-09 recession.  This predatory extraction from consumers, much like high oil prices artificially engineered by OPEC, created a market window that might otherwise not have existed until several years later.  This induced the ATOM to address the imbalance sooner than it otherwise might have, and gritty entrepreneurs swiftly founded companies like Uber and Lyft, which provided dramatically better value for money.  As a result, the price of taxi medallions in NYC fell by 80% from the inflated peak.  The ATOM was at a sufficiently advanced level for the technological response to be as rapid as it was (unlike with, say, expensive oil in the 1973-81 period, when there was almost no ATOM of macroeconomic significance).  
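As a rough sketch of the growth-rate arithmetic, the medallion appreciation works out to roughly 17-18% per year, more than double a typical long-run equity return (the 7.5% index figure below is an assumed ballpark for comparison, not a number from this article):

```python
# Rough sketch: compound annual growth rate (CAGR) of NYC medallion prices
# vs. an assumed broad-market return.  The 7.5% figure is an assumption
# for illustration, not a number from this article.
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

medallion = cagr(200_000, 1_000_000, 10)      # $200K (2003) -> $1M (2013)
sp500_assumed = 0.075                         # assumed long-run index CAGR

print(f"Medallion CAGR 2003-2013: {medallion:.1%}")   # ~17.5% per year
print(f"Assumed S&P 500 CAGR:     {sp500_assumed:.1%}")
```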

Remember that the reduction in cost for a given ride, and the demolition of a seemingly intractable government graft obstacle, is just the first of several ATOM effects.  The second is the security of each driver and passenger being identified before the ride.  The third is the volume of data that these millions of rides generate (data being one of the two core fuels of Artificial Intelligence).  The fourth is the ability to dynamically adjust to demand spikes (the surge pricing that the economically illiterate malign).  The fifth is the possibility of entirely new service capabilities.  Recall this excerpt from Chapter 11 of the ATOM :  

Automobile commuters with good jobs but lengthy commutes have joined Uber-type platforms to take a rider along with them on the commute they have to undertake anyway. The driver earns an extra $200-$400/week (against which an appropriate portion of car and smartphone costs can be applied as deductions) with no incremental input time or cost.  Meanwhile, other commuters enjoy having one less car on the road for each such dynamically generated carpooling pair.  The key is that a dead commute is now monetized even by corporate-class people, increasing passengers per car and reducing traffic congestion, while replacing dedicated taxicabs.  For the macroeconomy, it also creates new VM where none existed before.

The creation of an entirely new sub-economy, with new velocity of money (VM), is where real wealth creation is at its purest.  This effect of ride-sharing platforms is still in its infancy.  When autonomous vehicles replace human drivers, the loss of driver income is matched (indeed exceeded in post-tax terms) by savings to passengers.  

It does not matter which company ultimately wins (Uber is having some PR problems lately), but rather that the disruption is already irreversible and woven into the fabric of the ATOM and broader society.  Maybe Uber and Lyft will just be to transportation services what Data General and Commodore were to computing.  The point is, this is a superb example of how the ATOM works, and how the transformation is often multi-layered.  

 

July 19, 2017 in ATOM AotM, Technology, The ATOM | Permalink | Comments (27)


Recent TV Appearances for The ATOM

I have recently appeared on a couple of television programs.  The first was Reference Point with Dave Kocharhook, a two-part Q&A about The ATOM.


The next one was FutureTalk TV with Martin Wasserman, which included a 10-minute Q&A about The ATOM.

Inch-by-inch, we will get there.  The world does not have to settle for our current substandard status quo.

As always, all media coverage is available here.  

 

 

June 05, 2017 in Accelerating Change, Artificial Intelligence, Economics, Technology, The ATOM, The Singularity | Permalink | Comments (24)


ATOM Award of the Month, April 2017

It is time for the ATOM AotM for April 2017, for which we return to an article I wrote way back in 2009.  That article is titled 'The Publishing Disruption', and at the time of writing, we were on the brink of a transformation in content publication as seismic as the invention of the Gutenberg printing press.  Since that time, the anticipated sequence of events has unfolded as expected.  

To excerpt from that article, consider how many centuries of background evolution occurred to get us to where we were in 2007 :

What a unique thing a book is.  Made from a tree, it has a hundred or more flexible pages that contain written text, enabling the book to contain a large sum of information in a very small volume.  Before paper, clay tablets, sheepskin parchment, and papyrus were all used to store information with far less efficiency.  Paper itself was once so rare and valuable that the Emperor of China had guards stationed around his paper possessions. 

Before the invention of the printing press, books were written by hand, and few outside of monasteries knew how to read.  There were only a few thousand books in all of Europe in the 14th century.  Charlemagne himself took great effort to learn how to read, but never managed to learn how to write, which still put him ahead of most kings of the time, who were generally illiterate. 

But with the invention of the printing press by Johannes Gutenberg in the mid-15th century, it became possible to make multiple copies of the same book, and before long, the number of books in Europe increased from thousands to millions. 

But then, note how incredibly low-tech and low-productivity the traditional publishing industry still was well into the 21st century : 

Fast forward to the early 21st century, and books are still printed by the millions.  Longtime readers of The Futurist know that I initially had written a book (2001-02), and sought to have it published the old-fashioned way.  However, the publishing industry, and literary agents, were astonishingly low-tech.  They did not use email, and required queries to be submitted via regular mail, with a self-addressed, stamped envelope included.  So I had to pay postage in both directions, and wait several days for a round trip to hear their response.  And this was just the literary agents.  The actual publishing house, if it decided to accept your book, would still take 12 months to produce and distribute the book even after the manuscript was complete.  Even then, royalties would be 10-15% of the retail price.  This prospect did not seem compelling to me, and I chose to parse my book into this blog you see before you.

The refusal by the publishing industry to use email and other productivity-enhancing technologies as recently as 2003 kept their wages low.  Editors always moaned that they worked 60 hours a week just to make $50,000 a year, the same as they made in 1970.  My answer to them is that they have no basis to expect wage increases without increasing their productivity through technology. 

An industry this far behind was just begging to be disrupted.  As we have seen from the ATOM, the more overdue a particular disruption is, the more dramatic and swift the disruption when it eventually occurs, as the distance to travel just to revert to the trendline of that particular innovation is great.  Proceeding further in the original article :

The Amazon Kindle launched in late 2007 at the high price of $400.  Many people feel that the appeal of holding a physical book in our hands cannot be replaced by a display screen, and cavalierly dismiss e-readers.  The tune changes upon learning that the price of a book on an e-reader is just a third of what the paper form at a brick-and-mortar bookstore, with sales tax, would cost. 

As of 2017, an entry-level Kindle 8 costs just $80 (with 3 GB of storage), yet is far more advanced than the $400 Kindle of 2007 (with just 250 MB of storage).  Cumulative Kindle sales are estimated to be over 100 million units now.  

But the Kindle hardware is not the real disruption, as it is a new purchase imposed on people who needed no such device to read paper books.  The real ATOM disruption is in books themselves.  Now, an author can publish directly on Kindle, and at a $10 sales price, immediately begins to receive a 70% royalty.  Contrast that with the 10-15% royalty on a $20 sales price in traditional book publishing, and that only after a 12-month waiting period once the manuscript is complete.  While bound books may still make sense for famous authors, the new market created by the Kindle has enabled the publication of many books that only expect to sell 10,000 copies.  There is no physical material involved, so the production and distribution cost of any such publication has fallen by orders of magnitude, to nearly zero.  A hefty cost is now essentially no cost, precisely as the ATOM predicts.   
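The per-copy arithmetic, using only the royalty figures above, shows why the new channel matters even for a modest seller (the 10,000-copy volume is the one mentioned in the paragraph; everything else follows directly from those percentages):

```python
# Per-copy author earnings using the royalty figures in the text:
# 70% of a $10 e-book vs. 10-15% of a $20 print book.
ebook_per_copy = 10 * 0.70                    # $7.00 per copy
print_low, print_high = 20 * 0.10, 20 * 0.15  # $2.00 - $3.00 per copy

copies = 10_000                               # the modest sales volume mentioned above
print(f"E-book royalties on {copies:,} copies: ${ebook_per_copy * copies:,.0f}")
print(f"Print royalties on {copies:,} copies:  ${print_low * copies:,.0f} - ${print_high * copies:,.0f}")
# Roughly $70,000 vs. $20,000-$30,000, before the 12-month publishing delay.
```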

2017 is the year in which e-book sales surpassed print and audio book sales, as per the chart.  Since the previous article, brick-and-mortar bookstores have seen a torrent of closures.  Borders has completely shut down all of its 511 bookstores in the US.  Barnes & Noble still exists, partly due to capturing the residual Borders revenue, but a growing share of B&N's in-store revenue now comes from the coffee shop, magazines, and certain specialty book sales.   

The unshackling of the bottom 99% of authors and aspiring authors from the extreme inefficiency of the traditional publishing industry has unleashed more content than was ever possible before, and is a market upgrade just as significant as that of the Gutenberg press in the 15th century.  It is also a perfect demonstration of the accelerating rate of change, for while it took centuries for the diffusion of printed books to manifest, the e-book transformation has taken mere years.  For this reason, the Amazon Kindle and e-book ecosystem are the winner of April 2017's ATOM Award of the Month.  I need more candidate submissions for future ATOM AotM awards.  

Related ATOM Chapters :

3 : Technological Disruption is Pervasive and Deepening

4 : The Overlooked Economics of Technology

 

April 30, 2017 in ATOM AotM, Technology, The ATOM | Permalink | Comments (40)


The Upgrade Paradox

There is an emerging paradox within the flow of technological diffusion.  The paradox is that the rapid progress of technology has begun to constrain its own ability to progress further.  

What exactly does this mean?  As we see from Chapter 3 of the ATOM, all technological products currently amount to about 2% of GDP.  The speed of diffusion is ever faster (see chart), and the average household is taking on an ever-widening range of rapidly advancing products and services.    

Refer to the section from that chapter, about the number of technologically deflating nodes in the average US household by decade (easily verified by viewing any TV program from that decade), and a poll for readers to declare their own quantity of nodes.  To revisit the same thing here :

Include : Actively used PCs, LED TVs and monitors, smartphones, tablets, game consoles, VR headsets, digital picture frames, LED light bulbs, home networking devices, laser printers, webcams, DVRs, Kindles, robotic toys, and every external storage device.  Count each car as 1 node, even though modern cars may have $4000 of electronics in them.

Exclude : Old tube TVs, film cameras, individual software programs and video games, films on storage discs, any miscellaneous item valued at less than $5, or your washer/dryer/oven/clock radio just for having a digital display, as the product is not improving dramatically each year.

 
 
The results that this poll would likely have yielded by decade, for the US, are :

1970s and earlier : 0

1980s : 1-2

1990s : 2-4

2000s : 5-10

2010s : 12-30

2020s : 50-100

2030s : Hundreds?

Herein lies the problem for the average household.  The cost of upgrading PCs, smartphones, networking equipment, TVs, storage, and in some cases the entire car, has become substantial.  It can often run over $2000/year, and unsurprisingly, upgrades have been slowing.  
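A rough sketch of how that annual figure can arise: take an assumed set of node counts, prices, and replacement cycles for a 2010s-era household (all of the numbers below are my illustrative assumptions, not figures from the article) and amortize each node over its cycle.

```python
# Illustrative only: assumed node counts, prices, and replacement cycles
# for a 2010s-era household.  None of these figures come from the article.
nodes = {
    # name: (count, unit_cost_usd, replacement_cycle_years)
    "smartphones":    (3, 700, 3),
    "PCs/laptops":    (2, 1000, 5),
    "TVs/monitors":   (3, 500, 7),
    "tablets":        (2, 400, 4),
    "router/modem":   (2, 150, 5),
    "game console":   (1, 400, 6),
    "misc gadgets":   (10, 100, 4),
}

annual_cost = sum(count * cost / cycle for count, cost, cycle in nodes.values())
print(f"Approximate upgrade spend: ${annual_cost:,.0f} per year")
# ~$1,900/year under these assumptions, the same order as the figure above.
```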

The technology industry is hence a victim of its own success.  By releasing products that cause so much deflation, and hence low nominal GDP growth and sluggish job growth, the technology industry has been constricting its own demand base.  Amidst all the job losses from technological automation, the tech industry's own hiring is constrained if fewer people can keep buying its products.  If the bottom 70-80% of US household income brackets can no longer keep up with technological upgrades, their ability to keep up with new economic opportunities will suffer as well.  

This is why monetization of technological progress into a dividend is crucial, which is where the ATOM Direct Universal Exponential Stipend (DUES) fits in.  It is so much more than a mere 'basic income', since it is directly indexed to the exact speed of technological progress.  As of April 2017, the estimated DUES amount in the US is $500/month (up from $400/month back in February 2016 when the ATOM was first published).  A good portion of this cushion enables faster technology upgrades and more new adoption.  

 

April 16, 2017 in Accelerating Change, Technology, The ATOM | Permalink | Comments (21)


ATOM Award of the Month, March 2017

It is that time of the month again.  For our third ever ATOM AotM, we return to an article I wrote over 10 years ago about the lighting revolution.  At that time, when the disruption was still in the future, I highlighted how the humble status of the light fixture causes the associated disruption to go widely unnoticed.  That continues to be true even today, despite the important product transition that most people have already undertaken.  

So the ATOM AotM award for March 2017 goes to the LED lightbulb.  Something that most people do not even notice is a major engine of the ATOM, as it has introduced rapid price/performance improvements into what used to be a stagnant product.

Charge of the Light Brigade :  Remember that the average household has about 25 light bulbs.  From the chart, we can see that light output per unit of energy and the cost per watt of LED lighting are both improving rapidly, leading to a double-exponential effect.  Lighting is hence now fully in the path of the ATOM and is seeing progress at a rate entirely beyond what the predecessor technology could have experienced, and is indeed one of the fastest technology shifts ever (see the second chart).  Bulbs are now purchased in packs of 4-12 units, rather than the single-unit purchases of the recent past.  Worldwide electricity savings are estimated to exceed $100 Billion per year in the near future.  
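To make the double-exponential point concrete: if efficacy (lumens per watt) and the cost per watt of LED capacity each improve exponentially, then lumens delivered per dollar compounds as the product of the two trends.  The improvement rates below are assumptions for illustration, not values read off the charts.

```python
# Illustrative: if luminous efficacy (lumens per watt) and the cost per
# watt of LED capacity both improve exponentially, lumens-per-dollar
# compounds as the product of the two trends.  Both rates are assumptions.
efficacy_growth = 0.15      # assumed +15%/yr in lumens per watt
cost_decline    = 0.20      # assumed -20%/yr in dollars per watt

years = 10
lumens_per_dollar_gain = ((1 + efficacy_growth) / (1 - cost_decline)) ** years
print(f"Lumens per dollar after {years} years: ~{lumens_per_dollar_gain:.0f}x")   # ~38x
```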

The domino effects of this are immense.  Follow the sequence of steps below :

  • LED bulbs are reducing the electricity consumed by lighting.
  • This reduction in demand more than accommodates the proliferation of electric cars.  The first 100 million electric cars worldwide (a level we are still extremely far from) will merely offset the loss of electricity demand for lighting (a rough sketch of this offset arithmetic follows this list).  
  • The spread of electric cars with no net rise in electricity consumption nonetheless reduces oil consumption and hence oil imports.  The US already has a trade surplus with OPEC, for the first time in half a century, and this force is strengthening further.  Even if the price per barrel of oil had not fallen through fracking, the number of imported barrels still would have plunged.  
  • So even though most lighting is not fueled by oil, it created a puncture point through which a second-degree blow to oil demand arose.  
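Here is a rough back-of-the-envelope sketch of that offset claim.  Every figure below is an assumption chosen for illustration (vehicle efficiency, mileage, bulb counts, and per-bulb savings are not from the article); the point is only that the two quantities land in the same order of magnitude.

```python
# Every figure below is an assumption for illustration; none come from the article.
ev_count       = 100_000_000
miles_per_year = 12_000
kwh_per_mile   = 0.30
ev_demand_twh  = ev_count * miles_per_year * kwh_per_mile / 1e9    # ~360 TWh/yr

bulbs_switched = 10_000_000_000   # assumed bulbs converted to LED worldwide
watts_saved    = 40               # assumed saving per bulb (e.g. 50W -> 10W)
hours_per_day  = 3
led_savings_twh = bulbs_switched * watts_saved * hours_per_day * 365 / 1e12

print(f"Added EV demand:      ~{ev_demand_twh:.0f} TWh/yr")
print(f"LED lighting savings: ~{led_savings_twh:.0f} TWh/yr")   # similar order of magnitude
```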

That is truly amazing, making LED lighting not just a component of the ATOM but one of the largest disruptions currently underway.  

That concludes this month's ATOM AotM.  I need more reader submissions to ensure we have a good award each month.  

Related :

3. Technological Disruption is Pervasive and Deepening.

The Imminent Revolution in Lighting, and Why it is More Important Than You Think

 

March 26, 2017 in ATOM AotM, Energy, Technology, The ATOM | Permalink | Comments (21)


ATOM Award of the Month, February 2017

After the inaugural award in January, a new month brings a new ATOM AotM.  This time, we go to an entirely different sector than the one we examined last time.  The award for this month goes to the collaboration between the Georgia Institute of Technology, Udacity, and AT&T to provide a fully accredited Master of Science in Computer Science degree, for the very low price of $6700 on average. 

The disruption in education is a topic I have written about at length.  In essence, most education is just a transmission of commoditized information that, like every other information technology, should be declining in cost.  However, the corrupt education industry has managed to burrow deep into the emotions of its customers, to such an extent that a rising price for a product of stagnant (often declining) quality is not even questioned.  For this reason, education is in a bubble that is already in the process of deflating.  

What the MSCS at GATech accomplishes is four-fold :

  • Lowering the cost of the degree by almost an order of magnitude compared to the same degree at similarly-ranked schools
  • Making the degree available without relocation to where the institution is physically located
  • Scaling the degree to an eventual intake of 10,000 students, vs. just 300 that can attend a traditional in-residence program at GATech
  • Establishing best practices for other departments at GATech, and other institutions, to implement in order to create a broader array of MOOC degree programs

After a slow start, enrollment is now reported to be over 3300 students, representing a significant fraction of the students presently studying MS-level computer science at equal or higher ranked schools.  The only reason enrollment has not risen all the way up to the full 10,000 is insufficient resourcefulness in shopping around and applying ATOM principles to greatly increase one's living standards.  Aside from the very top schools such as MIT and Stanford, there is perhaps no greater value for money than the GATech MSCS, which will become apparent as the slower adopters drift towards the program, particularly from overseas.  
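The cost and scale ratios are worth spelling out.  The $60,000 on-campus figure below is an assumed ballpark for a comparable in-residence MSCS, not a number from this article; the $6,700 price and the 300 vs. 10,000 intake figures are the ones cited above.

```python
# Illustrative comparison; the $60K on-campus figure is an assumed ballpark
# for a comparable in-residence MSCS, not a number from the article.
omscs_cost      = 6_700
oncampus_cost   = 60_000
print(f"Cost ratio:  ~{oncampus_cost / omscs_cost:.0f}x cheaper")          # ~9x

oncampus_intake = 300
online_target   = 10_000
print(f"Scale ratio: ~{online_target / oncampus_intake:.0f}x the in-residence intake")   # ~33x
```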

The sheer size of enrollment will eventually make GATech a dominant alumni community within computer science, forcing other institutions to catch up.  When this competition lowers costs even further, we will see one of the most highly paid and future-proof professions become accessible at little or no cost.  Given the immense cost of attending medical or law school, many borderline students will pursue computer science ahead of professions with large student debt burdens, creating a self-reinforcing cycle of ever-more computer science and ATOM propagation.  The fact that one can enroll in the program from overseas will attract many students from countries that do not even have schools of GATech's caliber (i.e. most countries), generating local talent despite remote education.  

Crucially, this is strong evidence of how the ATOM always finds new ways to expand itself, since the field most essential to feeding the ATOM, computer science, is the one that found a way to greatly increase the number of people destined to work in it, by attacking both cost thresholds and enrollment volumes.  This is not a coincidence, because the ATOM always finds a way around anything inhibiting its growth, in this case, access to computer science training.  Subsequent to this, the ATOM can increase the productivity of education even in less ATOM-crucial fields such as medicine, law, business, and K-12, since the greatly expanded computer science profession will provide the entrepreneurs and expertise to make this happen.  This is how the ATOM captures an ever-growing share of the economy into rapidly-deflating technological fundamentals.   

As always, the ATOM AotM succeeds through reader suggestions, so feel free to suggest candidates.  Criteria include the size and scope of the disruption, how anti-technology the disrupted incumbent was, and an obvious improvement in the quality of a large number of lives through this disruption.  

Related :

The Education Disruption : 2015

11. Implementation of the ATOM Age for Individuals 

 

February 26, 2017 in Accelerating Change, ATOM AotM, Computing, Technology | Permalink | Comments (8)


ATOM Award of the Month, January 2017

With the new year, we are starting a new article series here at The Futurist.  The theme will be a recognition of exceptional innovation.  Candidates can be any industry, corporation, or individual that has created an innovation exemplifying the very best of technological disruption.  The more ATOM principles exhibited in an innovation (rising living standards, deflation acting in proportion to prior inflation in the incumbent industry, rapid continuous technological improvement, etc.), the greater the chance of qualification.

The inaugural winner of the ATOM Award of the Month is the US hydraulic fracturing industry.  While 'fracking' garnered the most news in 2011-13, the rapid technological improvements have continued.  Natural gas continues to hover around just $3, making the US one of the most competitive countries in industries where natural gas is a large input.  Oil breakeven prices continue to fall due to ever-improving efficiencies, and from the chart, we can see how many of the largest fields have seen breakevens fall from $80 to under $40 in just the brief 2013-16 period.  This is of profound importance, because now even $50 is a profitable price for US shale oil.  There is no indication that this trend of lower breakeven prices has stopped.  Keep in mind that the massive shale formations in California are not even being accessed yet due to radical obstruction, but a breakeven of $30 or lower ensures that the pressure to extract this profit from the Monterey shale continues to rise.  Beyond that, Canada has not yet begun fracking of its own, and when it does, it will certainly have at least as much additional oil as the US found.  

This increase, which amounts to just an extra 3M barrels/day of US supply, was nonetheless enough to capsize this highly inelastic market and crash world oil prices from $100+ to about $50.  Given the improving breakevens, and the possibility of new production, this will continue to pressure oil prices for the foreseeable future.  This has led to the US turning the tables on OPEC and reversing a large trade deficit into what is now a surplus.   If you told any of those 'peak oil' Malthusians that the US would soon have a trade surplus with OPEC, they would have branded you as a lunatic.  Note how that ill-informed Maoist-Malthusian cult utterly vanished.  Furthermore, this plunge in oil prices has strengthened the economies of other countries that import most of their oil, from Japan to India.  
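The reason a roughly 3% addition to supply can halve the price is the very low short-run price elasticity of oil demand.  A stylized constant-elasticity sketch (the elasticity value and quantities below are my assumptions, not figures from the article) shows the mechanism:

```python
# Stylized constant-elasticity demand: P2/P1 = (Q2/Q1) ** (1 / elasticity).
# The elasticity value is an assumption; quantities are approximate.
elasticity = -0.06          # assumed short-run price elasticity of oil demand
q1, q2 = 95.0, 98.0         # world supply in Mbbl/day, before and after ~3M bbl/day of shale
p1 = 100.0                  # starting price, $/barrel

p2 = p1 * (q2 / q1) ** (1 / elasticity)
print(f"Implied price after the supply increase: ${p2:.0f}/barrel")   # ~$60 under these assumptions
```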

Under ATOM principles, technology always finds a way to lower the cost of something that has become artificially expensive and is hence obstructing the advancement of other technologies.  Oil was a premier example of this, as almost all technological innovation is done in countries that have to import large portions of their oil, while almost none is done by oil exporters.  Excess wealth accumulation by oil exporters was an anti-technology impediment, and demanded the attention of a good portion of the ATOM.  Remember that the worldwide ATOM is ever-rising in size, and comprises the sum total of all technological products in production at a given time (currently, about 2% of world GDP).  Hence, all technological disruptions are interconnected, and when the ATOM is freed up from the completion of a certain disruption, that amount of disruptive capacity becomes available to tackle something new.  Given the size of this disruption to oil prices and production geography, it occupied a large portion of the ATOM for a few years, which means a lot of ATOM capacity is now free to act elsewhere.

This disruption was also one of the most famous predictions made here at The Futurist.  In 2011, I predicted that high oil prices were effectively a form of burning a candle at both ends, and that such prices were jolting at least six compensating technologies into overdrive.  I provided an equation predicting when oil would topple, and it toppled well in accordance with that prediction (even sooner than the equation estimated).  

This concludes our very first ATOM AotM to kick off the new year.  I need candidate submissions from readers in order to get a good pool to select from.  Criteria include the size and scope of the disruption, how anti-technology the disrupted incumbent was, and an obvious improvement in the quality of a large number of lives through this disruption.  

 

January 31, 2017 in Accelerating Change, ATOM AotM, Energy, Technology, The ATOM | Permalink | Comments (36)


Google Talk on the ATOM

Kartik Gada gave a Google Talk about the ATOM :  

 

December 26, 2016 in Accelerating Change, Artificial Intelligence, Economics, Technology, The ATOM, The Singularity | Permalink | Comments (25)


Artificial Intelligence and 3D Printing Market Growth

I came across some recent charts about the growth of these two unrelated sectors, one disrupting manufacturing, the other disrupting software of all types.  On one hand, each chart commits the common error of portraying smooth parabolic growth, with no range of outcomes in the event of a recession (which will surely happen well within the 8-year timelines portrayed, most likely as soon as 2017).  On the other hand, these charts provide reason to be excited about the speed of progress seen in these two highly disruptive technologies, which are core pillars of the ATOM.  

This sort of growth rate across two quite unrelated sectors, while present in many prior disruptions, is often not noticed by most people, including those working in these particular fields.   Remember, until recently, it took decades or even centuries to have disruptions of this scale, but now we see the same magnitude of transformation happen in mere years, and in many pockets of the economy.  This supports the case that all technological disruptions are interconnected and the aggregate size of all disruptions can be calculated, which is a core tenet of the ATOM.   
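As a small illustration of the point about smooth projections, the same growth rate with a single recession-year dip lands in a noticeably different place by the end of the forecast window.  All figures below are invented for illustration; they are not taken from either chart.

```python
# Invented figures: the same CAGR with and without a single recession-year
# dip lands in noticeably different places by the end of the forecast.
start_size = 5.0          # hypothetical market size today, $B
cagr       = 0.30         # hypothetical 30%/yr growth
years      = 8

smooth = start_size * (1 + cagr) ** years

with_recession = start_size
for year in range(1, years + 1):
    growth = -0.10 if year == 3 else cagr     # assume one -10% year mid-forecast
    with_recession *= 1 + growth

print(f"Smooth projection:       ${smooth:.1f}B")          # ~$40.8B
print(f"With one recession year: ${with_recession:.1f}B")  # ~$28.2B
```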

Related :

3.  Technological Disruption is Pervasive and Deepening 

 

November 21, 2016 in Accelerating Change, Artificial Intelligence, Technology, The ATOM | Permalink | Comments (3)


Why Do Job Searches Take Longer Than They Did 30 Years Ago?

I have recently come into contact with a few professionals in transition, many from the now-shrinking big semiconductor companies.  In speaking to them, one thing that stood out is how it takes them 9-12 months or more to secure a new position.  

Why is this the case, in an age of accelerating technological progress, as per the ATOM?  This is an instance of where culture has prevented the adoption of a solution that is technologically feasible.  

Where Cultural Inertia Obstructs Technology : Before the Internet age, if you wanted to research a subject, you had to go to the library, spend hours there, check out some books, and go back home.  Overall, this consumed half a day, and could only be conducted during the library's hours of operation.  If the books did not have all the information you needed, you had to repeat this process.  Even this was available only in the dozen or so countries that have good public libraries in the first place.  But now, in the Internet age, the same research can be conducted in mere minutes, from any location.  The precision of Google and other search engines continues to improve, and with deep learning, many improvements are self-propagating.  There is a 10x to 30x increase in the productivity of searching for information.  

If you feel that this example is imprecise, take the case of LinkedIn.  It has enabled many aspects of career research and networking that were just not possible before.  If a young person wishes to explore dozens of career paths and estimate common patterns, the utility of a certain degree, or the probability of reaching a certain title, LinkedIn has an endless supply of information and people you can identify and communicate with.  

Yet despite all of this, job searches are just as lengthy as in the days before the Internet, LinkedIn, and other resources.  If a candidate can match with three potential jobs in their search region at any given time, then the connection between employer and candidate should take mere weeks, not close to a year.  There is no other widespread transaction within society that takes anywhere near as long.  Despite new apps to organize the job search and new social media outlets that announce endless meetups and networking events, technology has clearly failed to generate any productivity gains in this process.  

For one thing, the Internet has reduced the marginal cost of an application to so little that each position receives hundreds of candidates, unlike the three or four back when paper resumes had to be sent via the US Postal Service.  To cope with this, employers use software that searches resumes for keywords.  This method selects for certain types of resumes, with keyword optimization superseding more descriptive elements of the resume, and filtering out many suitable candidates in favor of those who know how to game the keyword algorithm.  
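A minimal sketch of the kind of naive keyword screen described above shows why it rewards keyword stuffing over substance.  This is a hypothetical example of the general technique, not any real applicant-tracking product, and the keyword list is invented.

```python
# Hypothetical keyword screen, not any real applicant-tracking product:
# count required keywords and rank resumes by raw hits.
REQUIRED = {"python", "machine learning", "tensorflow", "agile", "aws"}

def keyword_score(resume_text: str) -> int:
    text = resume_text.lower()
    return sum(1 for kw in REQUIRED if kw in text)

substantive = "Led the team that shipped a production recommendation system in Python."
stuffed     = "Python, machine learning, TensorFlow, agile, AWS, Python, AWS..."

print(keyword_score(substantive))   # 1 -- a strong candidate scores poorly
print(keyword_score(stuffed))       # 5 -- the keyword-optimized resume wins
```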

From this point, a desire to mitigate hiring risk, combined with the lack of imagination inherent to most corporations, defaults to a practice of increasing the number of interviewers that the candidate faces.  Three rounds and a dozen interviews are not uncommon, but by most accounts, job interviews are nearly useless as predictors of performance.  In reality, a candidate only needs to be interviewed by three people : the hiring manager, the manager above that, and one lateral peer.  If these three people cannot make an accurate assessment, adding several other interviewers is not going to add value.  Indeed, if the boss's boss cannot make an accurate assessment of candidates, then they are failing at the primary skill that an executive is supposed to have.  Reference checks are also a peculiar ritual, as a candidate will only submit favorably disposed references who have been contacted beforehand.  

Modernizing Hiring For the Information Age : Matching openings with candidates should not be so tedious in this age of search engines, emailed resumes, and LinkedIn.  Resistance to change and a miscalculation of risk and opportunity cost are the human obstacles standing athwart favorable evolution.  

To correct this obsolete situation, consider the mismanagement that occurs at the source.  Only after a hiring manager sees a persistent and pronounced need for additional personnel does the process of getting a requisition approved and advertised commence.  Hence, the job begins to receive resumes only several months after the need for a new hire arose.  After that point, the lengthy selection and interviewing process takes months more.  

Instead, what if a corporation's internal data could be gathered, mined, and processed, so that an AI identifies a cluster of gaps within the existing team, and identifies suitable candidates from LinkedIn?  Candidates with the correct skillset could be identified with a compatibility score such as '86% fit', '92% fit', and so on.  The entire process, from the point where a team begins to find itself understaffed to when a candidate deemed an acceptable fit is hired, could compress from over a year to mere weeks.  The hefty fees charged by recruiters vanish, and the shorter duration of unemployment reduces all the indirect costs of extended unemployment.  
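One way such a percentage fit could be computed is cosine similarity between a team's skill-gap vector and each candidate's skill vector.  This is only a sketch of the general idea; the skill axes and all of the weights below are invented, and no claim is made that any existing product works this way.

```python
# Hypothetical fit scoring: cosine similarity between a team's skill-gap
# vector and each candidate's skill vector.  The skill list and all
# weights are invented for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Skill axes: [verilog, python, signal processing, project mgmt, machine learning]
team_gap = [0.9, 0.2, 0.7, 0.1, 0.5]            # mined from internal project data (hypothetical)
candidates = {
    "candidate_A": [0.8, 0.3, 0.9, 0.2, 0.4],   # drawn from LinkedIn-style profiles (hypothetical)
    "candidate_B": [0.1, 0.9, 0.2, 0.8, 0.9],
}

for name, skills in candidates.items():
    print(f"{name}: {cosine(team_gap, skills):.0%} fit")
```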

For this level of dynamic assessment of gaps and subsequent candidate mapping, the capability of search and data analytics within a corporation has to evolve to a far more advanced state than presently exists.  Emails, performance reviews, project schedules, and the like all have to be searchable through the same search and patterning capabilities.  Then, this has to interface with LinkedIn, which itself has to become far more advanced, with the capability for a candidate to continuously re-verify skills and prove certain competencies (through tests, certified courses, etc.).  The platform has seen no real improvement in capabilities in the last few years, and the obvious next step (generating a complex set of skill parameters for LinkedIn members, and matching that pattern to employers with similar needs) is quite overdue.  If this seems like added work for candidates, remember that this effort is far less than the amount of time and hassle it will save in the job search process.  

Of course, such a capability spanning LinkedIn and some pattern-matching machine learning engine will not be adopted overnight.  After all, corporations still think university degrees and school rank are good indicators of candidate job performance, despite both evidence and common sense to the contrary.  Beyond that, the interface between internal corporate software and LinkedIn will take a lot of work to become robust.  Finally, the belief that a greater number of interviews somehow reduces the risk of hiring a candidate will be difficult to purge.  

But eventually, with technology companies leading the way, the massive hidden cost of current hiring practices may come to light, and give way to a system that uses AI to find more precise matches with much greater speed.  

Conclusion :  We now possess the machine learning capabilities to dynamically detect gaps within corporate teams and organizational structures that may be large enough to warrant an increase in headcount.  These gaps can be matched with parameters that can be mined from LinkedIn profiles, and provide candidates with an assessment of their approximate fit.  A percentage score calculated for each candidate is not only a more accurate indicator than the very imprecise interview process, but is far quicker as well.  It is high time that these tools were created by LinkedIn and others, and that corporate culture shifted towards their adoption.  

This application of AI is the second most necessary technological disruption that AI can deliver to our civilization at present.  For the first, check back for the next article.   

I do not have the time to pursue a company built around this type of machine learning product, but if someone else is inspired to take up this challenge, I would certainly like to be on your board of directors.  

 

Related ATOM Chapters : 

11. Implementation of the ATOM Age for Individuals

 

November 13, 2016 in Artificial Intelligence, Economics, Technology | Permalink | Comments (9)


Invisible Disruptions : Deep Learning and Blockchain

In the ATOM e-book, we examine how technological disruption can be measured, and how the aggregate disruption ongoing in the world at any given time continues along a smooth, exponentially rising trendline.  Among these, certain disruptions are invisible to most onlookers, because a tangential technology is simultaneously disrupting seemingly unrelated industries from an orthogonal direction.  In that vein, here are two separate lists of industries that are being disrupted, one by Deep Learning and the other by Blockchain.    

13 Industries Using Deep Learning to Innovate. 

20 Industries that Blockchain could Disrupt

[Chart : accelerating technology adoption rates, via Blackrock]

Note how many industries are present in both of the above lists, meaning that the sectors have to deal with compound disruptions from more than one direction.  

In addition, we see that sectors where disruption was artificially thwarted due to excessive regulation and government protectionism merely see a sharper disruption, higher up in the edifice.  When the disruption arrives through abstract technologies such as Deep Learning and Blockchain, the incumbents are unlikely to be able to thwart it, due to the source of the disruption being effectively invisible to the untrained eye.  What is understood by very few is that the accelerating rate of adoption/diffusion, depicted in the chart above from Blackrock, is enabled by such orthogonal forces that are not tied to any one product category or even industry.  

Related ATOM Chapters :

Technological Disruption is Pervasive and Deepening

The Overlooked Economics of Technology

 

September 13, 2016 in Accelerating Change, Technology, The ATOM | Permalink | Comments (9)


Artificial Intelligence Finally Disrupting Medicine

The best news of the last month was something that most people entirely missed.  Amidst all the distractions and noise that comprise modern media, a quiet press release disclosed that a supercomputer has suddenly become more effective than human doctors in diagnosing certain types of ailments.  

IBM's Watson correctly diagnoses a patient after doctors are stumped.

This is exceptionally important.  As previously detailed in Chapter 3 of The ATOM, not only was a machine more competent than an entire group of physicians, but the machine continues to improve as more patients use it, which in turn makes it more attractive to use, which enables the accrual of even more data upon which to improve further.  

But most importantly, a supercomputer like Watson can treat patients in hundreds of locations in the same day via a network connection, and without appointments that have to be made weeks in advance.  Hence, such a machine replaces not one, but hundreds of doctors.  Furthermore, it takes very little time to produce more Watsons, but it takes 30+ years to produce a doctor from birth, and only from the small fraction of humans with the intellectual ability to become a physician.  The economies of scale relative to the present doctor-patient model are simply astonishing, and there is no reason that 60-80% of diagnostic work done by physicians cannot soon be replaced by artificial intelligence.  This does not mean that physicians will start facing mass unemployment, but rather that the best among them will be able to focus on more challenging problems.  The most business-minded of physicians can incorporate AI into their practice to see a greater volume of patients with more complicated ailments.  

This is yet another manifestation of various ATOM principles, from technologies endlessly crushing the cost of anything overpriced, to self-reinforcing improvement of deep learning.  

Related :  Eight paraplegics take their first step in years, thanks to robotics.  

Related ATOM Chapters :

3. Technological Disruption is Pervasive and Deepening

4. The Overlooked Economics of Technology

 

 

August 14, 2016 in Accelerating Change, Biotechnology, Technology, The ATOM | Permalink | Comments (4)


Tesla's Rapid Disruption

MIT Technology Review has an article describing how Tesla Motors has brought rapid disruption to the previously staid auto industry, where numerous factors had long precluded the entry of new companies.  But this is nothing new for readers of The Futurist, as I specifically identified Tesla as a key candidate for disruption way back in 2006.  In Venture Capital terms, this was an exceptionally good pick at such an early stage.  

In ATOM terms, the progress of Tesla is an example of everything from how all technological disruptions are interlinked, to how each disruption is deflationary in nature.  It is not just about the early progress towards electric cars, removal of the dealership layer of distribution, or the recent erratic progress of semi-autonomous driving.  Among other things, Tesla has introduced lower-key but huge innovations such as remote wireless software upgrades of the customer fleet, which itself is a paradigm shift towards rapidly-iterating product improvement.  In true ATOM form, the accelerating rate of technological change is beginning to sweep the automobile along with it.  

When Tesla eventually manages to release a sub-$35,000 vehicle, the precedents set in dealership displacement, continual wireless upgrades, and semi-autonomous driving will suddenly all be available across hundreds of thousands of cars, surprising unprepared observers but proceeding precisely along the expected ATOM trajectory.  

July 12, 2016 in Accelerating Change, Energy, Technology, The ATOM | Permalink | Comments (3)


The Technological Progress of Video Games, Updated

A decade ago, in the early days of this blog, we had an article tracking video game graphics at 10-year intervals.  As per that cadence, it is time to add the next entry to the progression.  

The polygons in any graphical engine increase as a square root of Moore's Law, so the number of polygons doubles every three years.  
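To spell out the arithmetic behind that doubling time (assuming the common 18-month statement of Moore's Law): if compute doubles every 1.5 years, $C(t) = C_0 \cdot 2^{t/1.5}$, and if polygon throughput scales as the square root of compute, then

$$P(t) = P_0 \sqrt{2^{t/1.5}} = P_0 \cdot 2^{t/3},$$

a doubling every 3 years, or roughly a $2^{10/3} \approx 10\times$ increase over each 10-year interval shown below.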

Sometimes, pictures are worth thousands of words :

1976 :

Pong

1986 :

Enduro Racer (arcade)

1996 :

Tomb Raider (Tomb of Qualopec)

2006 :

[Screenshot of a 2006-era game]

I distinctly remember when the 2006 image looked particularly impressive.  But now, it no longer does.  This inevitably brings us to...

2016 (an entire video is available, with some gameplay footage) : 

 

This series illustrates how progress, while barely visible over one or two years, compounds into something far more dramatic over longer periods of time.   

Now, extrapolating this trajectory of exponential progress, what will games bring us in 2020?  Or 2026?  Additionally, note that screen sizes, screen resolutions, and immersion (e.g. VR goggles) have improved simultaneously.  

 

April 01, 2016 in Accelerating Change, Computing, Technology | Permalink | Comments (6)


The End of Petrotyranny - Victory

I refer readers back to an article written here in 2011, titled 'The End of Petrotyranny', where I claimed that high oil prices were rapidly burning through the buffer that was shielding oil from technological disruption.  I quantified the buffer in an equation, and even provided a point value for how much of the buffer still remained at the time.

I am happy to declare a precise victory for this prediction, with oil prices having fallen by two-thirds and remaining there for well over a year.  While hydraulic fracturing (fracking) turned out to be the primary technology to bring down the OPEC fortress, other technologies such as photovoltaics, batteries, and nanomaterials contributed secondary pressure to the disruption.  The disruption unfolded in accordance with the 2011 Law of Finite Petrotyranny :

From the start of 2011, measure the dollar-years of area enclosed by a chart of the price of oil above $70.  There are only 200 such dollar-years remaining for the current world petro-order.  We can call this the 'Law of Finite Petrotyranny'. 

Go to the original article to see various scenarios of how the dollar-years could have been depleted.  While we have not used up the full 200 dollar-years to date, the range of scenarios is now much tighter, particularly since fracking in the US continues to lower its breakeven threshold.  At present, over $2T/year that was flowing from oil importers to oil producers has now vanished, to the immense benefit of oil importers, which are the nations that conduct virtually all technological innovation.  
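In effect, the law places a budget on the area of the price curve above $70, i.e. the running sum of max(P(t) − 70, 0) over time.  A minimal sketch of that bookkeeping follows; the yearly average prices used are invented placeholders, not actual market data.

```python
# Dollar-years above $70, per the Law of Finite Petrotyranny: the area
# between the oil price curve and the $70 line.  The yearly average
# prices below are invented for illustration, not actual market data.
THRESHOLD = 70
avg_price_by_year = {2011: 95, 2012: 94, 2013: 98, 2014: 93, 2015: 49}

consumed = sum(max(price - THRESHOLD, 0) for price in avg_price_by_year.values())
print(f"Dollar-years consumed: {consumed} of the 200 available")   # 100 in this sketch
```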

The 2011 article was not the first time this subject of technological pressure rising in proportion to the degree of oil price excess has been addressed here at The Futurist.  There were prior articles in 2007, as well as 2006 (twice).  

As production feverishly scales back, and some of the less central petrostates implode, oil prices will gradually rise back up, generally saturating at the $70 level (itself predicted in 2006) in order to deplete the remaining dollar-years.  But we may never again see oil at such a high price relative to world GDP as existed for most of 2007-14 (oil would have to be $200+/barrel today to surpass, in proportion to World GDP, the record of $147 set in 2008).

 

March 08, 2016 in Accelerating Change, Economics, Energy, Technology | Permalink | Comments (21)


Two Overdue Technologies About to Arrive

The rate of technological change has been considerably slower than its trendline ever since the start of the 21st century.  I wrote about this back in 2008, but at the time, I did not have techniques for observing and measuring the gap between the rate of change and the trendline that were as advanced as those I have now.

The dot-com bust coincided with a trend toward lower nominal GDP growth (since everyone wrongly focuses on 'real' GDP, which has less to do with real-world decisions than nominal GDP does), and this has led to technological change, despite sporadic bursts, generally progressing at what is currently only 60-70% of its trendline rate.  For this reason, many technologies that seemed just 10 years away in 2000 have still not arrived as of 2014.  I will write much more on this at a later date.

But for now, two overdue technologies are finally plodding towards where many observers thought they would have been by 2010.  Nonetheless, they are highly disruptive, and will do a great deal to change many industries and societies.  

1) Artificial Intelligence : 

A superb article by Kevin Kelly in Wired Magazine describes how three simultaneous breakthroughs have greatly accelerated the capabilities of Artificial Intelligence (AI).  Most disruptions are the result of two or more seemingly unrelated technologies each crossing certain thresholds, and observers tend to be surprised because each group of observers was following only one of the technologies.  For example, the iPod emerged when it did because storage, processing, and the ability to store music as software all reached certain cost, size, and power consumption limits at around the same time.  

What is interesting about AI is how it can greatly expand the capabilities of those who know how to incorporate AI with their own intelligence.  The greatest chess grandmaster of all time, Magnus Carlsen, became so by training with AI, and it is unclear whether he would have become this great had he lived before a time when such technologies were available.  

The recursive learning aspect of AI means that an AI can quickly learn more from the new people who use it, which makes it better still.  One very obvious area where this could be used is medicine.  Currently, millions of MD general practitioners and pediatricians are seen by billions of patients, mostly for relatively common diagnostics and treatments.  If a single AI can learn enough from enough patient inputs to handle most of the common diagnostic work of doctors, then that is a huge cost savings to patients and the entire healthcare system.  Some doctors will see their employment prospects shrink, but the majority will be free to move up the chain and focus on more serious medical problems and questions.  

Another obvious use is in the legal system.  On one hand, while medicine is universal, the legal system of each country is different, and lawyers cannot cross borders.  On the other hand, the US legal system relies heavily on precedent, and there is too much content for any one lawyer or judge to manage, even with legal databases.  An AI can digest all laws and precedents and create a huge increase in efficiency once it learns enough.  This can greatly reduce the backlog of cases in the court system, and free up judicial capacity for the most serious cases.  

The third obvious application is in self-driving cars.  Driving is an activity where the full range of possible traffic situations amounts to a manageable quantity of data.  Once an AI gets to the point where it analyzes every possible accident, near-accident, and reported pothole, it can easily make self-driving cars far safer than human drivers.  This is already being worked on at Google, and is only a few years away.  

Get ready for AI in all its forms.  While many jobs will be eliminated, this will be exceeded by the opportunity to add AI into your own life and your own capabilities.  Make your IQ 40 points higher than it is when you need it most, and your memory thrice as deep - all will be possible in the 2020s for those who learn to use these capabilities.  In fact, being able to augment your own marketable skills through the use of AI might become one of the most valuable skillsets for the post-2025 workforce.   

2) Virtual Reality/Augmented Reality : 

Longtime readers recall that in 2006, I correctly predicted that by 2012-13, video games would be a greater source of entertainment than television.  Now, we are about to embark on the next phase of this process, as a technology that has had many false starts for over 20 years might finally be approaching reality.  

Everyone knows that the Oculus Rift headset will be released to consumers in 2015, and that most who have tried it have had their expectations exceeded.  It supposedly corrects many of the problems of previous VR/AR technologies that have dogged developers for two decades, and has a high resolution.  

But entertainment is not the only use for a VR/AR headset like the Oculus Rift, for the immersive medium that the device facilitates has tremendous potential for use in education, military training, and all types of product marketing.  Entirely new processes and business models will emerge.  

One word of caution, however.  My decade of direct experience running a large division of a consumer technology company compels me to advise you not to purchase any consumer technology product until it is in its third generation of consumer release, which is usually 24-48 months after initial release.  The reliability and value for money are usually not compelling until Gen three.  Do not mistake fractional generations (e.g. 'version 1.1', or 'iPhone 5, 5S, and 5C') for actual generations.  The Oculus Rift may be an exception to this norm (as are many Apple products), but in general, don't be an early adopter on the consumer side.  

Update (5/27/2016) : The same Kevin Kelly has an equally impressive article about VR/AR.  

Combining the Two :

Imagine, if you would, that the immersive movies and video games of the near future are not just fully actualized within the VR of the Oculus Rift, but that the characters of the video game adapt via connection to some AI, so that game characters far too intelligent to be overcome by hacks and cheat codes emerge.  

Similarly, imagine if various forms of training and education are not just improved via VR, but augmented via AI, where the program learns exactly where the student is having a problem, and adapts the method accordingly, based on similar difficulties encountered by prior students.  Suffice it to say, both VR and AI will transform medicine from its very foundations.  Some doctors will be able to greatly expand their practices, while others will find themselves relegated to obsolescence.  

Two overdue technologies are finally on our doorstep.  Make the most of them, because if you don't, someone else surely will.  

Related :

The Next Big Thing in Entertainment 

Timing the Singularity

The Impact of Computing : 78%/year

 

December 21, 2014 in Accelerating Change, Computing, Technology | Permalink | Comments (28)


The Education Disruption : 2015

I was not going to write an article, except that this disruption is so imminent that if I waited any longer, this article would no longer be a prediction.  Long-time readers may recall how I have often said that the more overdue a disruption is, the more sudden it is when it finally occurs, and the more off-guard the incumbents are caught.  We are about to see a disruption in one of the most anti-productivity, self-important, and corrupt industries of them all, and not a moment too soon.  High-quality education is about to become more accessible to more people than ever before.  

The Natural Progression of Educational Efficiency : The great Emperor Charlemagne lived in a time when even most monarchs (let alone peasants) were illiterate.  Charlemagne had a great interest in attaining literacy for himself and fostering literacy in others.  But the methods of education in the early 9th century were primitive, and books were handwritten and hence scarce.  Despite all of his efforts, Charlemagne only managed to learn to read after the age of 50, and never quite learned how to write.  This indicates how hard it was to attain modern standards of basic literacy at the time.  

As the invention of the printing press enabled the mass production of books, literacy became less exclusive over the subsequent centuries, and methods of teaching that could teach the vast majority of six-year-old children how to read became commonplace, delivered en masse via institutions that came to be known as 'schools'.  Since most of us grew up within a mass-delivered classroom model with minimal customization, we consider this method of delivery to be normal, and almost every parent can safely assume that if their child has an IQ above 80 or so, they will be able to read competently at the right age.  

But consider what the Internet age has made available for those who care to take it.  I can say with great certainty that the most valuable things I have learned have all been derived from the Internet, free of cost.  Whether it was the knowledge that led to new income streams, new social capital, or any other useful skills, it was available over the Internet, and that too in just the last decade.  Almost every challenge in life has an answer that can be found online.  This brings up the question of whether formal schooling, and the immense pricetag associated with it, is still the primary source from which a person can attain the most marketable skills.   

Why Education Became an Industry Prone to Attracting Inefficiency : To begin, we first have to address some of the adverse conditioning that most people receive, about what education is, what it should cost, and where it can be obtained.  Through centuries of marketing that preys on human insecurity at being left behind, and the tendency to conflate correlation with causation, an immense bubble has inflated over a multi-decade period, and is at its very peak.  

Education, which in the bottom 99.9% of classroom settings is really just the transmission of highly commoditized information, has usually correlated with greater economic prospects, especially since, until recently, very few people were likely to exceed the threshold beyond which further education no longer has a tight correlation with greater earnings.  This is why many parents are willing to spare no expense on the education of their children, even to the extent of having fewer children than they might otherwise have had, when estimating the cost of educating them.  Exploiting the emotions of parents, the education industry manages to charge ever more money for a product that is often declining in quality, with surprisingly little questioning from its customers.  

Glenn Reynolds of Instapundit, with his books 'The Higher Education Bubble' and 'The K-12 Implosion', has been the earliest and most vocal observer of a bubble in the education industry.  The vast corruption and sexual misconduct by faculty in K-12 public schools is described in the latter of those two books, but here, we will focus mostly on higher education.  

Among the dynamics he has described is how government subsidization, both of universities directly and of student loans, enables universities to increase fees at a rate that greatly outstrips inflation, which in turn allows universities to hire legions of non-academic staff, many of whom exist only to politicize the university experience and further the goals of politicians and government bureaucrats.    

As a result, university degrees have gotten more expensive, while the salaries commanded by graduates have remained flat or even fallen.  The financial return of many university degrees no longer justifies their cost, and this is true not just of Bachelor's Degrees, but even of many MBA and JD degrees from any school ranked outside the Top 10 or even Top 5.  

Graduates often have as much as $200,000 in debt, yet have difficulty finding jobs that pay more than $50,000 a year.  Student loan debt has tripled in a decade, even while many universities now see no problem in departing from their primary mission of education, and have drifted into a priority of ideological brainwashing.  Combine all these factors, and you have a generation of young people who may have student debt larger than the mortgage on a median American house (meaning they will not be the first-time home purchasers that the housing market depends on to survive), while having their heads filled with indoctrination that carries zero or even negative value in the private sector workforce.  

When you combine this erosion of value with the fact that it now takes just minutes to research a topic, from home and at any hour, that previously would have involved half a day at the public library, why should the same sort of efficiency gain not be true for more formal types of education that are actually becoming scarcer within universities?

Primed For Creative Destruction : Employers want skills, rather than credentials.  There may have been a time when a credential had a tight correlation with a skillset that an employer sought in a new hire, but that has weakened over time, given the dynamic nature of most jobs and the dilution of rigor that most degrees have undergone.  Furthermore, technology makes many skillsets obsolete, while creating openings for new ones.  With the exception of those with highly specialized advanced degrees, very few people over the age of 30 today can say that the demands of their current job have much relevance to what they learned in college, or even to the computing, productivity, and research tools they may have used in college.  Likewise, anyone who has worked at a corporation for a decade or more is almost certainly doing a very different job than the one they were doing when they were first hired.  

Hence, the superstar of the modern age is not the person with the best degree, but rather the person who acquires the most new skills with the greatest alacrity, and the person with the most adaptable skillset.  A traditional degree has an ever-shortening half-life of relevance as a person's career progresses, and even fields like Medicine and Law, where one cannot practice without the requisite degree, will not be exempt from this loosening correlation between pedigree and long-term career performance.  Agility and adaptability will supersede all other skillsets in the workforce.    

Google, always leading the way, no longer requires college degrees, and has recently disclosed that about 14% of its employees do not have them.  If a few other technology companies follow suit, then the workforce will soon have a pool of people working at very desirable employers who managed to attain their positions without the time and expense of college.  If employers in less dynamic sectors still resist this concept, they will find it harder to ignore the growing number of resumes from people who happen to be alumni of Google, despite not having the required degree.  As change happens on the margins, it will only take a small percentage of the workforce to be hired by prestigious employers.           

The Disruption Begins at the Top : Since this disruption is technological and almost entirely about software, perhaps the disruption has to originate among the people most directly responsible for creating it.  The program with the potential to slash the cost of entry into a major career category is an online Master of Science in Computer Science (MSCS) degree offered through a collaboration between the Georgia Institute of Technology, Udacity, and AT&T.  For an estimated cost of just $6700, this program can enroll 10,000 geographically dispersed students at once (as opposed to the mere 300 MSCS degrees per year that Georgia Tech was awarding previously).  This is a tremendous revolution in terms of both cost and capacity.  A degree that can make a graduate eligible for high-paying jobs in a fast-growing field is now accessible to anyone with the ability to succeed in the program.  The implications of this are immense.  

For one thing, this profession, which happens to be the one with possibly the fastest-growing demand, has itself found a way to greatly increase the influx of new contributors to the field.  By removing both cost and geographical barriers, the program competes not just with brick-and-mortar MSCS programs, but with other degrees as well.  Students who may otherwise not have considered Computer Science as a career at all may now choose it simply due to the vastly lower cost of preparation relative to similarly high-paying careers like other forms of engineering, law, or medicine.  Career changers can jump the chasm at lower risk than before, for the same reasons.  

As fields similarly suited to remote learning (say, systems engineering, mathematics, or certain types of electrical engineering) see MOOC degree programs created for them, more avenues open up.  Fields whose education can be more easily adapted to this model will have an inherent advantage, in terms of attracting talent, over fields that cannot be learned this way.  These fields in turn grow in size, becoming a larger portion of the economy, and creating even more demand for new entrants above a certain competence threshold.  

But these fields are still not the 'top' echelon of professional excellence.  The profession that is the most widespread, most dynamic, most durable, and has created the greatest wealth, is one that universities almost never do a good job of teaching or even discussing : that of entrepreneurship.  I have stated before that the ever-increasing variety of technological disruption means that the foremost career of the modern era is that of the serial entrepreneur.  If universities are not the place where the foremost career can be learned, then how important are formal degrees from these universities?  Since each entrepreneurial venture is different, the individual will have to synthesize a custom solution from available components.  

Multi-Faceted Disruption : As The Economist has noted, MOOCs have not yet unleashed a 'gale of Schumpeterian creative destruction' onto universities.  But that framing still conflates the degree with the knowledge, particularly when the demands of the economy may shift many times during a person's career.  Udacity, Coursera, MITx, Khan Academy, and Udemy are just a few of the entities enabling low-cost education at all levels.  Some are for-profit, some are non-profit.  Some address higher education, and some address K-12 education.  Some count as credit towards degrees, and some are not intended for degree-granting, but rather for remedial learning.  But among all these websites, an innovative pupil can learn a variety of seemingly unrelated subjects and craft an interlocking, holistic education that is specific to his or her goals.  

When the sizes and shapes of education available online have so much variety, many assumptions about who has what skills will be challenged.  There will be too many counterexamples against the belief that a certain degree qualifies a person for a certain job.  Furthermore, the standardization of resumes and qualifications that the paradigm of degrees creates has, until now, gone largely unchallenged.  People who are qualified in two or more fields will be able to cast a wider net in their careers, and entrepreneurs seeking to enter a new market can get up to speed swiftly.  

Scale to the Topmost Educators : There was a time when music and video could not be recorded.  Hundreds of orchestras across a nation might be playing the same song, or the same play might be performed by hundreds of thespians at the same time.  Recording technologies enabled the most marketable musicians and actors to reach millions of customers at once, benefiting both them and the consumer, while eliminating the bottom 99% of workers in these professions, who could no longer justify their presence in the marketplace and had to adapt.

The same will happen to teachers.  It is not efficient for the same 6th-grade math or 8th-grade biology lesson to be taught by hundreds of thousands of teachers across the English-speaking world each year.  Instead, technology will enable scale and efficiency.  The best few lectures will be seen by all students, and it is quite possible that the best teacher, as determined by market demand, earns far more than one currently thinks a teacher can earn.  The rise of the 'celebrity teacher' is entirely possible, when one considers the disintermediation and concentration that has already happened with music and theatrical production.  This sort of competition will increase the quality of instruction that students receive, and ensure that remuneration is more closely tied to teacher caliber.  

Conclusion : It is not often that we see something experience a dramatic worsening in its cost/benefit ratio while competitive alternatives simultaneously become available at far lower costs than just a few years prior.  When a status quo has existed for the entire adult lifetime of almost every American alive today, people fail to contemplate the peculiarity of spending as much as the cost of a house on a product of highly variable quality, very uncertain payoff, and very little independent auditing.  The assumption that paying a huge price for a certain credential will lead to a certain career with a certain level of earnings is now so outdated that the edifice will topple far more quickly than many people are prepared for.  

2015 is a year that will see the key components of this transformation fall into place.  Some people will enter the same career while spending $50,000 less on the requisite education than they may have expected.  Many colleges will shrink their enrollments or close their doors altogether.  The light of accountability will be shone on the vast corruption and ideological extremism present in some of the most expensive institutions (Moody's has already downgraded the outlook of the entire US higher education industry).  But most importantly, the most valuable knowledge will become increasingly self-taught from content available to all, and the entire economy will begin the process of adjusting to this new reality.  

See Also : 

The Carnival of Creative Destruction

July 23, 2014 in Accelerating Change, Core Articles, Technology


The End of Petrotyranny

As oil prices remain high, we once again see murmurs of anticipated doom from various quarters.  Such fears are grossly miscalculated, as I have described in my 2007-08 articles about how oil at $120/barrel creates desirable chain reactions, as well as my rebuttal to the poorly considered beliefs of peak oil alarmists, who seem capable of being sold not one, but two bridges in Brooklyn.  Today, however, I am going to combine the concepts in both of those articles with some new analysis I have done to enable us to predict when oil will lose the economic power it currently holds.  You are about to see that not only are peak oil alarmists wrong, but they are just about as wrong as those predicting in 1988 that the Soviet Union would soon dominate the world, and will soon be equally worthy of ridicule.

Unenlightened Punditry and Fashionable Posturing :

As I mentioned in a previous article, many observers incessantly contradict themselves on whether they want oil to be inexpensive, or whether they want higher oil prices to spur technological innovations.  One of the most visible such pundits is Thomas Friedman, who has many interesting articles on the subject, such as his 2007 piece titled 'Fill 'Er Up With Dictators' :

But as oil has moved to $60 to $70 a barrel, it has fostered a counterwave — a wave of authoritarian leaders who are not only able to ensconce themselves in power because of huge oil profits but also to use their oil wealth to poison the global system — to get it to look the other way at genocide, or ignore an Iranian leader who says from one side of his mouth that the Holocaust is a myth and from the other that Iran would never dream of developing nuclear weapons, or to indulge a buffoon like Chávez, who uses Venezuela’s oil riches to try to sway democratic elections in Latin America and promote an economic populism that will eventually lead his country into a ditch.

But Mr. Friedman is a bit self-contradictory on which outcome he wants, as evidenced across his New York Times columns.

Over here, he says :

In short, the best tool we have for curbing Iran’s influence is not containment or engagement, but getting the price of oil down

And here, he says :

So here’s my prediction: You tell me the price of oil, and I’ll tell you what kind of Russia you’ll have. If the price stays at $60 a barrel, it’s going to be more like Venezuela, because its leaders will have plenty of money to indulge their worst instincts, with too few checks and balances. If the price falls to $30, it will be more like Norway. If the price falls to $15 a barrel, it could become more like America

Yet over here he says :

Either tax gasoline by another 50 cents to $1 a gallon at the pump, or set a $50 floor price per barrel of oil sold in America. Once energy entrepreneurs know they will never again be undercut by cheap oil, you’ll see an explosion of innovation in alternatives.

As well as over here :

And by not setting a hard floor price for oil to promote alternative energy, we are only helping to subsidize bad governance by Arab leaders toward their people and bad behavior by Americans toward the climate.

All of these articles were written within a 4-month period in early 2007.  Each argument is plausible on its own, but the two are mutually exclusive.  Mr. Friedman, what do you want?  Higher oil prices or lower oil prices?  Such confusion indicates how the debate about energy costs and technology is often high on rhetoric and low on analysis. 

Much worse, however, is the fashionable scaremongering that the financial media uses to fill up its schedule, amplified by a general public that gets suckered into groupthink.  To separate the whining from the reality, I apply the following simple test to verify whether people are actually being pinched by high oil prices or not.  If a large portion of average Americans have made arrangements to carpool to work (as was common in the 1970s), then oil prices are high.  Absent the willingness to make this adjustment, their whining about gasoline is not a reflection of actual hardship.  This enables us to declare that oil prices are not approaching crisis levels until most 10-mile-plus commuters are carpooling, that too in groups of three, rather than just two.  The coordination of carpools is thus the minimum test of whether oil prices are actually causing any significant changes in behavior. 

Fortunately, $100 oil, a price that was considered a harbinger of doom as recently as 2007, is not even enough to induce carpooling in 2011.  This quiet development has gone remarkably unnoticed, and conceals the substantial economic progress that has occurred.   

Economic Adaptations :

The following chart from Calculated Risk shows the US trade deficit split between oil and non-oil imports.  This chart is not indexed as a percentage of GDP, but if it were, we would see that oil imports at $100/barrel today are not a much higher percentage of GDP than in 1998, when oil was just $20/barrel.  In fact, the US produces much more economic output per barrel of oil compared to 1998.  We can thus see that unlike in 1974, when the US economy had much less demand elasticity for oil, today the economy's ability to adjust oil consumption more quickly in reaction to higher prices makes the bar for an 'oil shock' much harder to clear.  US oil imports will never again attain the same percentage of GDP as was briefly seen in 2008. 

Of even more importance is the amazingly consistent per capita consumption of oil since 1982, which has remained at exactly 4.6 barrels/person despite a tripling of real GDP per capita during the same period (chart by Morgan Downey).  This immediately deflates the claim that the looming economic growth of China and India will greatly increase oil consumption, since the massive growth from 1982 to 2011 did not manage to do this.  At this point, annual oil consumption, currently at around 32 billion barrels, only rises at the rate of population growth - about 1% a year. 

This leads me to make a declaration.  32 billion barrels at around $100/barrel is $3.2 Trillion in annual consumption.  This is currently less than 5% of nominal world GDP.  I hereby declare that :

Oil consumption worldwide will never exceed $4 Trillion/year, no matter how much inflation, political turmoil, or economic growth there is.  Thus, 'Peak Oil Consumption' happens long before 'Peak Oil Supply' ever could. 

This would mean that oil would gradually shrink as a percentage of world GDP, just as it has shrunk as a percentage of US GDP since 1982.  Even when world GDP is $150 Trillion, oil consumption will still be under $4 Trillion a year, and thus a very small percentage of the economy.  Mark my words, and proceed further to read about how I can predict this with confidence.   
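As a rough check on the arithmetic behind this declaration, the short Python sketch below recomputes oil's share of world output.  The 32 billion barrels, ~$100/barrel price, $4 Trillion ceiling, and $150 Trillion future world GDP are the figures used above; the ~$70 Trillion figure for current nominal world GDP is my own assumption for illustration.

```python
# Rough arithmetic sketch of the declaration above, not a forecast model.
# Barrel count, prices, and the $4T ceiling are the article's figures;
# the nominal world GDP values are assumptions for illustration only.

def oil_share_of_gdp(barrels_per_year, price_per_barrel, world_gdp):
    """Return annual oil spending and its share of nominal world GDP."""
    spend = barrels_per_year * price_per_barrel
    return spend, spend / world_gdp

# Current case: 32 billion barrels at ~$100/barrel, world GDP assumed ~$70T.
spend, share = oil_share_of_gdp(32e9, 100, 70e12)
print(f"Current: ${spend / 1e12:.1f}T spent on oil, {share:.1%} of world GDP")

# Ceiling case: the $4T cap measured against a $150T world GDP.
print(f"Ceiling: {4e12 / 150e12:.1%} of a $150T world GDP")
```

Under these assumptions, oil is under 5% of world output today and under 3% even at the ceiling, which is the sense in which oil's economic weight can only shrink. 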

The Carnival of Creative Destruction :

There are at least seven technologies that are advancing to reduce oil demand by varying degrees, many of which have been written about separately here at The Futurist : 

1) Natural Gas : Technologies that aid the discovery of natural gas have advanced at great speed, and supplies have skyrocketed to a level that exceeds anything humanity could consume in the next few decades.  The US alone has enough natural gas to more than offset all oil consumption, and the price of natural gas is currently on par with $50 oil. 

2) Efficiency gains : From innovations in engine design, airplane wing shape, reflective windows, and lighter nanomaterials, efficiency is advancing rapidly, to the extent that economic growth no longer increases oil consumption per capita, as described earlier.  There are many options available to consumers seeking 40 mpg or higher without sacrificing too much power or size, and I predicted back in early 2006 that in 2015, a 4-door family car with a 240 hp engine would deliver 60 mpg (or equivalent) yet still cost no more than $35,000 in 2015 dollars.  People scoffed at that prediction then, but now it seems quite safe.   

3) Cellulose Ethanol and Algae Oil : Corn ethanol was never going to be suitable in cost or scale, but the infrastructure established by the corn ethanol industry makes the transition to more sophisticated forms of ethanol production easier.  Fuels from switchgrass and algae are much more cost-effective, and will be ramping up in 2012.  Solazyme is an algae oil company that went public recently, and already has a market capitalization of $1.5 Billion. 

4) Batteries : Most of the limitations of electric and hybrid vehicles stem from shortcomings in battery technology.  However, since batteries are improving at a rate that is beginning to exceed the traditional 5-8% per year, and companies such as Tesla are able to lower the cost of their fully electric vehicles, the knee of the curve is near. 

5) Telepresence : Telepresence, while expensive today, will drop in price under the Impact of Computing and displace a substantial portion of business air travel, as described in detail here.  By 2015, geographically dispersed colleagues will seem to be closer to each other, despite meeting in person less often than they did in 2008.   

6) Wind Power : Wind power already supplies almost 3% of global electricity consumption, and is growing quickly.  When combined with battery advances that improve the range and power of electric and plug-in hybrid vehicles, we get two simultaneous disruptions - oil being displaced not just by electricity, but by wind electricity.    

7) Solar Power : This source today generates the least power among those listed here.  But it is the fastest growing of the group, with multiple technologies advancing at once, and with decades of steady price declines finally reaching competitive price points.  It also has many structural advantages, most notably the fact that it can be deployed on land that is currently unused and inhospitable.  Many of the countries with the fastest growth in energy consumption are also those with the greatest solar intensity. 

Plus, these are just the technologies that displace oil demand.  There are also technologies that increase oil supply, such as supercomputing-assisted oil discovery and new drilling techniques.  Supply-increasing technologies work to reduce oil prices, and while they may slow the displacement of oil demand, they too work to weaken petrotyranny. 

The problem in any discussion of these technologies is that the debate centers around an 'all or none' simplicity of whether the alternative can replace all oil demand, or none at all.  That is an unnuanced exchange that fails to comprehend that each technology only has to replace 10% of oil demand.  Natural gas can replace 10%, ethanol another 10%, efficiency gains another 10%, wind + solar another 10%, and so on.  Thus, if oil consumption as a percentage of world GDP is lower in a decade than it is today, that itself is a huge victory.  It hardly matters which technology advances faster than the others (in 2007, natural gas did not appear as though it would take the lead that it enjoys today); what matters is that all are advancing, and that many of these technologies are highly complementary to each other.     

What is also overlooked is how quickly the pressure to shift to alternatives grows as oil becomes more expensive.  If, say, cellulose ethanol is cost-effective with oil at $70, then oil at $80 creates a modest $10 differential in favor of cellulose.  If oil is $120, that differential is now $50, or five times larger.  Such a delta causes much greater investment and urgency to ramp up research and production in cellulose ethanol.  Thus, each increment in the oil price creates a much larger zone of profitability for any alternative. 
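To make the non-linearity concrete, here is a tiny Python sketch of the same arithmetic; the $70/barrel breakeven for the hypothetical alternative is the example used above, and the price points are arbitrary.

```python
# Toy illustration of the paragraph above: the per-barrel advantage of an
# alternative with an assumed $70/barrel breakeven grows far faster than
# the oil price itself.
BREAKEVEN = 70  # $/barrel, the example figure used above

for oil_price in (80, 100, 120):
    advantage = oil_price - BREAKEVEN
    print(f"Oil at ${oil_price}: ${advantage}/barrel in favor of the alternative")
```

A 50% rise in the oil price, from $80 to $120, quintuples the alternative's per-barrel advantage. 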

The Cost of Petrotyranny :

This map of nations scaled in proportion to their petroleum reserves replaces thousands of words.  Some contend that the easy money derived from exporting oil leads to inevitable corruption and the financing of evil well beyond the borders of petro-states, while others lament the misfortune that this major energy source is concentrated in a very small area containing under 2% of the world's population.  Other sources of energy, such as natural gas, are much more evenly distributed across the planet, and this supply chain disadvantage is starting to work against oil.   

However, as we saw in the 2008 article, many of these regimes are dancing on a very narrow beam only as wide as the span between oil of $70 and $120/barrel.  While a price below $70 would be fatal to the current operations of Iran, Venezuela, and Russia, even a high price leads to a shrinkage in export revenue, as domestic consumption rises to reduce export units to a greater degree than can be offset by a price rise.  Furthermore, higher prices accelerate the advance of the previously mentioned technologies.  For the first time, we can now estimate how long oil can still hold such an exalted economic status. 

Quantifying the Remaining Petro-Yoke :

We can now make the analysis of both the technological and political pressure exerted by a particular oil price more precise.  We can quantify the rate of technological demand destruction, and predict the actual number of years before oil ceases to have any ability to cause economic recessions, and before regimes like Iran, Venezuela, and Russia can no longer subsist on oil exports to the same degree.  This brings me to the second declaration of this article :

From the start of 2011, measure the dollar-years of area enclosed by a chart of the price of oil above $70.  There are only 200 such dollar-years remaining for the current world petro-order.  We can call this the 'Law of Finite Petrotyranny'. 

Allow me to elaborate. 

Through some proprietary analysis, I have calculated the remaining lifetime of oil's economic importance as follows :

  • From the start of 2011, take the average price of West Texas Intermediate (WTI), Brent, or NYMEX oil, and subtract $70 from that, each year. 
  • Take the number accumulated, and designate that as 'X' dollar-years.
  • As soon as X equals 200 dollar-years, oil will not just fall below $70, but will never again be a large enough portion of world GDP to have a significant macroeconomic impact. 
     

You can plug in your own numbers to estimate the year in which oil will cease to exert such power.  For example, if you believe that oil will average $120, which is $50 above the $70 floor, then the X points are expended at a rate of $50/year, meaning depletion at the end of 2014.  If oil instead averages just $100, then the X points are expended at $30/year, meaning it will take 6.67 years, or until late 2017, to consume them.  Points are only depleted when oil is above $70, but are not restored if oil is below $70 (as research projects may be discontinued or postponed, but work already done is not erased).  For those who (wrongly) insist that oil will soon be $170, the good news for them is that in such an event they will see the X points depleted in just two short years.  The graph shows three scenarios, with oil averaging $120, $110, and $100, and indicates the year in which each price trend would exhaust the 200 X points at points A, B, and C, which correspond to the areas of the three rectangles.  In reality, price fluctuations will cause variations in the rate of X point depletion, but you get the idea. 
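For readers who prefer to run the scenarios themselves, here is a minimal Python sketch of the dollar-years accounting described above.  Like the rectangles in the chart discussion, it assumes a constant average price per scenario rather than a fluctuating one.

```python
# Minimal sketch of the 'Law of Finite Petrotyranny' accounting described
# above: X points accrue at (average price - $70) per year, floored at
# zero, until the 200 dollar-year budget is exhausted.

FLOOR = 70     # $/barrel; no points accrue below this price
BUDGET = 200   # total dollar-years available from the start of 2011

def years_to_depletion(avg_price):
    """Years needed to exhaust the budget at a constant average oil price."""
    annual_burn = max(avg_price - FLOOR, 0)
    return float('inf') if annual_burn == 0 else BUDGET / annual_burn

for price in (120, 110, 100):  # scenarios A, B, and C
    print(f"Oil averaging ${price}: ~{years_to_depletion(price):.1f} years from the start of 2011")
```

A constant $120 average exhausts the budget in 4 years (end of 2014), while a $100 average takes about 6.7 years (late 2017), matching scenarios A and C above. 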

Keep in mind the Law of Finite Petrotyranny, and on that basis, welcome any increase in oil prices as the hastening force of oil replacement that it is.  My personal opinion?  We average about $100/barrel, causing depletion of the X points in 2017 (scenario 'C' in green). 

Conclusion :

So what happens after the Law of Finite Petrotyranny manifests itself?  Let me pre-empt the strawmen that critics will erect, and state that oil will still be an important source of energy.  But most people will no longer care about the price of oil, much as the average person does not keep track of the price of natural gas or coal.  Oil will simply be a fuel no longer important enough to cause recessions or greatly alter consumer behavior through short-term spikes.  Many OPEC countries will see a great reduction in their power, and will no longer be able to placate their citizens through petro-handouts alone.  These countries would do well to act now and diversify their economies, phase in civil liberties while they can still do so incrementally, and prepare for a future of much lower leverage over their current customers.

So cheer oil prices higher so that the X points get frittered away quickly.  It will be fun. 

 

Related :

A Future Timeline for Energy

A Future Timeline for Automobiles 

July 01, 2011 in Accelerating Change, Core Articles, Economics, Energy, Technology


Carbonara

Observers have been waiting for carbon nanotubes, buckyballs, and graphene to transform the world for quite some time, and the wait has been longer than they expected.  Enthusiasts for this new miracle material had all but vanished.  Is this warranted?  Where does the state of innovation in the various forms of carbon that could yield ultra-strong, ultra-light materials and superfast computing really stand? 

CNET had an article just last month about the multiple disruptions that the various allotropes of carbon are about to make.  That is quite exciting, except that CNET also had a similar article in 2003.  Similarly, Ray Kurzweil extolled carbon nanotubes as a successor to silicon quite heavily in 1999, but not quite as much now, even though that supposed transition should now be much closer.  This does not mean that Kurzweil's estimation was in error, but rather that the technology was unexpectedly stagnant during the early 2000s.  So let us examine why there was such an interruption, and whether progress has since resumed.    

I wrote in 2009 about how we had undergone a multi-year nanotech winter, and how we were emerging from it in 2009.  As anticipated, carbon nanotubes are now finally falling in price, and being produced at a scale that could start making an impact.  Sure enough, activity began to stir right as I predicted, and the 2010 Nobel Prize in Physics was awarded for research on graphene.  Just like CNET's article, Wired also has an article about the diverse applications that graphene could revolutionize.  Combining the two articles, we can summarize the core possibilities of carbon allotropes as follows :

Ultra-dense computing and storage : Graphene transistors smaller than 1 nanometer have been demonstrated.  Carbon allotropes could keep the exponential doubling of both computing and storage capacity going well into the 2030s. 

Carbon Fiber Vehicles : This lightweight, ultrastrong material can save vast amounts of fuel by reducing the weight of cars and aeroplanes.  While premium products such as the $6000 Trek Madone bicycles are already made from carbon fiber, greater volume is reducing prices and will soon make the average car much lighter than it is today, increasing fuel efficiency and reducing traffic fatalities. 

Energy Storage : Natural Gas is not only much cheaper than oil per unit of energy (oil would have to drop to about $30 to match current NG prices), but the supply of NG is more evenly distributed across the world than the oil supply.  The US alone has an enormous reserve of natural gas that could ensure total energy independence.  The main problem with NG is storage, which is the primary reason oil displacement is not happening rapidly.  But microporous carbon can effectively act as a sponge for natural gas, enabling safe and easy transport.  This could potentially change the entire energy map.

There are other applications beyond these core three, but suffice it to say, the allotropes of carbon can perform a greater variety of functions than any other material available to us today.  Watch for indications of carbon allotropes popping up in the strangest of places, and know that each emergence drives the cost down ever lower. 

Related :

Nanotechnology : Bubble, Bust,.....Boom?

Milli, Micro, Nano, Pico

November 01, 2010 in Accelerating Change, Nanotechnology, Science, Technology


The TechnoSponge

After years of thinking about this, I have come up with a term that describes the new, 'good' type of deflation that is evading the notice of almost all of the top economists in the world today.  This changes many of the most fundamental assumptions about economics, even as most economic thought remains far behind the curve. 

First, let us review some events that transpired over the last 2 years.  To stave off the prospect of a deflationary spiral that could lead to a depression, the major governments of the world followed 20th-century textbook economics, and injected colossal amounts of liquidity into the financial system.  In the US, not only was the Fed Funds rate lowered to nearly zero (for 18 months now, and counting), but an additional $1 Trillion of liquidity was injected as well. 

However, now that a depression has been averted and the recession has ended, we were supposed to experience inflation even amidst high unemployment, just as we did in the 1970s, which would at least have eroded debt burdens.  But alas, there is still no inflation, despite a yield curve with more than 3% of steepness, and a near-0% FF rate for so long.  How could this be?  What is absorbing all the liquidity?   

In The Impact of Computing, I discussed how 1.5% of World GDP today comprises products where the same functionality can be purchased for a price that halves every 18 months.  'Moore's Law' applies to semiconductors, but storage, software, and some biotech are also on a similar exponential curve.  This force makes productivity gains higher, and inflation lower, than traditional 20th-century economics would anticipate.  Furthermore, the second derivative is also increasing - the rate of productivity gains is itself accelerating.  1.5% of World GDP may be small, but what about when this percentage grows to 3% of World GDP?  5%?  We may only be a decade away from this, and the impact of this technological deflation will then be more obvious. 
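As a back-of-the-envelope illustration (my own simplification, not a model from The Impact of Computing), the Python sketch below estimates how much a sector on that price curve subtracts from aggregate measured inflation at various GDP shares, assuming the sector's weight in the price index equals its share of GDP.

```python
# Back-of-the-envelope sketch, not the article's model: if a slice of GDP
# delivers the same functionality at a price that halves every 18 months,
# roughly how much does that slice alone subtract from aggregate inflation?
# Simplifying assumption: the slice's index weight equals its GDP share.

annual_factor = 0.5 ** (12 / 18)      # price multiplier after one year (~0.63)
annual_decline = 1 - annual_factor    # ~37% price decline per year

for share in (0.015, 0.03, 0.05):     # 1.5%, 3%, and 5% of GDP
    drag_pp = share * annual_decline * 100
    print(f"At {share:.1%} of GDP: ~{drag_pp:.1f} percentage points of deflationary drag")
```

Under this simplification, the deflationary drag roughly doubles and then more than triples as the share grows from 1.5% to 3% and 5%, which is the intuition behind the paragraphs that follow. 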

Most high-tech companies have a business model that incorporates a sort of 'bizarro force' that is completely the opposite of what old-economy companies operate under : The price of the products sold by a high-tech company decreases over time.  Any other company will manage inventory, pricing, and forecasts under an assumption of inflationary price increases, but a technology company exists under the reality that all inventory depreciates very quickly (at over 10% per quarter in many cases), and that price drops will shrink revenues unless unit sales rise enough to offset it (and assuming that enough unit inventory was even produced).  This results in the constant pressure to create new and improved products every few months just to occupy prime price points, without which revenues would plunge within just a year.  Yet, high-tech companies have built hugely profitable businesses around these peculiar challenges, and at least 8 such US companies have market capitalizations over $100 Billion.  6 of those 8 are headquartered in Silicon Valley. 

Now, here is the point to ponder : We have never had a significant technology sector while also facing the fears (warranted or otherwise) of high inflation.  When high inflation vanished in 1982, the technology sector was too tiny to be considered a significant contributor to macroeconomic statistics.  In an environment of high inflation combined with a large technology industry, however, major consumer retail price points, such as $99.99 or $199.99, become more affordable in real terms.  The same also applies to enterprise-class customers.  Thus, demand creeps upwards even as the cost to produce the products goes down on the same Impact of Computing curve.  This gives a technology company the ability to postpone price drops and expand margins, or to sell more volume at the same nominal dollar price.  Hence, higher inflation causes the revenues and/or margins of technology companies to rise, which means their earnings per share certainly surge.

So what we are seeing is that the gigantic amount of liquidity created by the Federal Reserve is instead cycling through technology companies and increasing their earnings.  The products they sell, in turn, increase productivity and promptly push inflation back down.  Every uptick in inflation merely guarantees its own pushback, and the 1.5% of GDP that mops up all the liquidity and creates this form of 'good' deflation can be termed the 'TechnoSponge'.  So how much liquidity can the TechnoSponge absorb before saturation? 

At this point, if the US prints another $1 Trillion, that will still merely halt deflation, and there will be no hint of inflation at all.  It would take a full $2 Trillion to saturate the TechnoSponge, and temporarily push consumer inflation to even the less-than-terrifying level of 4% while also generating substantial jumps in productivity and tech company earnings.  In fact, the demographics of the US, with baby boomers reaching their geriatric years, are highly deflationary (and this is the bad type of deflation), so the US would have to print another $1 Trillion every year for the next 10 years just to offset demographic deflation, and keep the TechnoSponge saturated. 

A TechnoSponge that is 1.5% of GDP might be keeping CPI inflation at under 2%, but when the TechnoSponge is 3% of GDP, even trillions of dollars of liquidity won't halt deflation.  Deflation may become normal, even as living standards and productivity rise at ever-increasing rates.  The people who will suffer are holders of debt, particularly mortgage debt.  Inflating away debt will no longer be a tool available to rescue people (and governments) from their errors.  The biggest beneficiaries will be technology companies, and those who are tied to them. 

But to keep prosperity rising, productivity has to rise at the maximum possible rate.  This requires the TechnoSponge to be kept full at all times - the 'new normal'.  Thus, the printing press has to start on the first $1 Trillion now, and printing has to continue until we see inflation.  Economists will be surprised at how much can be printed without seeing any inflation, and will not be able to draw the connection as to why the printed money is boosting productivity. 

Related :

The Impact of Computing

Timing the Singularity

July 01, 2010 in Accelerating Change, Computing, Economics, Technology, The Singularity


The Carnival of Creative Destruction

Words like 'disruption' and 'destruction' usually have negative meanings, and one may strain to find any good ways in which to use the terms.  But today, the accelerating rate of change ensures that more technologies alter more aspects of life at an ever-quickening rate.  A little-understood dimension of this is the concept of Joseph Schumpeter's 'Creative Destruction', where the process of technological change topples existing norms and replaces them with new ones, often quite rapidly. 

Technological diffusion was in a lull in 2008, as I pointed out at the time.  But now, in 2010, I am happy to report that the lull has passed, and that the accelerating rate of change is rising back to the long-term exponential trendline (although it may not be fully back at the trendline until 2013, when people who have not been paying attention will be wondering why they were taken by surprise).  The Impact of Computing continues to progress, infusing itself into a wider and wider swath of our lives, and speeding up the rate of change in complacently stagnant industries that never thought technology could affect them.  Silicon Valley continues to be 'ground zero' for creative destruction, and complacent industries thousands of miles away could be toppled by someone working from their bedroom in Silicon Valley. 

Just a few of the examples of creative destruction presently in process have been covered by prior articles here at The Futurist.  These, along with others, are :

1) Video Conferencing is poised to disrupt not just airline and hotel industry revenues (which stand to lose tens of billions of dollars per year of business travel revenue), but the real-estate, medical, and aeronautical industries as well.  Corporations will see substantial productivity gains from successful adoption of videoconferencing as a substitute for 50% or more of their travel expenses.  Major mergers and acquisitions have happened in this sector in the last few months, and imminent price reductions will open the floodgates of diffusion.  Skype provides a form of video telephony that is free of cost.  This is described in detail in my August 2008 article on the subject, as well as in my earlier October 2006 introductory article. 

2) Surface Computing, which I wrote about in July of 2008, has begun to emerge in a myriad of forms, from the handheld Apple iPad to the upcoming consumer version of the table-sized Microsoft Surface.  This not only transforms human-computer interaction for the first time in decades, but the Apple 'Apps' ecosystem alters the utility of the Internet as well.  All sizes between the blackboard and the iPad will soon be available, and by 2015, personal computing, and the Internet, will be quite different than they are today, with surfaces of varying sizes abundant in many homes. 

3) The complete and total transformation of video games into the dominant form of home entertainment will be visible by 2012 through a combination of technologies such as realistic graphics, motion-responsive controllers, 3-D televisions, voice recognition, etc.  The biggest casualty of this disruption will be television programming, which will struggle to retain viewers.  Beyond this, the way in which humans process sensations of pleasure, excitement, and entertainment will irrevocably change.  Thus, the way humans relate to each other will also change.  I have written about this in April 2006, with a follow-up in July 2009. 

4) The book-publishing industry has been stubbornly resistant to technology, as evidenced by their insistence as late as 2003 that manuscript queries be submitted by postal mail, and that a self-addressed stamped envelope be enclosed in which a reply can be sent.  A completed manuscript would take a full 12 months to be printed and distributed, and the editors didn't even find this to be odd.  Fortunately, two simultaneous disruptions are toppling this obsolete and unproductive industry from both ends.  Print-on-demand services that greatly shorten the self-publishing process and entry-cost, such as iUniverse and Blurb, are now flexible and easy, while finished books can further avoid the paper-binding process altogether and be available to millions in e-book format for the Kindle and other e-readers.  Books that cost, say, $15 to print, bind, and distribute now cost almost zero, enabling the author and reader to effectively split the money saved.  When e-readers are eventually available for only $100, bookstores that sell paper books will be relegated to surviving mostly on gifts, coffee table books, and cafe revenues.  This is a disruption that is happening quickly due to it being so overdue in the first place, resulting in a speedy 'catchup'.  I wrote about this in more detail in December of 2009.

5) The automobile is undergoing multiple major transformations at once.  Strong, light nanomaterials are entering the bodies of cars to increase fuel efficiency, engines are migrating to hybrid and electrical forms, sub-$5000 cars in India and China will lead to innovations that percolate up to lower the cost of traditional Western models, and the computational power engineered into the average car today leads to major feature jumps relative to models from just 5 years ago.  The $25,000 car of 2020 will be superior to the $50,000 car of 2005 in every measurable way. 

By 2016, consumer behavior will change to a mode where people consider it normal to 'upgrade' their perfectly functioning 6-year-old cars to get a newer model with better electronic features.  This may seem odd, but people did not tend to replace fully functional television sets before they failed until the 2003 thin-TV disruption.  The Impact of Computing pulls ever-more products into a rapid trajectory of improvement. 

By 2018, self-driving cars will be readily available to the average US consumer, and will constitute a significant fraction of cars on the highway.  This will revise existing assumptions about highway speeds and acceptable commute distances, and will further impede the real estate prices of expensive areas. 

6) The Mobile Internet revolution, which I wrote about in October of 2009, is already transforming the way consumers in developed markets access the Internet.  The bigger disruption is the entry of 1 billion new Internet users from emerging economies.  While many of these people have relatively little education compared to Western Internet users, as the West shrinks as a fraction of total Internet mindshare, many Western cultural quirks that are seen as normal might be seen for the minority positions that they are.  Thomas Friedman's concept of the world being 'flat' has not even begun to fully manifest. 

7) The energy sector is in the midst of multiple disruptions, which will introduce competition between sectors that were previously unrelated.  Electric vehicles displace oil consumption with electricity, even while the electricity itself starts to be generated through nuclear, solar, and wind.  The electrical economy will be further transformed by revolutions in lighting and batteries.  Cellulosic ethanol will arrive in 2012, and further replace billions of gallons of gasoline.  I wrote in October 2007 about why I want oil to surpass $120/barrel and stay there (it subsequently was above that level for a mere 6-week period in 2008).  This leads to why I claim that 'Peak Oil', far from being fatal for civilization, will actually be a topic few people even mention in 2020.  The creative destruction in energy will extend to the geopolitical landscape, where we will see many petrotyrannies much weaker in 2020 than they are today. 

8) Despite the efforts of Democrats to create a system unfavorable to advancement in healthcare and biotechnology, innovation continues on several fronts (partly due to Asian nations compensating for US shortfalls).  One disruption is robotic surgery, where incisions can be narrow instead of the customary practice of making incisions large enough for the surgeon's hands, which in turn often necessitates sawing open the sternum, pelvis, etc.  Intuitive Surgical is a company that already has a market cap of $14 Billion. 

The biggest disruption, however, is that the globalization of technology is enabling medical tourism.  In the US, about twice as much is spent on healthcare per person as in other OECD countries.  If manufacturing and software work can be offshored, so can many aspects of healthcare, which is much more expensive than manufacturing or software engineering ever became in the US.  This will correct inflated salaries in the healthcare sector, return the savings to consumers, and force innovations and systemic improvements in all OECD countries. 

9) By all accounts, the cost of genome sequencing has plunged faster than any other technology, ever (it is less clear how this was accomplished, and whether the next 4 years will see a comparable drop).  I tend to be skeptical about such eye-popping numbers, because if something became so much cheaper so quickly, yet it still didn't sweep over the world, then maybe it was not so valuable after all. 

But it is also possible that while the raw data is now available cheaply, there is not yet enough of a community that instructs people why they should get their genome sequenced, and how to use their data.  The Economist has a special report on the implications of inexpensive genome sequencing. 

10) Social media such as Facebook, Twitter, etc. are mostly inundated with the trivialities of young people, or of older people who never matured, who think they have an audience far larger than it is.  However, these mediums have been used to horizontally organize interest groups and movements for political change that know no distance barriers or boundaries. 

Blogs have shattered the hold that traditional media had on the release of information and opinions, and the revenues of newspapers, magazines, and network television have tumbled.  The Tea Party movement in the US was started by a very small number of people, but has surged with a momentum that has reshaped the American landscape in just one year, and, irony of ironies, the Tea Party is spreading to overtaxed Britain.  The next Iranian revolution will not only use Twitter and YouTube, but will have millions of collaborators outside of Iran, operating out of their own homes. 

11) The financial services industry is overdue for disruption (this was the cover story of Wired Magazine for March 2010), as its structure was established in an era when the computing power needed to process transactions was expensive.  Today, several startups are seeking to change the way money is transacted and eliminate the cut that incumbent companies take.  Major financial services companies will see shrinkages in revenue, and will have to innovate and create new value-added services, or accept a diminishment. 

Aside from this effectively being a sizable 'tax cut' for the economy, this is particularly valuable as a complement to mobile Internet penetration in poorer regions, as the capacity to conduct web micro-transactions without fees will be an essential element of human development.  The highly successful concept of micro-finance will be augmented when transaction fees that consumed a high percentage of these sub-$10 transactions are minimized. 

12) 3-D Printing will soon be accessible to small businesses and households.  This transforms everything from commodity consumer goods to the construction of buildings.  An individual could download a design and print it at home, rather than be restricted to only those products that can be mass produced.  It is quite possible that by 2025, construction of basic structures takes less than one-tenth the time that it does today, which, of course, will deflate the value of all existing buildings in the world at that time. 

So we see there are at least 12 ways in which our daily lives will shift considerably in just the next few years.  The typical process of creative destruction results in X wealth being destroyed, and 2X wealth being created instead, but by different people.  For each of the 12 disruptions listed, 'X' might be as much as $1 Trillion.  As a result, the US economy might be mired in a long-term situation where vanishing industries force many laid-off workers to start in new industries at the entry level, for half of their previous compensation, even as new fortunes created by the new industries cause net wealth increases.  The US could see a continuation of high unemployment combined with high productivity gains and corporate earnings growth for several years to come.  Big paydays for entrepreneurs will make the headlines frequently, right alongside stories of people who have to accept permanent 50% pay reductions.  This would be the 'new normal'. 

Income diversification is the golden rule of the early 21st century.  Those that fail to create and maintain multiple streams of income are imperiling themselves.  The hottest career one can embark on, which will never be obsolete, is that of the serial entrepreneur. 

 

June 01, 2010 in Computing, Energy, Technology


The Publishing Disruption

What a unique thing a book is.  Made from a tree, it has a hundred or more flexible pages of written text, enabling it to hold a large amount of information in a very small volume.  Before paper, clay tablets, sheepskin parchment, and papyrus were all used to store information with far less efficiency.  Paper itself was once so rare and valuable that the Emperor of China had guards stationed around his paper possessions. 

Before the invention of the printing press, books were written by hand, and few outside of monasteries knew how to read.  There were only a few thousand books in all of Europe in the 14th century.  Charlemagne himself took great effort to learn how to read, but never managed to learn how to write, which still put him ahead of most kings of the time, who were generally illiterate. 

But with the invention of the printing press by Johannes Gutenberg in the mid-15th century, it became possible to make multiple copies of the same book, and before long, the number of books in Europe increased from thousands to millions. 

Fast forward to the early 21st century, and books are still printed by the millions.  Longtime readers of The Futurist know that I initially had written a book (2001-02), and sought to have it published the old-fashioned way.  However, the publishing industry, and literary agents, were astonishingly low-tech.  They did not use email, and required queries to be submitted via regular mail, with a self-addressed, stamped envelope included.  So I had to pay postage in both directions, and wait several days for a round trip to hear their response.  And this was just the literary agents.  The actual publishing house, if it decided to accept your book, would still take 12 months to produce and distribute it even after the manuscript was complete.  Even then, royalties would be 10-15% of the retail price.  This prospect did not seem compelling to me, and I chose to parse my book into the blog you see before you. 

The refusal by the publishing industry to use email and other productivity-enhancing technologies as recently as 2003 kept their wages low.  Editors always moaned that they worked 60 hours a week just to make $50,000 a year, the same as they made in 1970.  My answer to them is that they have no basis to expect wage increases without increasing their productivity through technology. 

In the meantime, self-publishing technologies emerged to bypass the traditional publishers' role as arbiters of what can become a book and what cannot.  From Lulu to iUniverse to BookSmart, any individual can produce a book, with copies that can be printed on demand.  Authors seeking to go it alone without being saddled with a huge upfront inventory production and storage burden, or who are otherwise marketing to only a tiny audience, have flourished.  But print-on-demand is not the true disruption - that was yet to come. 

The Amazon Kindle launched in late 2007 at the high price of $400.  Within 2 years, a substantially more advanced Kindle 2 was available for a much lower price of $260, alongside competing readers from several other companies.  Many people feel that the appeal of holding a physical book in our hands cannot be replaced by a display screen, and take a cavalier attitude towards dismissing e-readers.  The tune changes upon learning that the price of a book on an e-reader is just a third of what the paper form at a brick-and-mortar bookstore, with sales tax, would cost.  Market research firm iSuppli estimates that 5 million e-readers were sold in 2009, and another 12 million will sell in 2010.  Amazon estimates that over one-third of its book sales are now through the Kindle, greatly displacing sales of paper books. 

Imagine what happens when the Kindle and other e-readers cost only $100.  Brick and mortar bookstores will consolidate to fewer premises, extract profits mainly from picture-heavy books and magazines, and step up their positioning as literary coffeehouses.  Many employees and affiliates of the publishing industry will see their functions eliminated as part of the productivity gains.  College students forced to pay $100 for a textbook produced in small quantities will now pay only $20 for an e-reader version.  But even this is not the ultimate endgame of disruption. 

Intel now has a reader for the visually impaired that scans text from paper books, and reads it aloud in an acceptable audio voice.  It is reported that with practice, an audio rate of 250 words per minute can be coherent.  While the reader costs $1500, and requires a user to turn pages manually, it is only a matter of time before the reader's price drops and more and more books become available as text files similar to those contained in e-readers like the Kindle.  There are already books available as free downloads of text files under the ironically named Project Gutenberg. 

Therein lies the crescendo of disruption.  The Intel Reader is a $1500 device for the visually impaired, but will soon evolve into a technology that interfaces with Kindle-type e-readers and chatters off e-books at 250 words/minute, from a full e-book library that is vastly larger than any traditional collection of audiobooks.  A 90,000-word novel could be recited in just 6 hours, enabling a user to imbibe the whole book during a single coast-to-coast flight, even if the lights are dimmed.  People could further choose to preserve their vision at home, devouring book after book with the lights out.  As the technology advances further, speech technology will allow the user to select the voice in which to be read to, perhaps even his own voice. 

Thus, even though few people have noticed the early murmurs, we can predict that the next 3 years will see the biggest transformation in book production and consumption since the days of Johannes Gutenberg.  That is a true demonstration of both the Accelerating Rate of Change and The Impact of Computing.   

December 13, 2009 in Computing, Technology

Mobile Broadband Surge : A Prediction Follow Up

Some of you may recall that over three and a half years ago, on February 4, 2006, I predicted that by 2013, at least 900 million people in emerging nations, 80% of whom had no Internet connection in 2006, would have access to a wireless broadband connection through their cellphones.  That seemed like a bold prediction at the time.

But in the Economist, there is a special report on mobile phones in the developing world, and a chart there depicts the progress towards my prediction quite nicely.  Mobile broadband subscribers will go from nearly zero in early 2006, when the prediction was first made, to 1.4 billion by 2013 (of which 900 million can safely be assumed to be in emerging nations). 

It is often said that no other invention has done more for so many people so quickly than the mobile phone, given the large number of people who did not have even a landline phone before getting a mobile one.  However, the initial deployment of rudimentary mobile phones was just the beginning.  As 3G broadband at speeds greater than 1 Mbps spreads to a billion people with no prior Internet access, the entire nature of their existence is transformed.  As per the second chart in the Economist report, the GDP boost from broadband Internet penetration is far higher than the already-impressive boost we have seen from simple mobile access, so we can expect another, stronger wave of human advancement as mobile broadband diffuses.  Simultaneously, the entire nature of the Internet is also transformed.  Think of the massive developmental catalyst such a rapid technological diffusion would be.  Child literacy will rise as the educational materials of the full Internet become available in places where no libraries exist, making near-universal child literacy a reality within a decade.  Agricultural and fishery supply chains will shorten tremendously.  Disaster relief will become far easier, as will the apprehension of criminals.  The upliftment that once appeared to be a process of decades will now happen in mere years.   

We can thus proceed to the next prediction, which is that by 2020, 4 billion people will have 4G wireless broadband access on their handheld mobile phone, at speeds exceeding 100 Mbps.  In other words, a landline speed that even wealthy Americans could not have in 2005 will be available wirelessly to billions of the very poorest people just 15 years later, in 2020.  Imagine that. 

October 05, 2009 in Technology

Video Conferencing : A Cascade of Disruptions

Almost 3 years ago, in October of 2006, I first wrote about Cisco's TelePresence technology, which had just launched at that time, and how video conferencing virtually indistinguishable from reality was eventually going to sharply increase the productivity and living standards of corporate employees. 

At that time, Cisco and Hewlett-Packard both launched full-room systems that cost over $300,000 per room.  Since then, there has not been any price drop from either company, which is unheard of for a system with components subject to Moore's Law rates of price declines.  This indicates that market demand has been high enough for both Cisco and HP to sustain pricing power and improve margins.  Smaller companies like LifeSize, Polycom, and Teleris have lower-end solutions for as little as $10,000 that have also been selling briskly, but these have not yet dragged down the Cisco/HP price tier.

This article in the San Jose Mercury News indicates what sort of savings these two corporations have earned by use of their own systems :

In a trend that could transform the way companies do business, Cisco Systems has slashed its annual travel budget by two-thirds — from $750 million to $240 million — by using similar conferencing technology to replace air travel and hotel bills for its vast workforce.

Likewise, Hewlett-Packard says it sliced 30 percent of its travel expenses from 2007 to 2008 — and expects even better results for 2009 — in large part because of its video conference technology.

If Cisco can chop its travel expenses by two-thirds, and save $500 million per year (which increases their annual profit by a not-insignificant 6-10%), then every other large corporation can save a similar magnitude of money.  For corporations with very narrow operating margins, the savings could have a dramatic impact on operating earnings, and therefore on the stock price.  The Fortune 500 alone (excluding airline and hotel companies) could collectively save $100 billion per year, in a wave set to begin as soon as either Cisco or HP drops the price of their solution, which may happen in a matter of months.  We will soon see that for every $20 that corporations used to spend on air travel and hotels, they will instead spend only $1 on videoconferencing.  This is a gigantic gain in enterprise productivity. 
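
For readers who want to check the arithmetic, here is a minimal sketch in Python using only the figures quoted above; the 'implied profit base' is simply the arithmetic consequence of the 6-10% claim, not a reported number.

```python
# Back-of-the-envelope check on the travel-savings figures quoted above.
old_travel_budget = 750e6     # Cisco's annual travel spend before telepresence ($)
new_travel_budget = 240e6     # after replacing most trips with video conferencing ($)

savings = old_travel_budget - new_travel_budget      # ~$510 million per year
cut_fraction = savings / old_travel_budget           # ~68%, i.e. roughly two-thirds

# The article says this lifts annual profit by 6-10%; back out the implied profit base:
implied_profit_low  = savings / 0.10    # ~$5.1B
implied_profit_high = savings / 0.06    # ~$8.5B

print(f"Annual savings: ${savings/1e6:.0f} million ({cut_fraction:.0%} of the old travel budget)")
print(f"Implied annual profit base: ${implied_profit_low/1e9:.1f}B - ${implied_profit_high/1e9:.1f}B")
```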

Needless to say, high-margin airline revenue from flights between major business centers (such as San Francisco-Taipei or New York-London) will be slashed, and airlines will have to consolidate to fewer flights, making schedules even less flexible for business travel and losing even more passengers.  Hotels will have to consolidate, and taxis and restaurants in business hubs will suffer as well.  But these are merely the most obvious of the disruptions.  What is even more interesting are the less obvious ripple effects that only manifest a few years later :

1) Employee Time and Hassle : Anyone who has had to travel to another continent for a Mon-Fri workweek trip knows that the process of taking a taxi to the airport, waiting 2 hours at the airport, the flight itself, and the ride to the final destination consumes most of the weekend on either side of the trip.  Many senior executives log over 200,000 flight miles per year.  This is a huge drag on personal time and quality of life.  Travel on weekdays consumes productive time that the employer could otherwise benefit from, which for senior executives could be worth thousands of dollars per hour.  Furthermore, in an era of superviruses, we have already seen SARS, bird flu, and swine flu emerge as global pandemic threats within the last few years.  A reduction in business travel will slow the rate at which such viruses spread across the globe and make quarantines less disruptive for business (although tourist travel and the remaining business travel would still carry such viruses). 

2) Real Estate Prices in Expensive Areas : Home prices in Manhattan and Silicon Valley are presently 4X or more higher than a home of the same square footage 80 miles away.  By 2015, the single-screen solution that Cisco sells for $80,000 today may cost as little as $2000, and those from LifeSize and others may be even cheaper, so hosting meetings with colleagues from a home office might be as easy as running a conference call.  A good portion of employees who have small children may find it possible to do their jobs in a manner that requires them to go to their corporate office only once or twice a week.  If even 20% of employees choose to flee the high-cost housing near their offices, real estate prices in Manhattan and Silicon Valley will deflate significantly.  While this is bad news for owners of real estate in such areas, it is excellent news for new entrants, who will see an increase in their purchasing power.  Best of all, working families may be able to afford children that they presently cannot. 

3) Passenger Aviation Technological Leap : Airlines and aircraft manufacturers have little recourse but to respond to these disruptions with innovations of their own, of which the only compelling possibility is to make each journey take far less time.  It is apparent that there has been little improvement in the speed of passenger aircraft in the last 40 years.  J. Storrs Hall at the Foresight Institute has an article up with a chart that shows the improvement, and then total flattening, of passenger airliner speeds.  The costs of flying just below Mach 1 versus above it differ greatly, by as much as 3X, which accounts for the sudden halt in speed gains just below the speed of sound after the early 1960s.  However, the technologies of supersonic aircraft (which exist, of course, in military planes) are dropping in price, and it is possible that suborbital passenger flight could be available for the cost of a first-class ticket by 2025.  The Ansari X Prize contest and SpaceShipTwo have already demonstrated early incarnations of what could scale up to larger planes.  This will not reverse the video-conferencing trend, of course, but it will make the airlines more competitive for those interactions that have to be in person. 

So we are about to see a cascade of disruptions pulsate through the global economy.  While in 2009, you may have no choice but to take a 14-hour flight (each way) to Asia, in 2025, the similar situation may present you with a choice between handling the meeting with the videoconferencing system in your home office vs. taking a 2-hour suborbital flight to Asia. 

This, my friends, is progress. 

August 11, 2009 in Accelerating Change, Computing, Economics, Technology

The Next Big Thing in Entertainment, A Half-Time Update

On April 1, 2006, I wrote a detailed article on the revolutionary changes that were to occur in the concept of home entertainment by 2012 (see Part I and Part II of the article).  Now, in 2009, half of the time within the six-year span between the original article and the prediction has elapsed.  Of course, given the exponential nature of progress, much more happens within the second half of any prediction horizon relative to the first half. 

The prediction issued in 2006 was:

Video Gaming (which will no longer be called this) will become a form of entertainment so widely and deeply partaken in that it will reduce the time spent on watching network television to half of what it is (in 2006), by 2012.

The prediction rested on several points detailed in the original article, which in combination would lead to the predicted outcome.  The progress as of 2009 on each of these points is as follows :

1) Video game graphics continue to improve : Note the progress of graphics at 10-year intervals starting from 1976.  Projecting the same trend, 2012 will feature many games with graphics that rival those of CGI films, whose own progress can be charted by comparing Pixar's 'Toy Story' from 1995 to 'Up' from 2009.  See this demonstration from the 2009 game 'Heavy Rain', which arguably exceeds the graphical quality of many CGI films from the 1990s.   

The number of polygons rendered per square inch of screen is closely tied to The Impact of Computing, and can only rise steadily.  The 'uncanny valley' is a hurdle that designers and animators will take a couple of years to overcome, but overcoming this barrier is inevitable as well. 

2) Flat-screen HDTVs reach commodity prices : This has already happened, and prices will continue to drop so that by 2012, 50-inch sets with high resolution will be under $1000.  A thin television is important, as it clears the room to allow more space for the movement of the player.  A large size and high resolution are equally important, in order to create an immersive visual experience. 

We are rapidly trending towards LED and Organic LED (OLED) technologies that will enable TVs to be less than one centimeter thick, with ultra-high resolution. 

3) Speech and motion recognition as control technologies : When the original article was written on April 1, 2006, the Nintendo Wii was not yet available in the market.  But as of June 2009, 50 million units of the Wii have sold, and many of these customers did not own any game console prior to the Wii. 

Traditional handheld controllers, despite being used by hundreds of millions of people for three decades, are very limited in this regard.  The more natural the interaction a user can have with a game, the more immersive the game becomes to the human senses.  See this demonstration from Microsoft of their 'Project Natal' interface technology, due for release in 2010. 

Furthermore, haptic technologies have made great strides, as seen in the demonstration videos over here.  Needless to say, the possibilities are vast. 

4) More people are migrating away from television, and towards games :  Television viewership is plummeting, particularly among the under-50 audience, as projected in the original 2006 article.  Fewer and fewer television programs of any quality are being produced, as creative talent continues to leak out of television network studios.  At the same time, World of Warcraft has 11 million subscribers, and as previously mentioned, the Wii has 50 million units in circulation. 

There are only so many hours of leisure available in a day, and Internet surfing, movies, and video games are all more compelling than the ever-declining quality of television offerings.  Children have already moved away from television, and the trend will creep up the age scale.

5) Some people can earn money through games : There are an increasing number of ways in which avid players can earn real money from activities within a game.  From trading items to selling characters, this market was estimated at over $1 billion in 2008, and is growing.  Highly skilled players already earn thousands of dollars per year this way, and as more participants join through the more advanced VR experiences described above, a group of people will be able to earn a full-time living in these VR worlds.  This will become a viable form of entrepreneurship, just as eBay and Google Ads support entrepreneurial ecosystems today. 

Taking all 5 of these points in combination, the original 2006 prediction appears to be on track.  By 2012, hours spent on television will be half of what they were in 2006, with sports and major live events being the only forms of programming that retain their audience. 

Overall, the prediction seems to be well on track.  Disruptive technologies are in the pipeline, and there is plenty of time for each of these technologies to combine into unprecedented new applications.  Let us see what the second half of the time interval, between now and 2012, delivers. 

July 19, 2009 in Accelerating Change, Computing, Technology, The Singularity

The Impact of Computing : 78% More per Year, v2.0

Anyone who follows technology is familiar with Moore's Law and its many variations, and has come to expect the price of computing power to halve every 18 months.  But many people don't see the true long-term impact of this beyond the need to upgrade their computer every three or four years.  To not internalize this more deeply is to miss financial opportunities, grossly mispredict the future, and be utterly unprepared for massive, sweeping changes to human society.  Hence, it is time to update the first version of this all-important article that was written on February 21, 2006.

Today, we will introduce another layer to the concept of Moore's Law-type exponential improvement. Consider that on top of the 18-month doubling times of both computational power and storage capacity (an annual improvement rate of 59%), both of these industries have grown by an average of approximately 12% a year for the last fifty years. Individual years have ranged between +30% and -12%, but let us say that the trend growth of both industries is 12% a year for the next couple of decades.

So, we can conclude that a dollar gets 59% more power each year, and 12% more dollars are absorbed by such exponentially growing technology each year. If we combine the two growth rates to estimate the rate of technology diffusion simultaneously with exponential improvement, we get (1.59)(1.12) = 1.78

The Impact of Computing grows at a scorching pace of 78% a year.
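
For those who want to verify the arithmetic, here is a minimal sketch in Python using only the two growth rates defined above; the 10-year compounding at the end is just the same multiplication carried forward.

```python
# The two growth rates described above, combined multiplicatively.
power_per_dollar_growth = 0.59   # Moore's Law-type improvement: 59% more power per dollar per year
dollars_spent_growth    = 0.12   # industry revenue growth: ~12% more dollars absorbed per year

impact_growth = (1 + power_per_dollar_growth) * (1 + dollars_spent_growth) - 1
print(f"Impact of Computing grows {impact_growth:.0%} per year")    # ~78%

# Compounded over a decade, the same arithmetic gives roughly a 320-fold increase.
decade_multiple = (1 + impact_growth) ** 10
print(f"10-year multiple: {decade_multiple:.0f}x")
```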

Sure, this is a very imperfect method of measuring technology diffusion, but many visible examples of this surging wave present themselves.  Consider the most popular television shows of the 1970s, where the characters had all the household furnishings and electrical appliances that are common today, except for anything with computational capacity. Yet, economic growth has averaged 3.5% a year since that time, nearly doubling the standard of living in the United States since 1970. It is obvious what has changed during this period, to induce the economic gains.

We can take the concept even closer to the present.  Among 1990s sitcoms, how many plot devices would no longer exist in the age of cellphones and Google Maps?  Consider the episode of Seinfeld entirely devoted to the characters not being able to find their car, or each other, in a parking structure (1991).  Or this legendary bit from a 1991 episode set in a Chinese restaurant.  These situations are simply obsolete in the era of cellphones.  This situation (1996) would be obsolete in the era of digital cameras, while the 'Breakfast at Tiffany's' situation would be obsolete in an era of Netflix and YouTube. 

In the 1970s, there was virtually no household product with a semiconductor component.  In the 1980s, many people bought basic game consoles like the Atari 2600, had digital calculators, and purchased their first VCR, but only a fraction of the VCR's internals, maybe 20%, consisted of exponentially deflating semiconductors, so VCR prices did not drop that much per year.  In the early 1990s, many people began to have home PCs.  For the first time, a major, essential home device was pegged to the curve of 18-month halvings in cost per unit of power.  In the late 1990s, the PC was joined by the Internet connection and the DVD player. 

Now, I want everyone reading this to tally up all the items in their home that qualify as 'Impact of Computing' devices, which is any hardware device where a much more powerful/capacious version will be available for the same price in 2 years.  You will be surprised at how many devices you now own that did not exist in the 80s or even the 90s.

Include : Actively used PCs, LCD/Plasma TVs and monitors, DVD players, game consoles, digital cameras, digital picture frames, home networking devices, laser printers, webcams, TiVos, Slingboxes, Kindles, robotic toys, every mobile phone, every iPod, and every USB flash drive.  Count each car as 1 node, even though modern cars may have $4000 of electronics in them.

Do not include : Tube TVs, VCRs, film cameras, individual video games or DVDs, or your washer/dryer/oven/clock radio just for having a digital display, as the product is not improving dramatically each year. 
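
For readers who prefer to automate the tally, here is a minimal sketch in Python; the household inventory below is entirely hypothetical and only illustrates the counting rules above.

```python
# A toy tally of 'Impact of Computing' nodes in a hypothetical household,
# following the include/exclude rules above (all item counts are made up).
household = {
    "PC": 2, "LCD TV": 1, "DVD player": 1, "game console": 1,
    "digital camera": 1, "home router": 1, "laser printer": 1,
    "mobile phone": 3, "iPod": 2, "USB flash drive": 4, "car": 1,
}
total_nodes = sum(household.values())
print(f"Impact of Computing nodes: {total_nodes}")   # 18 in this example
```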

How many 'Impact of Computing' Nodes do you currently own?
Under 10
11-15
16-20
21+
  

If this doesn't persuade people of the exponentially accelerating penetration of information technology, then nothing can.

To summarize, the number of devices in an average home that are on this curve, by decade :

1960s and earlier : 0

1970s : 0-1

1980s : 1-2

1990s : 3-4

2000s : 6-12

2010s : 15-30

2020s : 40-80

The average home of 2020 will have multiple ultrathin TVs hung like paintings, robots for a variety of simple chores, VR-ready goggles and gloves for advanced gaming experiences, sensors and microchips embedded into clothing, $100 netbooks more powerful than $10,000 workstations of today, surface computers, 3-D printers, intelligent LED lightbulbs with motion-detecting sensors, cars with features that even luxury models of today don't have, and at least 15 nodes on a home network that manages the entertainment, security, and energy infrastructure of the home simultaneously. 

At the industrial level, the changes are even greater.  Just as telephony, photography, video, and audio did before them, the medicine, energy, and manufacturing industries will become information technology industries, and will thus advance at the rate of the Impact of Computing.  The economic impact of this is staggering.  Refer to the Future Timeline for Economics, particularly the 2014, 2024, and 2034 entries.  Deflation has traditionally been a bad thing, but the Impact of Computing has introduced a second form of deflation.  A good one. 

It is true that from 2001 to 2009, the US economy has actually shrunk in size, if measured in oil, gold, or Euros.  To that, I counter that every major economy in the world, including the US, has grown tremendously if measured in Gigabytes of RAM, Terabytes of storage, or MIPS of processing power, all of which have fallen in price by about 40X during this period.  One merely has to select any suitable product, such as a 42-inch plasma TV, to see how quickly purchasing power has risen.  What took 500 hours of median wages to purchase in 2002 now takes just 40 hours of median wages in 2009.  Pessimists counter that computing is too small a part of the economy for this to be a significant prosperity elevator.  But let's see how much of the global economy is devoted to computing relative to oil (let alone gold).
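
For those who want the implied annual rates behind these figures, here is a minimal sketch in Python; the hour counts and the ~40X figure are the ones quoted above, and nothing else is assumed.

```python
# Implied annual rates behind the purchasing-power figures quoted above.
hours_2002, hours_2009 = 500, 40          # median-wage hours to buy a 42-inch plasma TV
years = 2009 - 2002
annual_gain = (hours_2002 / hours_2009) ** (1 / years) - 1
print(f"Purchasing power for the TV rose ~{annual_gain:.0%} per year")    # ~43%

# The ~40X fall in the price of RAM/storage/MIPS over 2001-2009 implies:
annual_power_per_dollar = 40 ** (1 / 8) - 1
print(f"~{annual_power_per_dollar:.0%} more per dollar per year")         # ~59%, the Moore's Law-type rate
```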

Oil at $50/barrel amounts to about $1500 Billion per year out of global GDP.  When oil rises, demand falls, and we have not seen oil demand sustain itself to the extent of elevating annual consumption to more than $2000 Billion per year.

Semiconductors are a $250 Billion industry and storage is a $200 Billion industry.  Software, photonics, and biotechnology are deflationary in the same way as semiconductors and storage, and these three industries combined are another $500 Billion in revenue, but their rate of deflation is less clear, so let's take just half of this number ($250 Billion) as suitable for this calculation.

So $250B + $200B + $250B = $700 Billion that is already deflationary under the Impact of Computing.  This is about 1.5% of world GDP, and is a little under half the size of global oil revenues. 
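
A minimal sketch of the tally above and of the comparison to oil revenue, in Python; all figures are the ones quoted in the preceding paragraphs, and the world-GDP number is simply the value implied by the ~1.5% share.

```python
# Tallying the already-deflationary sectors listed above, and comparing them to oil revenue.
semiconductors = 250e9
storage        = 200e9
deflationary_half_of_rest = 250e9    # half of the ~$500B software/photonics/biotech revenue

deflationary_total = semiconductors + storage + deflationary_half_of_rest   # $700B
world_gdp_implied  = deflationary_total / 0.015      # ~$47T, implied by the ~1.5% share above

print(f"Deflationary total: ${deflationary_total/1e9:.0f}B")
print(f"vs. oil at $50/barrel (~$1,500B/yr): {deflationary_total/1500e9:.0%}")   # a little under half
```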

The impact is certainly not small, and since the growth rate of these sectors is higher than that of the broader economy, what about when it becomes 3% of world GDP?  5%?  Will this force of good deflation not exert influence on every set of economic data?  At the moment, it is all but impossible to get major economics bloggers to even acknowledge this growing force.  But over time, it will be accepted as a limitless well of rising prosperity. 

12% more dollars spent each year, and each dollar buys 59% more power each year.  Combine the two and the impact is 78% more every year. 

Related :

A Future Timeline for Economics

Economic Growth is Exponential and Accelerating

Are You Acceleration Aware?

Pre-Singularity Abundance Milestones

The Technological Progression of Video Games

 

April 20, 2009 in Accelerating Change, Computing, Core Articles, Technology, The Singularity

Tags: computing, future, Moore's Law

Nanotechnology : Bubble, Bust, ....Boom?

All of us remember the dot-com bubble, the crippling bust that eventually was a correction of 80% from the peak, and the subsequent moderated recovery.  This was easy to notice as there were many publicly traded companies that could be tracked daily.

I believe that nanotechnology underwent a similar bubble, peaking in early 2005, and has been in a bust for the subsequent four years.  Allow me to elaborate.

By 2004, major publications were talking about nanotech as if it were about to surge.  Lux Capital was publishing a much-anticipated annual 'Nanotech Report'.  There was even a company by the name of NanoSys that was preparing for an IPO in 2004.  BusinessWeek even had an entire issue devoted to all things nanotech in February 2005.  We were supposed to get excited. 

But immediately after the BusinessWeek cover, everything seemed to go downhill.  Nanosys did not conduct an IPO, nor did any other company.  Lux Capital only published a much shorter report by 2006, and stopped altogether in 2007 and 2008.  No other major publication devoted an entire issue to the topic of nanotechnology.  Venture capital flowing to nanotech ventures dried up.  Most importantly, people stopped talking about nanotechnology altogether.  Not many people noticed this because they were too giddy about their home prices rising, but to me, this shriveling of nano-activity had uncanny parallels to prior technology slumps. 

The rock bottom was reached at the very end of 2008.  Regular readers will recall that on January 3, 2009, I noticed that MIT Technology Review had conspicuously omitted a section titled 'The Year in Nanotech' from their year-end roundup of innovations for the outgoing year.  I could not help but wonder why they stopped producing a nanotech roundup altogether, and I subsequently concluded that we were in a multi-year nanotech winter, and that the MIT Technology Review omission marked the lowest point.

But there are signs that nanotech is on the brink of emerging from its chrysalis.  The university laboratories are humming again, promising to draw the genie out of its magic lamp.  In just the first 12 weeks of 2009, carbon nanotubes, after staying out of the news for years, have suddenly been making headlines.  Entire 'forests' of nanotubes are now being grown, and can be used for a variety of previously unrelated applications.  Beyond this, there is suddenly activity in nanotube electronics, light-sensitive nanotubes, nanotube superbatteries, and even nanotube muscles that are as light as air, as flexible as rubber, but stronger than steel.  And all this is just nanotubes.  Nanomedicine, nanoparticle glue, and nanosensors are also joining the party.  All this bodes well for the prospect of catching up to where we currently should be on the trendline of molecular engineering, and enabling us to build what was previously impossible. 

The recovery out of the four-year nanotech winter could not be happening at a better time.  Nanotech is thus set to be one of the four sectors of technology (the others being solar energy, surface computing, and wireless data) that pull the global economy into its next expansion starting in late 2009. 

Related :

Milli, Micro, Nano, Pico

March 23, 2009 in Accelerating Change, Nanotechnology, Technology, The Singularity

Tags: nanotech

The End of Rabbit Ears, a Billion more Broadband Users - Part II

Three years ago, I wrote about the end of analog television signals broadcast through the air, scheduled for February 17, 2009.  It was one of the earliest articles here on The Futurist, and we have now arrived at the date when this transition takes place. 

In the last 3 years, we have seen the Apple iPhone (now in a 2.0 version), as well as broad deployment of 3G service to cellular phones.  Neither was available in February 2006.  But these are small increments compared to what access to the previously unavailable 700 MHz spectrum will give rise to.  The auction for the spectrum fetched $19.6 Billion, indicating how valuable this real estate is. 

Signals sent at this frequency can easily pass through walls, and over far greater distances than signals in higher frequency bands.  More importantly, since wireless is the dominant (and often only) means of Internet access in many developing countries, the innovations designed to exploit the 700 MHz band in the US will inevitably be modified to supercharge wireless Internet access in India, Latin America, and Africa.  An additional 1 billion broadband Internet users in developing regions will be connected by 2013, as predicted in Part I of this article.  There are few technologies that can help pull people out of poverty so quickly. 

In the depths of a recession, the events that spark the next expansion arise almost unnoticed.  Within 24 months of this event, there will be a vast array of exciting wireless products and services for all of us to enjoy.  Remember that today, despite the economy being in its darkest hour, was the day that it began. 

(crossposted on TechSector)

February 16, 2009 in Technology

Tags: 3G, Internet, spectrum, wireless

Solar Power's Next 5 Game-Changing Technologies

I have written before about the reduction in the price of solar energy, and how each successive price decline would deliver a new generation of adoption.  Now, we can examine some of the specific technologies that are driving the race to affordability, and that will enable solar energy to be one of the only candidate technologies to lead an economic recovery from the present downturn. 

Popular Mechanics has a roundup of five new areas of innovation in harnessing energy from the Sun.  All five promise to make solar energy competitive with the cheapest sources of fossil-fuel energy, and many of them could work in combination with each other.  The five technologies are the following :

(the five solar technologies, shown as a graphic in the Popular Mechanics roundup)

Now, many of these technologies were invented before 2008, so this roundup does not alter the fact that 2008 was a year of very low technological innovation.  However, all these innovations bode very well for a tremendous boom in solar power starting around 2010.  Each technology has one or more startup companies mentioned in its section.  The industry consensus is that solar power becomes competitive with conventional sources of power generation by around 2011, varying by the local cost of electricity and the solar intensity of a particular region (i.e., Arizona becomes cost-competitive for solar before British Columbia does).

The greatest benefits, however, will accrue to emerging markets.  Many poorer countries not only have electricity rates that are much higher than in the US, but, being at more southern latitudes, also receive greater solar intensity to begin with.  Breakeven in these markets arrives even sooner than it does for the wealthy countries at more northern latitudes.  Many villages in India, Vietnam, Iraq, Egypt, and Indonesia will go from having no electricity to having photovoltaic electricity. 

Related :

Solar Energy Cost Curve

The Solar Revolution is Near

A Future Timeline for Energy

(crossposted on TechSector)

February 03, 2009 in Energy, Nanotechnology, Technology

Tags: greentech, photovoltaic, solar

Why Government is Set to Constrict Silicon Valley

Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded — here and there, now and then — are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.

- Robert Heinlein

The secret sauce of Silicon Valley is the tradition of leaving established companies to start or join new ones, secure funding from venture capitalists, build the company to a suitable size, and then either float or sell the company for a windfall to the founders and early employees.  The incentive to continue this practice is the engine that keeps the fire of human technological innovation alive. 

Silicon Valley's unique ecosystem has so far been nearly impossible to eclipse.  The combination of research universities, the best and brightest immigrants from India and China, a culture of entrepreneurship, and a nearly perfect climate has kept the competitors to Silicon Valley at bay.  In the 1990s, the prevalent belief was that the high cost of living in Silicon Valley would enable Austin, Dallas, Seattle, and Phoenix to attract technology workers and cultivate their own tech sectors.  This did not happen, as the Silicon Valley ecosystem just had too strong of a gravitational pull. 

This, however, should not be an excuse for complacency, or a belief that Silicon Valley is a bottomless supply of tax revenue.  There are four steps that would make Silicon Valley prohibitively inhospitable to the formation of new ventures. Any one of these by itself would not be enough to dent the might of the Silicon Valley engine, but all four combined would exceed the breaking point.  The first two of these four steps have already happened, and the final two are set to happen, barring direct intervention. 

The four steps are :

1) Sarbanes-Oxley : This attempt to reduce the risk of another Enron-style fraud has inflicted a cost on the US economy greater than 100 Enron collapses.  In Silicon Valley, the crushing costs of Sarbanes-Oxley compliance (up to $3M a year) have dried up IPOs to a trickle, as the prospect of spending money on compliance that could otherwise be spent on R&D is unappealing.  IPOs are less frequent than they were even in the early 1990s, before the bubble, and start-ups can only hope to be acquired by a larger company.  In the last 8 years, only two IPOs were large enough to be considered 'blockbusters' : Google and VMware.  This crushes the incentive to leave stable jobs to go work at a new venture. 

2) Tortuous Immigration Process : Any list of the most successful people in the history of Silicon Valley will quickly reveal that at least one third of them were born outside of the US.  In response, America has chosen to make it much harder for more such people to come here, even as the quality of life in their home countries is rising. 

While politicians pander to illegal immigrants with minimal education, they somehow refuse to make immigration easier for legal, highly-skilled immigrants who start new ventures in America.  This is significant given the fact that about half of Silicon Valley's skilled workforce is Indian or Chinese.  Many are choosing to return to their home countries in exasperation, and are advising their younger relatives that the US immigration process is so tedious that it is better to pursue their careers at home, working for Indian or Chinese branches of HP or Microsoft. 

Under current procedures, an engineer from India or China has to be on an H1-B visa for 6 years before he can get a green card.  If he changes employers during that period, he has to start the clock again.  The immigrant's spouse cannot work during this period.  Even after the green card, it takes 5 more years to become a US citizen.  Unsurprisingly, the best and brightest are deciding that this 11-year limbo is not worth it, and are returning to their home countries (eventually starting companies there rather than in Silicon Valley).  In the 1990s, Americans had not even heard of Bangalore or Suzhou. 

I have written up a detailed solution to this problem over here. 

___________________________________________________________________________

If these two factors weren't bad enough, two more negatives are about to be piled on. 

3) California State Income Taxes are Set to Rise : The budget shortfalls and underfunded pensions in California are a ticking time bomb.  CalPERS, which invests in many of the top venture capital funds that nurture the growth of start-ups in Silicon Valley, is in a shambolic state, and has to add $80 billion in assets just to meet present obligations.  The top income bracket in California is already taxed at 9.3%, and this is set to rise.  Sales taxes are also set to rise.  Due to this horrendous mismanagement worthy of a banana republic, California will soon reach a tipping point where taxes are so high as to destroy California's private sector, which until now has been the envy of the world.  It would, of course, be better to reduce CA state expenditures, but government officials have made it clear that raising taxes is their preferred course of action. 

Victor Davis Hanson explains California's black hole in more detail. 

4) Federal Income Taxes are Set to Rise : If the Bush tax cuts are allowed to expire, then from 2011 onwards, the top income bracket will be taxed at 39.6% rather than the current 35%.  Here, too, the concept of reducing expenditures is not palatable to Washington decision-makers.  While this affects the entire US equally, when it is combined with the increase in California state tax, the combined marginal tax rate in California rises by several percentage points, possibly to well above 50%.

The danger here is that each of these factors by itself is not life-threatening.  But all four of them in combination are deadly.  So on top of the difficulty of conducting an IPO, and the brain drain out of Silicon Valley back to Asia, if the financial windfall that a worker receives after his startup makes a successful exit is taxed at a grand total of 50-55%, fewer and fewer people will aspire to toil away for years in a startup.  As a result, fewer startups will form in Silicon Valley; they will form instead in Bangalore, Shanghai, and Taipei. 

Furthermore, after these forces have been in effect for a few years, a simple reversal of the higher tax rates, the dysfunctional immigration policy, and Sarbanes-Oxley will not restore Silicon Valley to its prior grandeur.  The technology centers in Asia will have achieved critical mass by then, and Silicon Valley will have permanently lost its exclusivity.  It would never recover the dominance it once had. 

Silicon Valley will be reduced to a location that still hosts the headquarters of HP, Intel, Cisco, and Google, but 90% of the employees of these corporations will be overseas, and startups will be rare.  Silicon Valley will effectively become like Cleveland or Pittsburgh, which even today host the headquarters of more than 20 Fortune 500 corporations each, but still have a lower population than they each had in 1960, and cannot attract new young people to come and live there.  Cleveland and Pittsburgh are still functioning societies, of course, but their economic vibrancy is irretrievably dead. 

This bleak outlook can certainly be reversed if prompt action is taken now.  Sadly, the current path is one that is set to have a smothering effect on Silicon Valley. 

(crossposted on Techsector)

January 25, 2009 in China, Economics, India, Political Debate, Politics, Technology

Tags: immigration, silicon valley, taxes

2008 Technology Breakthrough Roundup

Each year, I post a roundup of technology breakthroughs for that year from the MIT Technology Review, and I now present the 2008 edition. 

2008 was a year of unusually low technological innovation.  This is not merely the byproduct of the economic recession, as some forms of innovation actually quicken during a recession.  Furthermore, the innovations from 2006 and 2007 (linked below) showed very little additional progress in 2008, except in the field of energy.  This also confirms my observation from February 2008 that technology diffusion appears to be in a lull. 

The innovations in 2008 are categorized below : 

The Year in Computing

The Year in Robotics

The Year in Biomedicine

The Year in Materials

What is conspicuously absent is any article titled 'The Year in Nanotechnology'.  Both 2006 and 2007 had such articles, but the absence of a 2008 version speaks volumes about how little innovation took place in 2008.  The entire field of nanotechnology was lukewarm. 

Most of the innovations in the articles above are in the laboratory phase, which means that about half will never progress enough to make it to market, and those that do will take 5 to 15 years to directly affect the lives of average people (remember that the laboratory-to-market transition period itself continues to shorten in most fields). 

Furthermore, The Wall Street Journal has its own innovation awards for 2008, but these merely confirm that 2008 was a poor year for innovation.  For example, the Tata Nano was chosen in the WSJ article, yet it will not be available to consumers until mid-2009.  Let's hope 2009 has more genuine innovations. 

Into the future we continue, where 2009 awaits....

Related :

2007 Technology Breakthrough Roundup

2006 Technology Breakthrough Roundup

January 03, 2009 in Science, Technology

Tags: innovation, invention, technology

Pre-Singularity Abundance Milestones

I am of the belief that we will experience a Technological Singularity around 2050 or shortly thereafter.  Many top futurists arrive at prediction dates between 2045 and 2075.  The bulk of Singularity debate revolves not so much around 'if' or even 'when', but rather 'what' the Singularity will appear like, and whether it will be positive or negative for humanity.

To be clear, some singularities have already happened.  To non-human creatures, a technological singularity that overhauled their ecosystem already happened over the course of the 20th century.  Domestic dogs and cats are immersed in a singularity where most of their surroundings surpass their comprehension.  Even many humans have experienced a singularity - elderly people in poorer nations make no use of any of the major technologies of the last 20 years, except possibly the cellular phone.  However, the Singularity that I am talking about has to be one that affects all humans, and the entire global economy, rather than just humans who are marginal participants in the economy.  By definition, the real Technological Singularity has to be a 'disruption in the fabric of humanity'. 

In the period between 2008 and 2050, there are several milestones one can watch for in order to see if the path to a possible Singularity is still being followed.  Each of these signifies a previously scarce resource becoming almost infinitely abundant (much like paper today, which was a rare and precious treasure centuries ago), or a dramatic expansion in human experience (as the telephone, airplane, and Internet have been) to the extent that it can even be called a transhuman experience.  The following is a random selection of milestones with their anticipated dates; a rough sketch of how the cost-per-performance dates can be extrapolated follows the lists. 

Technological :

Hours spent in videoconferencing surpass hours spent in air travel/airports : 2015

Video games with interactive, human-level AI : 2018

Semi-realistic fully immersive virtual reality : 2020

Over 5 billion people connected to the Internet (mostly wirelessly) at speeds greater than 10 Mbps : 2022

Over 30 network-connected devices in the average household worldwide : 2025

1 TeraFLOPS of computing power costs $1 : 2026

1 TeraWatt of worldwide photovoltaic power capacity : 2027

1 Petabyte of storage costs $1 : 2028

1 Terabyte of RAM costs $1 : 2031

An artificial intelligence can pass the Turing Test : 2040

Biological :

Complete personal genome sequencing costs $1000 : 2020

Cancer is no longer one of the top 5 causes of death : 2025

Complete personal genome sequencing costs $10 : 2030

Human life expectancy achieves Actuarial Escape Velocity for wealthy individuals : 50% chance by 2040

Economic :

Average US household net worth crosses $2 million in nominal dollars : 2024

90% of humans living in nations with a UN Human Development Index greater than 0.800 (the 2008 definition of a 'developed country', approximately that of the US in 1960) : 2025

10,000 billionaires worldwide (nominal dollars) : 2030

World GDP per Capita crosses $50,000 in 2008 dollars : 2045
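
As an illustration of how cost-per-performance milestones like the TeraFLOPS entry above can be extrapolated, here is a minimal sketch in Python.  The 18-month halving time is the figure used throughout this blog, but the ~$5,000-per-TeraFLOPS baseline for 2008 is a rough assumption for illustration only, not a measured price.

```python
import math

def year_cost_reaches(target_cost, baseline_cost, baseline_year, halving_months=18):
    """Year in which an exponentially deflating cost falls to target_cost,
    assuming a constant halving time (18 months by default)."""
    halvings_needed = math.log2(baseline_cost / target_cost)
    return baseline_year + halvings_needed * halving_months / 12

# Purely illustrative baseline: assume ~$5,000 bought 1 TeraFLOPS of compute in 2008.
print(round(year_cost_reaches(target_cost=1, baseline_cost=5_000, baseline_year=2008)))   # ~2026
```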

_________________________________________________________________

Each of these milestones, while not causing a Singularity by themselves, increase the probability of a true Technological Singularity, with the event horizon pulled in closer to that date.  Or, the path taken to each of these milestones may give rise to new questions and metrics altogether.  We must watch for each of these events, and update our predictions for the 'when' and 'what' of the Singularity accordingly. 

Related : The Top 10 Transhumanist Technologies

September 11, 2008 in Accelerating Change, Computing, Technology, The Singularity

Tags: Acceleration, Future, Futurist, Moore's Law, Prosperity, Singularity

Can Buildings be 'Printed'?

I have discussed the possibility of 3-D printing of solid objects before, in this article where company #5, Desktop Factory, is detailed.  However, the Desktop Factory product can only produce objects that have a maximum size of 5 X 5 X 5 inches, and it can only use one type of material. 

On the Next Big Future blog, the author quite frequently profiles a future product capable of 'printing' entire buildings.  This technology, known as 'Contour Crafting', can supposedly construct buildings at greater than 10 times the speed, yet at just one-fifth the cost of traditional construction processes.  It is claimed that the first commercial machines will be available in 2008 itself. 

Despite my general optimism, this particular machine does not pass my 'too good to be true' test, at least before 2020.  A machine that could construct homes and commercial buildings at such a speed and cost would cause an unprecedented economic disruption across the world.  There would be a steep but brief depression, as existing real estate loses 90% or more of its value, followed by a huge boom as home ownership becomes affordable to several times as many people as today.  I don't think that we are on the brink of such a revolution.

For me to be convinced, I would have to see :

1) Articles on this device in mainstream publications like The Economist, BusinessWeek, MIT Technology Review, or Popular Mechanics.

2) The ability to at least print simple constructs like concrete perimeter walls or sidewalks at a rate and cost several times superior to current methods.  Only then can more complex structures be on the horizon. 

I will revisit this technology if either of these two conditions is solidly met. 

(crossposted on TechSector). 

September 02, 2008 in Accelerating Change, Economics, Technology, The Singularity

Tags: Construction, Contour Crafting, Printing, Real Estate

Surfaces : The Next Killer App in Computing

Computing, once virtually synonymous with technological progress, has not grabbed headlines in recent memory.  We have not had a 'killer app' in computing in the last few years.  Maybe you can count Wi-Fi access for laptops in 2002-03 as the most recent one, but if that is not a sufficiently important innovation, we then have to go all the way back to the graphical World Wide Web browser in 1995.  Before that, the killer app was Microsoft Office for Windows in 1990.  Clearly, such shifts appear to occur at intervals of 5-8 years. 

I can, without hesitation, nominate surface computing as the next great generational augmentation of the computing experience.  This is because surface computing entirely transforms human-computer interaction in a manner that is more suitable for the human body than the mouse/keyboard model is.  In accordance with the Impact of Computing, rapid drops in the costs of both high-definition displays and tactile sensors are set to bring this experience to consumers by the end of this decade.


BusinessWeek has a slideshow featuring several different products for surface computing. Over ten major electronics companies have surface computing products available. The most visible is the Microsoft Surface, which sells for about $10,000, but will probably drop to $3000 or less within 3-4 years, enabling household adoption.
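
A minimal sketch in Python of the price-decline arithmetic implied by that expectation; the $10,000 and $3,000 figures and the 3-4 year window are the ones stated above, and nothing else is assumed.

```python
# Annual price decline implied by a drop from $10,000 to $3,000 over 3-4 years.
start_price, target_price = 10_000, 3_000
for years in (3, 4):
    annual_decline = 1 - (target_price / start_price) ** (1 / years)
    print(f"Over {years} years: ~{annual_decline:.0%} price decline per year")   # ~33% and ~26%
```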

As for early applications of surface computing, a fertile imagination can yield many prospects.  For example, a restaurant table may feature a surface that displays the menu, enabling patrons to order simply by touching the picture of the item they choose.  The information is sent to the kitchen, and this saves time and reduces the number of waiters needed by the restaurant (as waiters would only be needed to deliver the completed orders).  Applications for classroom and video game settings also readily present themselves. 

Watch for demonstrations of various surface computers at your local electronics store, and keep an eye on the price drops.  After seeing a demonstration, do share the price point at which you might purchase one.  The next generation of computing beckons. 

Related :

The Impact of Computing

(Crossposted on TechSector)

July 11, 2008 in Accelerating Change, Computing, Technology, The Singularity

The Solar Revolution is Near

I have long been optimistic about Solar Energy (whether photovoltaic or thermal) becoming our largest energy source within a few decades.  Earlier articles on the subject include :

A Future Timeline for Energy

Solar Energy Cost Curve

Several recent events and developments have led me to reinforce this view.  First of all, consider this article from Scientific American, detailing a solar timeline to 2050.  The article is not even Singularity-aware, yet it details many steps that will enable solar energy to expand by orders of magnitude above today's level.  Secondly, two of the most uniquely brilliant people alive today, Ray Kurzweil and Elon Musk (with whom I recently chatted), have both provided compelling cases for why solar will be our largest energy source by 2030.  Kurzweil and Musk operate in significantly different spheres, yet have arrived at the same prediction.

However, the third point is the one that I find to be the most compelling. There are a number of publicly traded companies selling solar energy products, many of which had IPOs in just the last three years.  Some of these companies, and their market capitalizations, are :

(table of publicly traded solar companies and their market capitalizations)

Now consider that the companies on this list alone amount to about $50 Billion in capitalization.  There are, additionally, many smaller companies not included on this list, companies like Applied Materials (AMAT) and Cypress Semiconductor (CY) for which solar products comprise only a portion of their business, and large private companies like NanoSolar (which I have heavily profiled here) and SolFocus that may have valuations in the billions.  Thus, the market cap of the 'solar sector' is already between $60B and $100B, depending on what you include within the total.  This immense valuation has accumulated at a pace that has taken many casual observers by surprise.  A 2-year chart of some of the stocks listed above tells the story. 

(2-year price chart of several of the solar stocks listed above, versus the S&P 500)

While First Solar (FSLR) has been the brightest star, all the others have trounced the S&P 500 over this period to a degree that would put even Google or Apple to shame.  Clearly, a dramatic ramp in solar energy is about to make mainstream headlines very soon, even if the present valuations are too high. 

Is this a dot-com-like bubble?  Yes, in the near term, it is.  However, after a sharp correction, long-term growth will resume for the companies that emerge as leaders.  I won't recommend a specific stock among this cluster just yet, as there is a wave of private companies with new technologies that could render any of these incumbents obsolete.  Specific company profiles will follow soon, but in the meantime, for more detail on the long-term trends in favor of solar, refer to these additional articles of mine :

Why I Want Oil to Hit $120 per Barrel

Terrorism, Oil, Globalization, and the Impact of Computing

(crossposted on TechSector)

May 20, 2008 in Energy, Nanotechnology, Stock Market, Technology, The Singularity

Ten Biotechnology Breakthroughs Soon to be Available

Popular Mechanics has assembled one of those captivating lists of new technologies that will improve our lives, this time on healthcare technologies (via Instapundit).  Just a few years ago, these would have appeared to be works of science fiction.  Go to the article to read about the ten technologies shown below.

(the ten biotechnology breakthroughs featured in the Popular Mechanics article)

Most of these will be available to average consumers within the next 7-10 years, and will extend lifespans while dramatically lowering healthcare costs (mostly through enhanced capabilities of early detection and prevention, as well as shorter recovery times for patients).  This is consistent with my expectation that bionanotechnology is quietly moving along established trendlines despite escaping the notice of most people.  These technologies will also move us closer to Actuarial Escape Velocity, where the rate of lifespan increase exceeds that of real time.

Another area that these technologies affect is the globalization of healthcare.  We have previously noted the success of 'medical tourism' among US and European patients seeking massive discounts on expensive procedures.  These technologies, given their potential to lower costs and recovery times, are even more suitable for medical offshoring than their predecessors, and thus could further enhance the competitive position of the countries that are quicker to adopt them.  If the US is at the forefront of using the 'bloodstream bot' to unclog arteries, the US once again becomes more attractive than getting a traditional procedure done in India or Thailand.  But if the lower-cost destinations also adopt these technologies faster than the heavily regulated US, then even more revenue migrates overseas and the US healthcare sector will suffer further deserved blows, and be under even greater pressure to conform to market forces.  As technology once again acts as the great leveler, another spark of hope for reforming the dysfunctional US healthcare sector has emerged.

These technologies are near enough to availability that you may even consider showing this article to your doctor, or writing a letter to your HMO.  Plant the seed in their minds...

Related :

Actuarial Escape Velocity

How Far Can 'Medical Tourism' Go?

Milli, Micro, Nano, Pico

May 09, 2008 in Accelerating Change, Biotechnology, Computing, Nanotechnology, Technology, The Singularity

'Outsourcing' - What a Non-Crisis That Turned Out to Be, v2.0

I wrote version 1.0 of this article on November 26, 2006.  16 months later, it is time for version 2.0 to provide more historical context on how misplaced the hype over some fashionable issues eventually turns out to be, and why what once appeared to be a harbinger of doom is now all but forgotten. 

In the 2001-03 economic downturn, the aftermath of the technology bust resulted in hundreds of thousands of software engineers and assorted high-tech workers losing their jobs.  A jittery public was vulnerable to influence from isolationist politicians, with the likes of Lou Dobbs and Pat Buchanan fanning the flames in the media.  As a result, the simple business practice of moving certain components of daily operations to a lower-cost location, if only to keep up with competitors already doing the same, became a dirty word - 'outsourcing'. 

The cover story of Wired Magazine's February 2004 issue was on the outsourcing of software jobs to India.  Within the article, a core theme was the supposedly tremendous hardships that white-collar Americans were about to experience due to a 'giant sucking sound' of jobs going to India.  In the same month, then-Presidential candidate John Kerry screamed about the practices of "Benedict Arnold CEOs" who outsource American jobs to India, hoping to gain the support of isolationists and the economically ignorant.  Elsewhere, very uncharitable things were said by leftists about brown-skinned Indians, due to their rapid adoption of capitalism and globalization at the expense of the leftist plantation where Indians were required to symbolize Gandhian non-violence, zen spirituality, yoga, curries, and the glorification of poverty. 

Let's call February 2004 the time when the bubble of 'outsourcing' fears reached its fevered peak.  Now, what happens whenever a bubble of psychology reaches a peak?

A quick glance at a few economic indicators from the Bureau of Labor Statistics in the 4 years since then reveals the following :

(table of Bureau of Labor Statistics indicators for the 4 years since February 2004)

So 7.5 million jobs were created in this short time, the unemployment rate is lower than it has been for 33 of the last 37 years, and wages have risen while real GDP has grown at a 3.2% clip.  There is thus no evidence of job losses, wage erosion, or underemployment over this period.  Take that, Lou Dobbs, Pat Buchanan, John Kerry, Dennis Kucinich, and other assorted demagogues, who have no ability whatsoever to truly grasp the trends that shape our world. 

India, in the meantime, has benefited greatly as well.  GDP growth has averaged 8% a year over this same period, pulling 100 million people out of poverty.  Political ties with the US have strengthened in a manner unlike any previous episode in the last 50 years.  The faster these ties broaden, the better the world will become.  A prosperous India is a critical component to the US achieving favorable outcomes in both the War on Terror and with China, as seen from where India resides on this particular map.  Anti-Americans become apoplectic when they learn that India is the most pro-US country in the world. 

What does the future of outsourcing hold?  Is there still a risk of jobs vanishing from the US faster than they can be created, as pessimists still maintain?  Unlikely, even though Internet backbone bandwidth has quintupled in the last 4 years, and many more people in India have PCs and broadband connections today than in early 2004.  This is because aggregate demand growth has saturated even India's vast labor pool.  Salaries in India have been rising at over 12% a year due to labor shortages, eroding the country's cost advantage.  The Wired article from 2004 stated that the average salary of an Indian programmer was $8,000 a year; today, it is closer to $15,000 a year in US dollars.  India itself has started outsourcing to Bangladesh and Eastern Europe, which are much smaller labor pools and will also saturate quickly.  Indeed, the trends favor more job creation in both America and India.
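
A quick back-of-the-envelope check of those salary figures, using only the numbers quoted above, shows how fast the cost advantage is eroding.  The sketch below (in Python, purely illustrative; the inputs are the figures cited in the text, not independently verified data) computes the growth rate implied by the $8,000-to-$15,000 move and compares it to compounding the quoted 12% annual wage inflation:

    # Back-of-the-envelope check of the Indian programmer salary figures quoted
    # above: roughly $8,000/year in early 2004 vs. roughly $15,000/year in early 2008.
    # Illustrative only; the inputs are the figures cited in the text.

    def implied_cagr(start, end, years):
        """Compound annual growth rate implied by two endpoint values."""
        return (end / start) ** (1 / years) - 1

    start_salary = 8_000    # USD/year, per the 2004 Wired figure cited above
    end_salary = 15_000     # USD/year, the approximate 2008 figure cited above
    years = 4

    rate = implied_cagr(start_salary, end_salary, years)
    print(f"Implied growth in dollar terms: {rate:.1%} per year")    # ~17% per year

    # Compounding only the quoted 12%/year local wage inflation over the same span:
    print(f"$8,000 at 12%/yr for 4 years: ${start_salary * 1.12 ** years:,.0f}")    # ~$12,600

The gap between the roughly 17% implied by the endpoints and the quoted 12% wage inflation plausibly reflects the rupee's appreciation against the dollar over the same period, since the $15,000 figure is expressed in US dollars.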

Now that we are in another recession, phony issues like this one emerge again.  Democrats are still speaking in protectionist tones, bashing NAFTA and opposing the free-trade agreement with Colombia.  But other than a few pessimists, socialists, and racists, protectionism is unlikely to gain much traction, as Americans have seen that the benefits of trade have outweighed the costs by a handsome margin.  BusinessWeek also ran an article on 4/24/07, six months after version 1.0 of this article was posted, on how misrepresented the outsourcing issue is.

Thus, the bubble of fashionable pessimism has moved to the next topic, which happens to be the decline of the dollar.  This, too, will turn out to be a passing concern that the economy adjusts to after a brief period of pain.  Among other things, a competitively priced dollar has led to Europe outsourcing jobs to the US, and is also working towards reducing US dependence on oil.  A debunking of the 'weak dollar' fad will be posted on another day. 

Related :

Terrorism, Oil, Globalization, and the Impact of Computing

Why I Want Oil to Hit $120 per Barrel

Outsourced Education - the Latest Flattener

March 18, 2008 in Economics, India, Political Debate, Politics, Technology


© The Futurist