There is modest but growing evidence that the rate of technological change has moderated in this decade. Whether this is a temporary trough that merely precedes a return to the trendline, or whether the trendline itself was greatly overestimated, will not be decisively known for some years. In this article, I will examine some datapoints to determine whether we are at, or behind, where we would expect to be in 2008.
There is overwhelming evidence that many seemingly unrelated technologies are progressing at an accelerating rate. However, the exact magnitude of the acceleration - the second derivative - is difficult to measure with precision. Furthermore, there are periods where advancement can run significantly above or below any such trendline.
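To put the 'second derivative' remark in symbols (a stylized model, purely for illustration): if some measure of capability grows exponentially,

\[ C(t) = C_0 e^{kt}, \qquad C'(t) = k\,C(t), \qquad C''(t) = k^2\,C(t), \]

then the acceleration is always positive, but measuring it amounts to estimating $k^2$, and small errors in $k$ compound over any extrapolation. This is why the acceleration itself can be evident while its exact magnitude remains elusive.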
This brings us to the chart below from Ray Kurzweil (from Wikipedia):
This chart appears prominently in many of Kurzweil's writings, and brilliantly conveys how each major consumer technology reached the mainstream (defined as a 25% US household penetration rate) in successively shorter times. The horizontal axis represents the year in which the technology was invented.
This chart was produced some years ago, and therein lies the problem. If we were to update the chart to the present day, which technology would be the next addition after 'The Web'?
Many technologies can claim the next position on the chart. iPods and other portable MP3 players, various Web 2.0 applications like social networking, and flat-panel TVs all reached the 25% level of mainstream adoption in under 6 years, in line with an extrapolation of the chart through 2008. However, it is debatable whether any of these is a 'revolutionary' technology like those on the chart, rather than merely an increment over an incumbent predecessor. The iPod merely improved upon the capacity and flexibility of the Walkman, the plasma TV merely consumed less space than the tube TV, and so on. The technologies on the chart are all infrastructures of some sort, and it is clear that after 'The Web', we are challenged to find a suitable candidate for the next entry.
Thus, either we are on the brink of some overdue technology emerging to reach 25% penetration of US households in 6 years or less, or the rapid diffusion of the Internet truly was a historical anomaly, and for the period from 2001 to 2008 we were merely correcting back to a trendline of much slower diffusion (where it takes 10-15 years for a technology to reach 25% penetration in the US). One of the two has to be true, at least for an affluent society like the US.
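For the quantitatively inclined, here is a toy extrapolation of the chart. This is a minimal sketch: the invention years and adoption intervals below are my approximate readings of Kurzweil's chart, not authoritative figures, and a straight-line fit is admittedly crude.

# Approximate (invention year, years to 25% US household penetration),
# read loosely off the Kurzweil chart; illustrative only.
data = [
    ("Telephone",    1876, 35),
    ("Radio",        1897, 31),
    ("Television",   1926, 26),
    ("PC",           1975, 16),
    ("Mobile phone", 1983, 13),
    ("The Web",      1991,  7),
]

# Ordinary least-squares line: adoption time as a function of invention year.
xs = [year for _, year, _ in data]
ys = [span for _, _, span in data]
n = len(data)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Extrapolate: how quickly "should" a circa-2002 invention reach 25%?
print(f"Predicted adoption time for a 2002 invention: "
      f"{slope * 2002 + intercept:.0f} years")

The fit predicts under a decade for a technology invented around 2002, which is exactly the window in which we are struggling to name a candidate.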
This brings us to the third and final dimension of possibility. This being the decade of globalization, with globalization itself an expected natural progression of technological change, perhaps a US-centric chart was inappropriate to begin with. Landline telephones and television sets still do not have 25% penetration in countries like India, but mobile phones there jumped from zero to 10% penetration in under 7 years. The oft-cited 'leapfrogging' of technologies that developing nations can benefit from is a crucial piece of technological diffusion, and a worldwide chart would thus show a much smaller interval between 'telephones' and 'mobile phones' than the US-based chart above. Perhaps '10% Worldwide Household Penetration' is a more suitable measure than '25% US Household Penetration', which might then show that there is no lull in worldwide technological adoption at all.
I may try to put together this new worldwide chart. The horizontal axis would not change, but the placement of datapoints along the vertical axis would. Perhaps Kurzweil merely has to break out of US-centricity in order to strengthen his case and rebut most of his critics.
The future will disclose the results to us soon enough.
Hmmmm.
I'm struck by a factoid I came upon recently, that the U.S. spent 40 years building its interstate highway system, while China will complete one in just 17 years. Was China faster because they are "better" or because technology allows one to do such things much faster and easier than ever before?
My point is that the items you are tracking are successively many orders of magnitude more powerful. Tracking adoption time masks this fact. In my previous example it would be comparing road construction to hypersonic jets. Adoption rate is less important than the increase in speed.
After the web - I dunno - communications technology is already operating at near light speed - there is no way to go appreciably faster. Perhaps the next step is faxing of 3-D shapes? The technology is already out there.
Also - How about including the telegraph at the left end of the chart? Maybe add in the fax machine? I'm wondering if home and business adoption rates would be more informative.
Posted by: Geoman | February 20, 2008 at 10:40 AM
GK,
I do believe that technology is in a temporary lull, because at the current time all of modern-day technology is almost as evolved as it can physically get (that we are currently aware of, anyway).
This is not to say that new discoveries will not be made, or that currently held theories and 'scientific laws' will not be overturned.
For example, let's say you managed to get an F-22 Raptor back into the 1950s. They would not be able to figure out the micro-circuitry, let alone the material it was made out of. Back then, it would have been an anomaly to people. Yet now that we can quantify and classify the technology today, we can improve upon it. Ditto for the technology 'road blocks' of today.
The only thing you said that I found a hard time grasping was the paragraph about 'globalization'. This is because nowhere in history has it ever been done. Also, every time in history countries have come together for common purposes, they are soon parted again when the next generation grows up without the same ideals in their heads. And I really hate to burst a globalist's bubble, but every time an expansion (or blending) is made, chaos (in some degree) follows before order is restored, and the blending finally settles into a mold.
But such molds have never lasted in history, and never will. The bigger and more complex they grow, the more volatile they become. I mean, do you really believe Somali warlords or Kim Jong-il will ever make peace with Muslims, or even the average American? And even if by some act of God this were to happen, how long do you honestly believe it would last?
Now, to be fair, let's remove governments from the equation. Let's blend Joe Schmoe from Lubbock, Texas with a Chinese citizen. Chances of a lasting relationship: slim to none. Also, due to the differing mindsets and thought processes of the respective peoples, a common ideal (which would need common ground) would be nearly impossible to find, let alone on a global scale.
Maybe you mean globalization in which every country does its respective tasks to accomplish an even bigger goal? Inspiring, but not realistic. Such a puzzle would be hard to piece together, and due to perception differences, impossible to complete.
Food for thought.
Posted by: brokerdavelhr | February 21, 2008 at 12:35 AM
dave,
I do believe that technology is in a temporary lull, because at the current time all of modern-day technology is almost as evolved as it can physically get
It always seems that way. But Moore's Law continues with great regularity. Why do people keep upgrading their PCs, never able to say that this PC is the last one they will need?
but every time an expansion (or blending) is made, chaos (in some degree) follows before order is restored, and the blending finally settles into a mold.
Schumpeter's creative destruction. The mold does not break, as the technologies that bring increasing integration never go away.
Maybe you mean globalization in which every country does its respective tasks to accomplish an even bigger goal?
Innovation at super low costs done in India and China does migrate upstream to the US. India's $2500 Tata Nano car may, even after US safety additions, sell for just $6000 in the US. China has already done this with electronics and Wal-Mart consumer goods.
Posted by: GK | February 21, 2008 at 06:03 PM
GK,
People upgrade their PCs to keep up with the latest software and network applications. This is pushed by R&D, and I support that 100%. However, nothing new has entered the PC world for quite some time. Take a quad-core processor, for instance. In truth, it does not have 4 integrated cores, but rather 4 cores on top of each other, each one set to handle different machine functions. The same applies to RAM, hard drives, etc.
What has happened is that, due to technology improvements, smaller channels are available (mostly due to more refined circuitry materials) to pass electrical current through. However, the smaller the channel gets, the less electricity can be run through it.
The thing with nanotechnology is that you can only send small amounts of energy through the circuits. Yes, this saves on energy, but the performance of modern-day high-end machines will not be equaled by nanotechnology (that we yet know of, anyhow). So until this barrier is broken, we will remain in a bit of a lull.
You said 'due to Moore's Law'. Understand that when Moore formulated this law in 1965, they could not make a circuit anywhere close to the nano size. Therefore they never had the problem of electrical flow. Now, for Moore's Law to hold true in the next 10 years, scientists will have to find a way to make machines able to use nano circuitry, or it will not work, plain and simple.
You also said that the mold does not break. Technology-wise you may be right, but society-wise, it can do a great deal of damage.
As for the Nano car: people have been driving 'Smart' cars in Germany for over 4 years now. This is not a new technology either. Matter of fact, I see huge problems with a car that is barely visible in a 'blind spot' and that has a top speed of 65 miles an hour ever coming to a major road in the US, where the average speed in rush hour is at least 70 (in many major metro areas, many motorists drive 80-plus to beat the morning rush).
This just goes to prove two things.
1- There is really no new technology available to the average consumer today, just improvements on existing ones.
2- Just because the technology is there to do something does not necessarily make it a good idea to implement it. Sure, it may seem like the greatest thing since sliced bread, but in the end, more problems are encountered when said technology hits the mainstream.
Ideally, technology R&D has always inspired countries to come together and do something constructive. Realistically, though, once the technology begins to peak, the differences between said countries come back into play, and each one will fully develop it with its own spin.
Dave
Posted by: brokerdavelhr | February 21, 2008 at 10:31 PM
In addition to the above, even Moore stated that an exponential technological advance can only occur for so long before it reaches a peak. Then, until a new technology is discovered, a 'lull' as you put it will occur.
The trick with nanotechnology and its dealings with the Singularity is, so far, impossible.
The human body works by electrical impulses sent throughout the body, triggering muscle contractions. The 'circuitry' in the human body is nano-sized, but note that only small amounts of electricity pass through these channels. Any more, and shock occurs.
So for a machine to use nanotechnology to mimic human functions to their fullest extent would limit the power output to the rest of the machine body. I interject at this point that this only applies to what we know today; maybe in the future, when nanotechnology is fully perfected, a marriage to machine circuitry might indeed be possible. Really, all it would take would be a device that converts the signal from nano size to that of a full micro-circuit, and steps down the power coming into the nano circuitry.
A very interesting field, nanotechnology is.
Posted by: brokerdavelhr | February 22, 2008 at 09:22 PM
If you claim that the TV started in 1925 with Baird, then you can't claim that the web started in 1992. Before that you had Gopher, Minitel, and the walled gardens of AOL and CompuServe. They were much closer to the web than Baird's TV was to the TV that reached 25%. The same argument can be made for the mobile phone and the PC. There is also the problem that the printing press, pulp paper, train, steamboat, and automobile are missing. To be honest, I find the chart heavily manipulated to give the wanted answer.

About the Nano: it is nothing new. It is the Indian version of the Fiat 500, 2CV, and Mini.

brokerdavelhr, a quad core has 4 identical cores. They are not made for specific jobs. Also, at the moment chips are 2D objects. Moore's law can hold by simply doubling the number of layers in a chip. The number of layers is at the moment so low that this alone will allow a decade or two of Moore's law.
Posted by: charly | February 24, 2008 at 06:36 PM
Charly,
These cores are identical, yes. But even Intel has stated that they are 4 separate cores, not integrated as 1. I stand corrected as to their handling separate functions.
I also agree that this will allow Moore's Law to continue. But I cannot see it yielding 10-20 more years' worth. The reason is that circuitry is quickly reaching the nano level, at which point it will be a completely new technology. One reason is that entire systems will have to be redesigned to allow for the lower charge (among many others).
Right now, all these technologies are just being refined. Simply adding layers is a phase of that refinement that, so far, has shown little common use. Unless you run a server farm, for example, a quad-core processor is a little unnecessary. Even for a modern-day gamer, a quad core is of little use.
Right now, many savvy computer users recognize this, which is why quad-core processors are still being outsold by AMD's FX series. This proves that technology is only as good as its usability. Not to mention the fact that in benchmark tests, the quad core fell short of AMD's FX series, because each core is not as advanced as those of a simpler dual-core design.
So yes, Moore's Law is holding, but the ceiling is quickly being reached. How do you figure that we have 10-20 years left in this technological realm simply by multi-layering?
Posted by: brokerdavelhr | February 26, 2008 at 09:12 PM
At the moment the number of layers on a chip is low (of the order of ten). If you double the number of layers, then you double the number of transistors per mm^2, which is what Moore's law is about. To make the thickness of the chip of the same magnitude you need tens of thousands of layers. A factor of a thousand is about ten doublings, and if you add the process shrinks that are in the pipeline, then you get at least 20 more years of Moore.
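Back-of-the-envelope version, if you want to check it (assuming ~10 layers today and one doubling every two years, the classic Moore cadence):

import math

layers_now = 10        # order of magnitude of current layer counts (assumed)
layers_target = 10000  # "tens of thousands" of layers

doublings = math.log2(layers_target / layers_now)  # ~10 doublings
years = 2 * doublings                              # at one doubling per ~2 years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years of Moore")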
Posted by: charly | February 27, 2008 at 02:30 PM
Charly,
You mean 20 years OR Moore? :- ) Sorry, bad joke I know.
Do the math. First, take into consideration that the smaller the pipes, the less energy can be put through. The less energy that can be put through, the less output can be expected.
To make thousands of layers, you would have to make them so incredibly small that you would be delving into nanotechnology, at which point all modern concepts of circuitry become meaningless due to the flow of energy and particles.
And if you do not make them smaller?
Then you get a huge chip that needs an equally huge board to run it.
And to what end? I gotta tell you, software can't keep up with modern-day hardware as it is. To need a thousand layers (which would provide for millions of simultaneous processes on the same machine) is kind of pointless. So once again I will reiterate: just because you can do something does not necessarily make it a good idea.
Once companies figured out multi-threading (the improved version of Intel's original dual core), the concept of layer stacking became obsolete. And yet, because it is cheaper and theoretically easier than thousands of layers running multi-threading, you get just plain and simple multi-layering.
Intel claims that their quad cores beat out any home machine. I say this is BS. I have seen machines run with and without them, and the only time I have seen any kind of superior performance from the quad core was in large database apps. Once again, the home user will never need such a thing, and any tech familiar with electrical theory and the details of said technology will attest to this.
If you go to Intel's site, you will see how they want to create processors with tens and hundreds of cores. I kind of laugh when I see this, because such things will only be good for database work, as AutoCAD and other intensive apps would not be able to run on this 'energy efficient platform'.
First off, nothing comes without a price. Even if they did invent such a device, it would be mainly sales hype, as it would never be efficient for the consumer.
Second, I am tired of this 'energy efficient' crap. As I have stated over and over, I do this line of work for a living. And if there is one thing I have learned, it is that 'energy efficient' = low reliability. Typically, energy-efficient machines do not maintain enough power to keep up with the needed output of the machine.
So explain to me again why layering is such a good thing?
Also please explain to me why it will take another 20 years? Still lost on me.
Posted by: brokerdavelhr | February 27, 2008 at 11:07 PM
The general consensus, both from Intel and from experts like Ray Kurzweil, is that Moore's Law will expire around 2018.

However, that just means silicon will no longer be used. Instead, nanotubes, molecular computing, etc. would take up the baton, just as vacuum tubes passed the baton on to silicon ICs in the first place.

Moore's Law is not the first paradigm of exponential computing, but the fifth (after electromechanical, relay, vacuum-tube, and discrete-transistor computing).
Posted by: GK | February 28, 2008 at 11:18 AM
"The less energy that can be put through, the less output can be expected."
This is simply not true. If you make the path smaller, then you need less energy to drive it. There is a limit to how much data energy can hold, but the number is so large that 40 years of Moore won't touch it.
"To make thousands + layers, you would have to make it so incredilbly small that you would be delving into Nano technology"
You still don't get it. At the moment a chip has only a few (<10) layers. The transistors occupy an area the size of a penny, but the thickness of the layers containing transistors is only about 100 nm. One can increase that thickness and get more transistors per mm^2.
"(which would provide for millions of multiple processes over the same machine),"
A GPU is a chip with a large number of parallel processors. I wouldn't call that useless. And AutoCAD is exactly the type of program that would benefit from more processors.
"'energy efficient'= low reliability."
Chips that use little power are in fact more reliable than energy hogs.
Posted by: charly | February 28, 2008 at 01:19 PM
‘If you make the path smaller, then you need less energy to drive it. There is a limit to how much data energy can hold, but the number is so large that 40 years of Moore won't touch it’
Go back to school. Circuitry (and I mean ALL OF IT) operates in an on/off fashion. I will not insult your intelligence, but this is done through binary. Now, to put a recognizable signal through a circuit, it has to be a certain size for the circuit to process it. Even though the circuit itself may be able to recognize it, the rest of the system cannot yet do so. As for that limit, you are so full of it. Why do you think network media (even circuitry) need a certain amount of medium to transfer this data? It does not sound like you are familiar with particle/energy flow. No energy or particle will flow the same path twice without deviation. There are of course exceptions to this, but none that are applicable to the situation in question.
‘You still don't get it. At the moment a chip has only a few (<10) layers. The transistors occupy an area the size of a penny, but the thickness of the layers containing transistors is only about 100 nm. One can increase that thickness and get more transistors per mm^2’
I don’t get it? Pot calling the kettle black, that statement is. I think we are on the same sheet of music, just playing different parts. You seem to be more concerned with ‘you can build an efficient processor this small’. I lean more in the direction of ‘even if you could, how would you make it work in conjunction with the technology normal to your average consumer?’. You seem to believe that you can make things smaller and smaller with no effect on output. I am simply saying that with more output modifiers, you will need the machine to be able to read them. Furthermore, the machine will have to develop relatively consistently with the core.
‘A GPU is a chip with a large number of parallel processors. I wouldn't call that useless. And AutoCAD is exactly the type of program that would benefit from more processors’
Really? Do tell. Graphics cards these days use onboard GPUs precisely to ease the burden on the CPU. However, this too is reaching its peak. Also, please notice that the more intensive the card, the more power is required to run it. And if you argue this fact, then you truly do not know much about electronics. AutoCAD uses the graphics card's GPU to do most of the work. However, someone who uses AutoCAD would not need a quad-core CPU unless the user is also running some kind of automatic update, listening to music, playing a video game (not solitaire, but something more intensive), and doing data compiling at the same time. See my point? Even though the GPU can use it (and even then it rarely reaches its full potential), that does not mean it is needed.
‘Chips that use little power are in fact more reliable than energy hogs’
Huh? When was the last time you worked on machines? I have been doing it for over 10 years, and must say that this statement is ignorant if nothing else. Go back to the basics. If the flow of energy is not proportionate between the receiving and transporting devices, then brownouts will occur. Unfortunately, most of the 'Energy Star' products I have worked with function very poorly at the recommended power level. However, once more juice is introduced into the system, things miraculously start working again. Better yet, prove your statement. Show me one thing that backs it up. And I am not talking about one of these so-called 'third party surveys', but real hard evidence.
Posted by: brokerdavelhr | February 28, 2008 at 08:13 PM
"Circuitry (and I mean ALL OF IT) operates in an off/on type function."
Not true. You have trinary chips, and the latest flash from Intel has 4 levels.
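As an aside, the arithmetic on those levels is simple (nothing Intel-specific here, just log base 2): a cell with n distinguishable levels stores log2(n) bits.

import math

for levels in (2, 3, 4):
    print(f"{levels} levels -> {math.log2(levels):.2f} bits per cell")
# 2 levels is classic binary (1 bit), 3 is ternary (~1.58 bits),
# and a 4-level flash cell stores 2 bits.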
Chips die because the doping starts migrating, and that happens more with higher temperatures and higher voltages. So efficient chips are more reliable.
AutoCAD is the type of program that would benefit from more cores if it hadn't been programmed by people who targeted the x86 market. Now that x86 is going multi-core, you will see that AutoCAD will use those extra cores.
What do you think massively parallel is on this page: http://en.wikipedia.org/wiki/GPGPU
Posted by: charly | February 29, 2008 at 04:44 PM
‘Not true. You have trinary chips, and the latest flash from Intel has 4 levels’
But the circuitry itself still works by sending surges of power over the line. When energy is being sent over the line, it is considered on, or transmitting. When no energy is being sent through the circuit, it is off. No matter how it is 'flashed', it still operates on an on/off reading. You really need to study the theory behind something before commenting on it.
‘Chips die because the doping starts migrating, and that happens more with higher temperatures and higher voltages. So efficient chips are more reliable.’
Higher temperatures are caused by insufficient cooling devices (heat sinks, fans, etc.), which require a little extra power to run. Higher voltages cause more heat, true, but think more in terms of watts when dealing with machines and you will be better off. I am not saying efficient chips are bad, but always aim for at least 50 watts more on your power supply than the recommended wattage. This not only prevents brownouts, but any further power issues as well. Besides, the system BIOS on any modern machine will automatically prevent surges. Please study your information more before commenting.
‘AutoCAD is the type of program that would benefit from more cores if it hadn't been programmed by people who targeted the x86 market. Now that x86 is going multi-core, you will see that AutoCAD will use those extra cores’
To cite AutoCAD as a good example for the consumer market does not really make any sense. It is a graphics-intensive engineering program used for highly advanced processes that eats machine power. However, you are wrong about the cores too. What, do you work for Intel or something?! Listen, AutoCAD eats two things on the graphics card: GPU power and onboard memory. When I say GPU power, I mean it needs to be able to run many advanced equations at the same time. This is NOT solved by simply adding new cores. Once again, please study before commenting.
‘What do you think massively parallel is on this page: http://en.wikipedia.org/wiki/GPGPU’
Your citation is filled with ‘citation needed’ and ‘vague’ tags. You are a wiki learner, which makes you twice the fool. First off, wiki is good for providing references at the bottom of a topic; very rarely is the main article itself on point. For kicks and giggles, though, I read your cited article (which is way out of date, and by no means accurate), and never even saw the phrase ‘massively parallel’. Did you mean http://en.wikipedia.org/wiki/Massively_parallel ? If so, you are referencing material over 16 years old that is commonly known today as a cluster. IT DOES NOT HAVE ANYTHING TO DO WITH THIS TOPIC! Cut the crap and please come up with something real.
Posted by: brokerdavelhr | February 29, 2008 at 08:46 PM
Trinary chip circuitry has the levels +, 0 and -.
Making a fan that is as fail-proof as a chip is really hard, especially when you need to cool a chip made on a ten-year-old process instead of on a new, and much cooler, process. Also, voltage doesn't only create more heat but will also increase the migration rate of the doping.
IIRC it wasn't me who started using AutoCAD as an example.
Let's make you a press-release learner.
Quoting a nvidia pressrelease: "The Quadro FX 3700 graphics board offers high throughput for interactive visualization, and with 112 parallel processors, 512 MB of onboard graphics memory, and a 256-bit memory interface, the card easily manages the large models and complex, real-time shaders that dominate the CAD and digital content creation (DCC) markets."
http://www.nvidia.com/object/io_1199782737524.html
Those processors are optimized for graphics, but in principle they can do everything that an Athlon can do (only slower).
I don't think you are correct in your view on Beowulf clusters. They are the same as a 100-core die, except their latency is much higher.
If you look at how computers develop, you will see that functionality is first added by an external machine. It then moves to a daughterboard, then onto the motherboard, and finally onto the die. A Beowulf cluster, seen in that light, is just an external machine. A two-socket motherboard is the daughterboard generation.
The first dual cores were different dies assembled in one package, so that would be the motherboard generation, and two cores on one die is the integrated-on-die generation.
Posted by: charly | March 01, 2008 at 07:18 AM
Trinary chip circuitry has the levels +, 0 and -.
You mean ternary, which simply uses a variation in the current running over the circuit to add a priority to a transmitted value. Funny thing is, no one uses this, as no user machine out there would support it. You can ramble about this technology all you want, but it does not change the fact that it is not prevalent in today's technological world, nor will it ever be. Why? Because the gained advantage is far outweighed by its disadvantages. In actuality, ternary is just a buzzword and has no real meaning in today's tech world.
Making a fan that is as fail-proof as a chip is really hard, especially when you need to cool a chip made on a ten-year-old process instead of on a new, and much cooler, process. Also, voltage doesn't only create more heat but will also increase the migration rate of the doping.
Come again? You really have no idea of which you speak. A properly installed fan (or liquid cooler, for that matter) will last just as long as the CPU (unless it is a ball-bearing fan, in which case it will have to be replaced every 3 years). The age of the CPU is irrelevant to how well it is cooled. Honestly, at this point I can't even take your inane comments seriously. By the way, doping is the process of adding impurities to a chip to alter its properties (usually resistance or conductivity). Am I speaking with a 13-year-old?
IIRC it wasn't me who started using AutoCAD as an example.
LOL can’t argue this one :- )
Let's make you a press-release learner.
Quoting a nvidia pressrelease: "The Quadro FX 3700 graphics board offers high throughput for interactive visualization, and with 112 parallel processors, 512 MB of onboard graphics memory, and a 256-bit memory interface, the card easily manages the large models and complex, real-time shaders that dominate the CAD and digital content creation (DCC) markets."
I have been a press-release learner for a very long time (but am sensible enough to read the other supporting documents as well), especially in the world of MS and Cisco. Your above comment does nothing for your argument. Sure, it is an impressive card, used mainly for engineering apps. However, the circuitry in it remains the same in its technological background as any other today. The only thing that sets it apart is its complexity. Comparing this card to a legacy GeForce 4X AGP card is like comparing a regular puzzle to a 3D one. One is more complex, but in the end, they are both just puzzles.
http://www.nvidia.com/object/io_1199782737524.html - moron.
Those processors are optimized for graphics, but in principle they can do everything that an Athlon can do (only slower).
You have still not yet shown anything new, or anything that backs up your original statement: 'Moore's law can hold by simply doubling the number of layers in a chip. The number of layers is at the moment so low that this alone will allow a decade or two of Moore's law.'
I don't think you are correct in your view on Beowulf clusters. They are the same as a 100-core die, except their latency is much higher.

If you look at how computers develop, you will see that functionality is first added by an external machine. It then moves to a daughterboard, then onto the motherboard, and finally onto the die. A Beowulf cluster, seen in that light, is just an external machine. A two-socket motherboard is the daughterboard generation.

The first dual cores were different dies assembled in one package, so that would be the motherboard generation, and two cores on one die is the integrated-on-die generation.
That is the problem: you think, but do not educate yourself before speaking. 1- I never said anything about Beowulf clusters; THERE ARE MANY MORE OUT THERE!!! Besides, latency is caused by a lack of properly synched software linking the controllers! You said 'functionality is first added by an external machine'??!! No, external machines and devices (the real word you were looking for) add fault tolerance, expandability, and redundancy… NOT FUNCTIONALITY! Your comparison of dual-processor motherboards (a 15-plus-year-old technology) to dual core (more modern) is even more ignorant, and does nothing to further your point. Same concept, different means of accomplishing it.
Posted by: brokerdavelhr | March 02, 2008 at 07:05 AM
You are right, it is ternary computing. The cache on some CPU chips has a different (read: lower) voltage than the circuitry on the rest of the die, so making a chip that has ternary parts is a real possibility. Cache would IMHO be a likely target, as it is simple to make ternary and would benefit from the lower energy use.
Making a fan that is cheap and as dependable as a chip is difficult. The cheapest fans are probably ball-bearing fans, and those need replacement after 3 years according to you (and you are right), and their cost is of the same order as a $3 chip.
It could be that an FX 3700 is just a souped-up GeForce 4, but it is definitely not a souped-up TNT. I can't remember which generation changed from being a chip that accelerates OpenGL and DirectX to a chip containing many parallel processors that accelerate OpenGL.
The theoretical best latency across a meter (when the information moves at the speed of light) is already of the order of the Core Duo's intra-core latency, so it is definitely not a software problem. And my model is correct if you look at hard disk controllers, modems, graphics cards, etc.
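The numbers, for anyone who wants to check (taking c in vacuum; signals in copper or fiber are slower still):

c = 299792458.0          # speed of light, m/s
ns_per_meter = 1e9 / c   # one-way time to cross one meter, in nanoseconds
print(f"Light crosses one meter in {ns_per_meter:.2f} ns")
# On-die latencies are of the same order (a few ns), so a cluster spread
# over meters can never match on-die latency, however good the software.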
Posted by: charly | March 02, 2008 at 04:35 PM
Do things like software and technology-mediated social innovations count? It seems that one day I have never heard of something like Facebook, the next I am on it, and the day after that I am reading about how it is being used for societal and political purposes.
While a specific gadget that is noticeably unique has not hit the market for a while, the materials-science research, the engineering principles needed to take advantage of it, and the industrial procedures needed for mass production seem to be moving forward as fast as ever.
I suspect that the major innovations that form the data points on the graph of adoption speed are just the visible part of the trend. It probably does not matter how close together the innovations are if the technology and economics behind them continue to develop.
Posted by: Saul Wall | March 02, 2008 at 07:26 PM
You aren't paying attention, dear!
In 1903 the airplane was invented, and by 1915 hundreds of thousands were flying in WW1. Now what was "invented" in 2003 and became widespread thereafter? Yeps, the place I exclusively inhabit: Second Life. And yes, I see SL, or a similar medium, achieving 25% market penetration of the US population in the next 6-10 years. But in that area there are several other technologies with similar impact potential; augmented reality is a big contender.
Posted by: Khannea Suntzu | March 05, 2008 at 11:07 AM
GK,
Sorry, but I'm in a rush, so I didn't take time to read whether anybody had made this point yet. You question whether these technologies ought to reach 25% household penetration in the U.S. or worldwide in order to qualify. I don't think their reaching 25% in the U.S. makes the chart U.S.-centric in the sense of being biased; instead, I think those technologies still qualify, and future ones should be judged on reaching 25% household penetration in their home markets under similar circumstances. International economics and politics complicate things too much to disqualify a technology for not reaching 25% of households globally within 6 years, especially due to that leapfrogging you mentioned. This worked in the U.S. because all those technologies were primarily and initially available to consumers in countries with the existing technological infrastructure to fit them into daily life.
Posted by: Roger | March 05, 2008 at 05:35 PM
Suntzu,
SL is not a new technology either. It is a software program that functions the same way as any other online MUD. It is in no way, shape, or form a new technology.
New technologies deal with topics such as bio-engineering, nanotechnology, matter-antimatter applications, light and energy development, etc. What you are referring to is a game that people play to escape their otherwise boring lives. And no, there are no new technologies in that sector either.
Posted by: brokerdavelhr | March 05, 2008 at 08:06 PM
Suntzu- PS,
Do not tell someone else to pay attention when you yourself are not even close to the topic in discussion :-)
Posted by: brokerdavelhr | March 05, 2008 at 08:08 PM
Charly makes an excellent point: the chart itself is highly misleading due to cherry-picking of both examples and starting points. Radio, for instance, reached 25% US household penetration in 1925, just six years after the first commercial station went on the air. What's probably true is that we've become much quicker at bringing technologies to market, but it would be mistaken to assume that this has to do with some natural process of technological discovery. It could result entirely from financial and credit markets that are vastly different from those of a century ago.
Posted by: pianoguy | March 10, 2008 at 05:04 PM
Piano,
I fail to see how newer technologies are being brought to market faster these days. Also, our banking system has been around since medieval times, at least.
Posted by: brokerdavelhr | March 11, 2008 at 08:31 PM
It was the iPhone.
Posted by: Max Howell | December 26, 2019 at 01:54 PM