The idea of convergence, of one device replacing several, has long been a popular theme in forecasting high-tech gadgetry. It's also something that doesn't happen as often as predicted.
Some of the reasons relate to design and technology. It's hard to make a multitool as elegant for each individual function as specialist devices are. A form factor that's optimized around, say, being a phone demands serious technical compromises when it comes to a totally different function, such as taking a picture. And rapidly evolving technology means some functions in a device are inevitably behind the technology curve.
Increases in computing power and storage density alleviate some problems over time, as perhaps computational photography will in the case of camera phones. And buying, feeding, and caring for fewer devices is usually preferable to dealing with more--to the point that compromises are often acceptable, especially for occasional or casual use.
But device categories still seem to collapse together less often or more slowly than predicted.
Economics is one reason. The same forces leading to faster gadgets with more storage have also--in concert with streamlined supply chains and offshore manufacturing--made them ever cheaper. In short, consumer electronics are not especially expensive by any historical standard, so lopping 20 percent off the bill by awkwardly mixing multiple functions together just isn't that big a win.
It strikes me, though, that the bigger reason why certain classes of devices remain so distinct is that we tend to interact with them in fundamentally different ways. And that's all too often overlooked in an industry that frequently views things through the engineering lens of what's possible rather than the user experience lens of what's natural.
For example, I don't know how much money has been squandered pretending that a TV is a big computer monitor that sits in front of a sofa. But it must be billions. WebTV, Intel's Viiv brand, and--who knows?--Google TV are just a few of the bones littering this landscape. There are a lot of complexities here, not least of which are content licensing and protection. But perhaps the biggest issue is that we don't use TVs the same way that we use computers.
There's even industry lingo for the difference. TVs are a lean-back, or 10-foot, experience. Computers are a lean-in, or 3-foot, experience. One is largely passive; the other is intensely interactive. This is a difference that I doubt would be bridged by a better remote control. Yes, viewers do increasingly want to select their shows rather than just accept what's coming over a broadcast stream, but that's a different statement from saying they want to tweet and comment and otherwise be part of the content in real time.
Given that video content increasingly comes from the Web in some form, I do think it makes sense to find easier ways to "throw" a video from a laptop or other device to the TV hanging on the wall. But that's a different model from interacting on the TV itself.
Nor is it a coincidence that tablets suddenly went mainstream right when they came to market with user interfaces designed specifically for phones and tablets rather than for PCs.
After all, tablets are not new. A former IT analyst colleague was toting his Fujitsu tablet around conferences at least five years ago. Certainly, the size, weight, and cost of various components had to reach a certain point for tablets to be broadly viable. And, at least arguably, it took a company like Apple to make a market for a new product category that pushed the envelope of what was possible.
But I'd argue that the bigger change was that the tablet broke from an interaction model rooted in a PC operating system--and therefore keyboard, mouse, and stylus centric--to one that's multitouch-centric. The tablet as it has evolved isn't a PC without a keyboard; it's something fundamentally different: better at some things, not as good at others.