A Theory on When Computer Processing Will No Longer Need to Evolve

In both of the fields I am considering getting a degree in, Business and Computer Science, there is an endgame at which the concept of “upgrades” will no longer apply.  Business people have mostly reached that plateau already: with only a few areas of the business field truly needing the latest and greatest in technology, most people will find that new advancements only make their PowerPoint slides a smidge prettier and their Excel calculations a fraction of a second faster.  Until the next big evolution in the business field comes along, most of the heavy work falls on the end user, the business person.  Computer people will be hitting a plateau soon as well, when you start to think about it: the biggest complaint currently is how little graphical improvement the latest next-gen hardware from Sony and Microsoft (and Nintendo, who went down a different path yet again) shows in comparison to the last time they released hardware.  While hardcore geeks and programming buffs can point to the technical improvements made, a lot of people haven’t seen the kind of graphical leap they’ve become accustomed to seeing in the past.
We’re going to get to a point, within the next couple of generations of gaming consoles, where the improvements will no longer be something the eye can see.  Yes, the systems themselves will get smaller, with less need for an optical drive and with processors so much more powerful that, at some point, they will make our current processors look like clumsy giants.  However, at some point the sales pitch for a new system is going to change from “look at how graphically enhanced we are” to “look at what our system can do and handle at the same time, all while fitting in this shiny, small box!”
What I’m saying should make sense to even the most casual observer: we’re only a number of years and hardware generations away from needing a genuine technological advancement to justify an upgrade.  The Oculus Rift and 4K TV are going to buy us some time, but even those will hit that plateau within a few generations.
At some point, you have to look at the big picture, read the signs not for what is currently there but for where you want to be, and if that destination is far out, figure out what you will need to build to get there.  The ideas may come from established science fiction (as a lot of innovations do), or they might come from the imagination; whatever the case, you need to figure out what the end goal for your field will be, as well as the stopgaps in between, to prepare for and work towards the future.
I think science fiction has provided two possibilities for where that end goal might be, similar enough in function, which may well be the endgame for entertainment-based computing: life simulators, either in the form of wearable/attachable “devices” (such as the plugs in “The Matrix” films or the head devices worn by people in “Surrogates”) or my ever-favorite, the Holodeck from “Star Trek: The Next Generation.”
While a million forms of technology, plus more research and development of current technology, are needed to develop such devices, my thought is that a lot of it could arguably be done today, albeit with a caveat: the size and heat of the machines involved mean that building just the computer would take something the size of a skyscraper if we did it today.
The thoughts I have behind this are simple enough.  First, and probably easiest, you would need enough information to create and translate objects into a form human beings can understand.  Each approach is different: whereas “The Matrix” could get away with only focusing on how the mind interprets an object, a Holodeck would have to complete all of the physical details, such as the grain of wood, the weight, and how rough or smooth, soft or hard that object is.  Programming tricks combined with increased speed would allow for advanced shortcuts similar to what we have in games today (where a game engine only creates what it needs; see the sketch below), and the internet could offload some of the work (particularly if we’re still on Earth or another world where an established network is set up), but either method would take a lot of processing power per object.  A fully realized wooden chair, for example, with weight, definition, and full usability, could take a whole core or even a full machine right now to account for everything a user might do with it: making sure it smashes right, say, if the user picks it up and throws it.  (This is one area where “Surrogates” gets off easy: rather than having to recreate the object, it only needs to translate the sensations the robot receives into something the human understands.)
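To make the “only create what it needs” trick concrete, here is a minimal Python sketch (every class, property, and number is hypothetical, invented purely for illustration) of an object whose expensive details are only computed the first time a user actually interacts with it:

```python
# A minimal sketch (all names hypothetical) of lazy, on-demand detail:
# a simulated chair whose expensive physical properties are only built
# the first time a user actually touches or throws it.
from functools import cached_property


class SimulatedChair:
    """A stand-in for one fully usable object in a simulated world."""

    def __init__(self, material="oak", mass_kg=7.5):
        self.material = material
        self.mass_kg = mass_kg

    @cached_property
    def surface_detail(self):
        # Imagine an expensive call here: generating wood grain, texture,
        # and tactile roughness maps. It runs once, only when first touched.
        print("Generating grain/roughness detail (expensive)...")
        return {"grain": "straight", "roughness": 0.42}

    @cached_property
    def fracture_model(self):
        # Likewise, how the chair smashes is only computed if it is
        # actually thrown or struck, never for chairs that just sit there.
        print("Precomputing fracture pattern (expensive)...")
        return {"fragments": 12, "break_threshold_joules": 150.0}


chair = SimulatedChair()
# Nothing expensive has run yet; the chair costs almost nothing until used.
_ = chair.surface_detail   # first touch: detail gets built now
_ = chair.fracture_model   # first throw: break behavior gets built now
```

The point of the design is that a room full of untouched chairs stays nearly free; the per-object cost described above only lands when someone actually picks one up.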
Likewise, the physics necessary to recreate an environment would eat up a large amount of computing power as well.  If that chair is thrown, something needs to calculate its air drag and speed, and if it hits something, how it breaks.  (If it hits someone else, how they perceive that hit must also come into the bargain.)  Databases would be needed for every object in the simulated environment, whether the system is just interpreting the sensations experienced by a machine acting as your “vehicle,” placing you inside a virtual world, or physically recreating that world around you.  AI would be needed to simulate any of the “life” in that world, be it plant, animal, or intelligent life, and all of these systems must tie together, along with all of the translators and machines in between that create the physical parts of that world.
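As a rough illustration of the per-object physics work, here is a back-of-envelope Python sketch (all parameter values are invented guesses, not measured data) that integrates a thrown chair under gravity and quadratic air drag; a real simulator would run something like this for every moving object, every frame, plus collision and fracture on impact:

```python
# A minimal sketch, assuming quadratic air drag (F = 0.5 * rho * Cd * A * v^2),
# of what "calculating the air drag and speed" of a thrown chair might look
# like. All parameter values below are illustrative guesses.
import math

RHO = 1.225   # air density, kg/m^3 (sea level)
CD = 1.1      # drag coefficient: rough guess for a tumbling chair
AREA = 0.4    # frontal area in m^2, also a guess
MASS = 7.5    # chair mass in kg
G = 9.81      # gravity, m/s^2
DT = 0.01     # simulation time step in seconds

# Initial throw: 10 m/s forward, 4 m/s upward, released 1 m off the floor.
vx, vy = 10.0, 4.0
x, y = 0.0, 1.0

while y > 0.0:
    speed = math.hypot(vx, vy)
    drag = 0.5 * RHO * CD * AREA * speed * speed  # drag force magnitude
    # Drag opposes the velocity direction; gravity pulls straight down.
    ax = -(drag / MASS) * (vx / speed)
    ay = -G - (drag / MASS) * (vy / speed)
    vx += ax * DT
    vy += ay * DT
    x += vx * DT
    y += vy * DT

print(f"Chair lands about {x:.2f} m away at {math.hypot(vx, vy):.2f} m/s")
```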
I’m not here to explain how it all works – I’m here to explain why I think this is the direction we are going, and more importantly, what we can start doing now to move this forward.
Think back about four decades, if you’re old enough.  Most people born in the last quarter century have no idea how far we’ve advanced, and even people my age only have a surface view of our progress in technology.  It’s hard to believe, in some ways, that less than a century ago the closest things we had to the modern-day computer were the abacus and the mechanical calculator; that at one time it took a space the size of one of the sports halls and stadiums of our day to house a single computer; or that it was right around the British Invasion of the ’60s that the first computer games were made.  Not many people thought we’d go from something like “Pong” and “Asteroids” to the virtual worlds we see today, or foresaw the advances we’d make in robotics, biotechnology, or new fields such as virtual reality.  Very few people predicted the internet as it is today.
I could point to headsets like the Oculus Rift, but the games most likely to be converted for such headsets are more than enough to prove my point.  The government has seen this as well, and has even created its own game for it.  Graphics and sound can only get so realistic, and anyone can see the potential in translating these worlds not just into an entertainment system but an educational one: training soldiers and surgeons in safe environments, or giving kids field experience without putting themselves in harm’s way.  How better to train the next cook, field operative, or oil rigger than in an environment that allows the breakdowns and dangers that would otherwise cost a company potential millions of dollars?
We already have some of that technology.  The AIs developed for games would obviously be more than capable of simulating other life forms: maybe not something as complex as a dog or cat, but complex enough to recreate a mouse or a small bird (see the sketch below).  Solid-state drives and RAID arrays may not be commonplace in the home, but there’s not much more speed to be gained from those devices.  Graphics cards have become more sophisticated not only in how much they can draw, but in how fast and easily they can interpret data; there’s a good reason besides money that graphics are getting integrated into the CPU.  Theoretically, any one of the pieces could be created on a modern-day computer, in some cases at the exact speeds and memory capacities needed.
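For a sense of how modest that bar can be, here is a minimal Python sketch (the class and its behaviors are entirely hypothetical) of a simulated mouse as a tiny finite-state machine, the same basic structure game AIs have used for decades:

```python
# A minimal sketch of game-style creature AI: a simulated mouse as a
# tiny finite-state machine. Everything here is hypothetical and
# illustrative; real game AI layers far more on top.
import random

random.seed(3)  # fixed seed so the sample run is repeatable


class SimulatedMouse:
    def __init__(self):
        self.state = "wander"
        self.energy = 5

    def tick(self, predator_nearby, food_nearby):
        """Advance one simulation step and return the chosen behavior."""
        # Fear beats hunger: fleeing overrides everything else.
        if predator_nearby:
            self.state = "flee"
        elif self.energy <= 2 and food_nearby:
            self.state = "eat"
        else:
            self.state = "wander"

        # Eating restores energy; everything else burns it.
        if self.state == "eat":
            self.energy += 2
        else:
            self.energy = max(0, self.energy - 1)
        return self.state


mouse = SimulatedMouse()
for step in range(6):
    predator = random.random() < 0.2
    food = random.random() < 0.5
    print(step, mouse.tick(predator, food), "energy:", mouse.energy)
```

Real game AI adds pathfinding, animation, and more nuanced decision-making on top, but the skeleton, a handful of states plus rules for switching between them, is genuinely this small.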
The other big argument for why this is the direction we’re going comes from looking back at history, particularly at the relationship between science fiction and modern technology.  While we don’t yet have aliens, time machines, robots, or flying cars beyond the scope of a movie, we have tablets, cell phones, and computers that keep decreasing in size while increasing in usefulness; TVs and theatres have all but replaced books for retelling modern tales, and digital files have replaced the records of yesterday (which themselves aren’t very old, either).  At some point all of this is going to converge, to where we no longer merely read or watch something, limited by programming constraints or accessibility: we fully experience it, whether it’s a moonlit concerto from history or a futuristic action game.
The question I pose is not whether anyone agrees with these thoughts, but where we currently stand, at least in the scope of computation.  At what point do you see computers powerful enough to do all of this work at a price and size reasonable enough to be accessible to even the poorest of consumers?  How soon do we go from the skyscraper I believe we would really need to simulate one world today, to maybe a corner office powering a hundred worlds?  Where do we need to focus our technological advances on the computer side to achieve this end goal?
Welcome to the starting line of the end of the evolution of the computer.