Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you’d like to hear it I can sing it for you.
Which makes today the 20th anniversary of HAL’s first booting up as a production system. Except, of course, that it didn’t actually happen anything like that.
It’s interesting the ways that living in the future is not what people thought it would be.
I’m not talking about the absence of flying cars or jet packs. Let’s be honest: nobody really wants flying cars. Most people can’t drive in two dimensions. Why on earth would we want them driving in three?
I’m thinking more along the lines of the things so many science fiction authors in the 1960s were positive we’d have by now, but don’t, as well as the things we do have that almost all of them missed!
Since the trigger for this post was the realization that today is HAL’s 20th birthday, this seems like a good place to start.
Artificial Sentience

In the late 1960s, most authors seemed pretty certain that artificial sentience (which I deliberately separate from the computer science concept of artificial intelligence) was right around the corner. Any day now, some clever, hapless, or mad scientist was going to connect the right, or wrong, things together and some computer was going to “wake up”, and dire consequences were going to follow.
This idea was born of the notion that we were also just around the corner from actually understanding how humans think, and what sentience is in the first place. In practice, while we do understand a great deal more about the mechanisms of the brain than we did back then, we still don’t have a clue what makes us sentient, what sentience really is in any tangible, measurable sense, or whether sentience is in fact entirely an illusion, as one biology professor I had back in college was absolutely convinced it is.
A Moonbase

It was almost an article of faith in 1960s science fiction that, by the late 90s (if we hadn’t already blown ourselves up), we would have a permanent base, if not in fact an entire small city, on the Moon. 2001, with its matter-of-fact approach to its glimpse of the future, doesn’t even bother telling us how or why a Moonbase came into existence. It’s just…there. Like New York, or Minneapolis, or perhaps more like Great Lakes Naval or the Pentagon, since there’s a decidedly “government” feel about it. It’s a fact of life, nothing special.
Yet here we are, 13 years after the scene of Dr. Heywood Floyd getting his eardrums pierced by the Monolith, and not only do we not have a Moonbase, we no longer have a meaningful space program! No orbital hotel, no regularly scheduled flights to orbit and beyond, not even a space shuttle program. This is an area where technology simply has not advanced fast enough to make it economically viable.
Videophones

Well, this one’s coming close to true, but not the way Arthur C. Clarke and his generation of writers really thought it would — with videophone kiosks replacing pay phones out in public, and stationary videophone consoles at home. Of course, in 1968, no one could really imagine that AT&T would be broken up, that pay phones would become obsolete, or that even wired home phones would fade away.
On the other hand, the much-maligned Space: 1999 had everyone carrying around multifunction wireless devices that acted as videophones, which is much closer in concept (if not form factor) to what we’re seeing in the real future, with smartphones and front-facing cameras…
Pocket Computers

This is one they got wrong in a different way. In the 1960s, everyone figured that “computer” would always mean “mainframe”, with constellations of refrigerator-sized housings for CPUs, memory, disk drives, and so on. Those authors who assumed that computing would be ubiquitous still thought of it in terms of massive central computers pulling in information via terminals and remote feeds.
The notion that we’d be carrying around in our pockets devices that make the most powerful computers of the 1960s look like adding machines completely eluded almost all of them.
Only Star Trek in the late 60s and, again, Space: 1999 in the mid-70s came close, having characters carrying around portable electronic devices that were as yet impossible in the real world. Several writers for Star Trek hinted that both communicators and tricorders were tied in to the ship in what today we would recognize as a network, rather than just using the ship as a communications satellite. Of course, an iPhone can do infinitely more than we ever saw them doing with a communicator, but still, the basic idea was there.
It wasn’t until the 1980s, though, with Star Trek: The Next Generation, that anyone thought to take that a step further: if my tricorder is tied into the ship’s network, then in theory it can do anything a full-sized standing console can do. One should be able to fly the ship from it, or from a PADD.
But in the 1960s, nobody seems to have really seen the full implications of the microchip revolution coming, largely because few people even knew anyone was working on that kind of miniaturization. The Intel 4004, the first commercial microprocessor, didn’t hit the market until 1971!
Ubiquitous Flat-Screen Displays
This is one where I think science fiction did not so much predict as inspire.
Walk into any remotely successful sports bar in the United States.
Count the number of screens.
Tell me it doesn’t look like something right out of Star Trek!
Even Classic Trek had displays everywhere — it was just that the technology at the time didn’t allow them to look all that realistic. Many of them had static graphics or just blinking lights, but we all knew they were meant to be real displays.
We now live in an era where dynamic visual display devices — televisions, monitors, projectors, what have you — are cheap and plentiful. We carry around high-definition displays in our pockets! We live in a world of increasingly rich visual information…and visual distraction!
The Cold War
This one particularly fascinates me. So few writers in the 60s, 70s, or even 80s could bring themselves to believe that the Cold War might one day just…stop. Not with a bang, not even really with a whimper. Just…one day, it wouldn’t be there any more, and we’d spend more than a generation trying to figure out what to do with ourselves without a Big Bad Enemy to push against.
The result is an odd kind of anachronism in a lot of future-fiction. The most glaring example now (since we started with Clarke) is 2010, which assumes, particularly in its cinematic form, that the Cold War would extend right up to the eponymous year, ultimately coming to an end only because an external agency gave us something else to think about.
What’s your favourite prediction of the future that has, or has not, come true?