~生まれた町で夢見てきた...~
"In the city of my birth, I had a dream..."
*urk* Not feeling so good today... 
15th-Nov-2005 11:20 am
Winter
I was feeling distinctly nauseous last night, and it definitely hasn't abated today. At least I got caught up with my work yesterday, and provided I'm feeling well enough to write my sci-fi film essay tonight, I'll actually be ahead of the game come Wednesday. If not, I'm gonna be behind again. *sighs* Only two more applications to go, too. Coming down to the wire, aren't we?

Anyway, here's why I'm kind of ahead of the game. I got this novel finished last night, even though class isn't until Thursday. I definitely want to read more of Philip K. Dick's work, now.

Dick, Philip K. Blade Runner (Do Androids Dream of Electric Sheep?). New York: Del Rey, 1982. (First Printing: 1968)
Summary: Rick Deckard is a bounty hunter working for the Los Angeles police, ordered to bring in six rogue Nexus-6 androids newly escaped from Mars. In spite of himself, he manages to do it all in one day and in the process develop a level of sympathy for the androids that he hunts.
Comments: The novel plays at length with the question, "What is reality?" So, you get bombarded by variations of that question throughout the novel. Are androids human? Are they at least equal to humans if not exactly the same? Are the bounty hunters actually androids? Is that electric animal actually electric? Is Mercer a god or just a second-rate actor on a low-budget set? Ultimately, you aren't left with any clear-cut answers, and what is "reality" for the androids might be totally different from, but just as valid as, the "reality" of humans. We get this impression twice late in the novel--Rachel's murder of Rick's black goat and the androids' reaction to the revelation about Mercer.

Though I definitely liked the novel more than the film in the first half, I'm not sure if I liked the resolution of the novel more. The novel definitely trivializes the androids' existence in the end; as far as Rick is concerned, Roy Baty is so stupid that he can't tell humans apart. In fact, I'm convinced that Dick is not as ready to embrace the possibility of full humanity for the androids as Michael Cunningham was in Specimen Days; their existence in the end is only understood by the level to which the humans around them are able to empathize with them. Is this speaking to the Civil Rights Movement? Possibly. But if it is, it's suggesting that androids are an inferior lifeform but that humans ought to sympathize with them anyway. Replace the word "androids" with "blacks," and you get something almost unspeakably condescending.

Regardless, there's plenty to enjoy here, from mood organs to Mercerism, though even this "futuristic" technology is more about the present than it is about the future. When's the last time you used the verb "to dial" in normal conversation? Dials on telephones have become obsolete; we don't dial anymore. And yet, you hear the word all the time on Stargate SG-1 and see it in Blade Runner.
I guess the future is always simultaneously better and worse than we hope it will be.
Notes: mass market paperback, movie tie-in
Rating: 7/10 - Entertaining and definitely worth a read, particularly to see how our expectations of the future have changed.
Comments 
15th-Nov-2005 06:36 pm (UTC)
Heh, I couldn't even remember the ending of the book....

Hmm, I wonder if at some point in the future, robots will be arguing over whether or not we have rights ^^;;
16th-Nov-2005 01:25 am (UTC)
The ending of the book was, I thought, less powerful than the ending of the film.

Speaking of electronic animals, have you ever heard of those Dogz and Catz programs? Those things manipulate people into believing that they're real--and in the end, that's all that counts.
16th-Nov-2005 02:05 am (UTC)
Huh, no, never heard of those programs.. I wonder if/when the argument will really come up.. At what point will an artificial intelligence demand its civil rights ^^;
16th-Nov-2005 02:17 am (UTC)
I had a trial edition of the Dogz program. It would howl when you ignored it. My mother used to make sympathetic admonitions at me whenever it howled.

Let's make a bet about android civil rights: Not in our lifetime. :P
16th-Nov-2005 02:36 am (UTC)
It would howl when you ignored it.

I have real cats that torment me enough, I don't need a computer program for that! Even more horrific, how about a whining baby simulator ^^;;

Let's make a bet about android civil rights: Not in our lifetime.

Yeah, probably not.. But then of course, they'll just get pissed off and kill all humans ^^;; It would be cool to see the first android to be considered an equal though ^_^

16th-Nov-2005 02:42 am (UTC)
*laughs* The dog was really cute, though. Did things like wave its paw and do somersaults.

But then of course, they'll just get pissed off and kill all humans ^^;;

The android in Specimen Days has an interesting failsafe to prevent that. He's programmed for self-preservation AND an automatic shutdown mechanism if he starts endangering a human life.
16th-Nov-2005 02:55 am (UTC)
He's programmed for self-preservation AND an automatic shutdown mechanism if he starts endangering a human life.

But that can be overcome, especially if androids are creating new androids through many generations and software evolves independently... Well, in theory ^_^
16th-Nov-2005 03:00 am (UTC)
Why do you assume that androids would be able to create themselves? Even if you're as smart as a person, well, not everyone can put a computer together. This guy couldn't--and, in fact, the company that built him went bankrupt. *snickers* Which actually seems like a likely scenario.
16th-Nov-2005 11:26 am (UTC)
Well, almost certainly robots will be building other robots.. They already build our cars ^_^ And if you have a sophisticated enough artificial intelligence, it could be charged with developing improvements for software and circuitry, etc..
You just watch, years from now when you're in that nursing home, your nurse will be an android ^_~
16th-Nov-2005 01:28 pm (UTC)
I'll believe self-repairing androids when your PC starts replicating and you perform open-heart surgery on yourself. The problem with technology is that it never works as well as we hope or fear it's going to.
16th-Nov-2005 01:59 pm (UTC)
Well, not self-repairing really, but one repairing the other almost certainly.. To some extent we already have that. If you take a new car in because of a problem, it will be plugged into a computer which will do a diagnosis and tell the lowly mechanic what to do.. Considering they're currently developing robotic systems that can perform delicate surgery (with human help), it's just a short jump to robots independently fixing other robots ^_^
But I would guess humanoid type robots would be a minority.. Why make one general purpose type when each could be specifically adapted to a given purpose? More intelligent systems would likely be more like HAL, like a big brain with control over many bodies, and interacting with other big brains..
16th-Nov-2005 02:02 pm (UTC)
But who is going to repair the repair robots? See, it's an endless circle. We can't assume infallibility in machines because we have yet to ever even come close to achieving it. The more complicated it gets, the more things go wrong, and the harder it is to diagnose it. Nobody would ever create a truly independent system of machines because we CANNOT.
16th-Nov-2005 05:32 pm (UTC)
But we don't have anyone looking after all of us either, and we get along.. People repair other people, and other people can repair them if needed ^_^
17th-Nov-2005 11:50 am (UTC)
You just proved my point! If we don't take care of ourselves very well, how can we ever create perfect machines? No matter how complicated and "advanced" things get, they still break down--and some would argue that the more advanced it gets, the less reliable it becomes. Because we're fallible, everything we create simply replicates our weaknesses. Machines will not surpass us completely until we figure out how to surpass ourselves.
17th-Nov-2005 12:43 pm (UTC)
and some would argue that the more advanced it gets, the less reliable it becomes.

Well, cars have gotten incredibly complex, and increasingly reliable and durable ^_^

Machines already surpass us in many aspects: speed, accuracy, memory, endurance, etc.. That's why fewer and fewer people are needed in factories.. Really the limiting factor now is intelligence, and there are already systems that can actively learn on their own. I don't think it's such a stretch that we could create something that exceeds us. It doesn't have to be perfect, it just has to excel a little more than us ^_^
17th-Nov-2005 01:07 pm (UTC)
Yet cars still break down all the time. There's always something wrong with ours. Moreover, we don't yet understand how the human mind works. How can we create a machine that successfully replicates it, then? Even if we do create something that SEEMS sentient, how long do you really think it will last? How long do complex machines like computers and cars last before something goes very wrong? One year? Five years? Imagine if your brain went horribly wrong after just five years! Could you even read? People thought we'd have sentient robots by now...but look at the robots we do actually have! Hardly lives up to the fantasy.

You're like one of those people who worries about people using genetic engineering to create a master race--when the fact of the matter is, the more we discover, the more we realize how untenable complete technological control is.

No, when we become dependent on machines, it's our own choice, not machines willfully trying to dominate us.
17th-Nov-2005 01:54 pm (UTC)
Yet cars still break down all the time. There's always something wrong with ours.

*grins*

That's because you have two Chryslers ^_^

Moreover, we don't yet understand how the human mind works. How can we create a machine that successfully replicates it, then?

But we don't have to understand how, we just have to replicate key functions.. Creating a biological brain would be vastly more difficult, I think; an electronic brain with a digital mind is a different life form.. If you can create one that can really think and analyze, it may be able to direct the evolution of its own species.. And, eh, what does it matter how long one individual lasts? Our lifespan is pitiful, and we spend our whole lives trying to accumulate existing knowledge and pass it along. Digital minds could essentially be continuous, moving from body to body and transferring all accumulated knowledge in a flash...

Actually, I think the fear that intelligent machines will try to dominate us says more about people than about the potential machines.. We're afraid our creations will act too much like we do....

Imagine if your brain went horribly wrong after just five years!

Some would argue it did ^_^;;
29th-Nov-2005 11:25 pm (UTC)
Review archived.