Firstly, I found it generally unimaginative. Hanson constrains himself too narrowly: his description of em society does not feel "radically different" from present-day society, and the society he envisions does not make full use of the technology available to it.
Some examples to illustrate this observation:
- Ems are depicted as absurdly human-like, essentially intellectual "rubber-forehead aliens". He writes: "even em minds are likely to age with subjective experience..." (p. 128). A claim like this ought to rest on some foundational fact about how an AI stores memories, but there is none -- no mathematical law forbids an AI from being retrained, or requires it to behave like a human brain in this respect. Similar comments apply to "em suicide" (pp. 127-139) and to the considerations regarding em reproduction (p. 285): there is no reason why an em's drive or aggression must be reduced by suppressing its libido -- an em does not have hormones!
- Aspects of em habitation and organization, such as "cities" and "offices", are simply copied from human society. He writes, "It’s reasonable to guess that such habits will continue with ems." (p. 104) But it is not: there is no reason for ems to behave in the same ways that humans do.
- Humans and ems are presented as a strict binary: humans are biological and have self-ownership; ems are technological and do not. But I don't see why this ought to be so -- I would very much like to have the desires, preferences, and emotions of a human, but the abilities, efficiency, immortality, and unlimited VR leisure available to an em. There would still be unfeeling, specialized AIs, of course -- much as there would still be computers that aren't AIs, devices without CPUs, etc. -- but eventually almost all humans would opt for a massively extensible, upgradable robot body over a static, mortal one.
- Many interesting aspects of the civilisation are simply not explored in sufficient depth, e.g. transportation and cybercrime.
Indeed, such choices may be acceptable in science fiction, but it is important to be less "conservative" when attempting a non-fictional, encyclopedic description of a society.
Perhaps a more specific objection I have is to the entire premise of "brain scans" as the future of AI. This seems completely at odds with the direction in which current AI research is headed. To use a somewhat clichéd analogy: we did not need to study how birds fly to invent airplanes. There is no reason to believe that the most efficient architecture for a "software" brain would be the same as the one that biological, hardware brains have evolved.
The general answer to how a software brain should work is that it should be a function approximator, such as the "neural networks" (trainable computational graphs) that are currently popular.
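To make the "trainable computational graph" idea concrete, here is a minimal sketch (my own illustration, not anything from the book): a one-hidden-layer network trained by plain gradient descent to approximate sin(x). All names and hyperparameters are arbitrary choices for the example.

```python
import numpy as np

# Training data: approximate f(x) = sin(x) on [-pi, pi].
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# A tiny computational graph: linear -> tanh -> linear.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)       # hidden layer
    pred = h @ W2 + b2             # linear output
    err = pred - y                 # proportional to dL/dpred for squared error
    # Backpropagate the error through the graph.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    # Gradient-descent updates.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
# mse typically ends up far below the variance of sin(x) (~0.5),
# i.e. the graph has learned to approximate the function.
```

Nothing about this construction mimics the physical organization of a biological brain; the network is simply whatever parameterized function the training procedure shapes it into.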
This point is important, because it addresses Bryan Caplan's critique regarding carrot vs. stick as incentives for the ems. The question of carrot and stick assumes some "natural" state of affairs that a human being experiences absent intervention by the employer: the "carrot" is an intervention that improves on this state, while the "stick" is one that worsens it.
But a neural network has no such natural state of affairs. There is no difference between training a neural network to minimize a loss function and training it to maximize a reward function: maximizing a reward R is exactly minimizing the loss L = -R. There is no distinction between carrot and stick.
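The equivalence is mechanical, and a toy example (my own, with an arbitrary one-parameter "network") makes it explicit: gradient descent on a loss and gradient ascent on the negated loss produce bit-identical parameter trajectories.

```python
# Toy one-parameter "network": y = w * x.
# Loss to minimize:  L(w) = (w*x - target)^2
# Reward to maximize: R(w) = -L(w)
x, target, lr = 2.0, 6.0, 0.05

def grad_loss(w):
    # dL/dw = 2 * (w*x - target) * x
    return 2 * (w * x - target) * x

w_min, w_max = 0.0, 0.0
for _ in range(100):
    w_min -= lr * grad_loss(w_min)     # descend the loss ("stick")
    w_max += lr * (-grad_loss(w_max))  # ascend the reward R = -L ("carrot")

assert w_min == w_max  # identical updates, identical trajectories
print(w_min)           # both converge to target / x = 3.0
```

The two update rules differ only by two sign flips that cancel, which is why the carrot/stick framing has no purchase here.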
Here's a description that I find more satisfactory: see Age of Gen.