I was commenting at CSA the other day when Tige Gibson responded to commenters who mentioned the problem of induction:
Shady thinkers who pat themselves and each other on the back drive away people who would be interested in more interesting contributions. Some of us might faintly hope that these people actually learn something, but when even a pointed statement (I mean how can I be the first one to point out solipsism) doesn’t wake them up to their own error, there is little hope of this conversation becoming elevated. Solipsism is a concept that is not merely absurd because of its own definition, but because it justifies inserting absolutely anything in place of reality, since you have purposely sabotaged your ability to know anything with any certainty.
How do insecure fundamentalists reply when their dogmas are questioned? Why, I think it’s safe to say they often respond exactly as Tige Gibson and John W. Loftus do: with some combination of misconstruing the argument, insulting their interlocutor, and acting as if they’ve been attacked. But that is beside my point.
I’ve been thinking about AI for the past few days, and I find the following questions interesting:
1) If Ian Pearson is correct and we are able to download human consciousness onto machines by 2050, wouldn’t this effectively prove that consciousness can exist outside a human brain, i.e., that some type of mind-body dualism is correct?
2) This is more of a technical question, but what, exactly, would we be downloading? The original, so to speak? A replica? A set of algorithms that recreates the original?
3) Could we falsify the claim that any given machine is conscious?
4) Wouldn’t the consciousness of any given machine have to be assumed, in the same way we assume the existence of other minds?