Something that's still not clear to me: what even is consciousness? It references Searle's Chinese Room thought experiment:
> Suppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.
But what makes a human mind any more "understanding"? Who says we're not simulating? Who says our minds even exist in this space?
We're also a neural network; are we any more clever than a simulated one?
There were discussions on each of the chapters: https://news.ycombinator.com/from?site=aphyr.com
Tangent - does anyone immediately recognise how this was typeset? I’m guessing it’s some kind of pandoc output?
I read the original chapters online but appreciate this format.
It's in the file metadata:
- LuaTeX-1.17.0
- LaTeX via pandoc