I doubt any LLM can possess human-like consciousness. LLMs can produce conscious-seeming responses, but each response is just a transient execution, and their ability to adapt and learn is limited, so obviously much is lacking. That said, it seems weird to me that people extrapolate these limitations of LLMs to AI in general. Hofstadter doesn't: the ability to detect context on multiple levels and formulate subtle, germane responses is a big deal. It does not amount to human-like consciousness, but it is likely a key building block for it.

It's fucking wondrous what LLMs can do, and it doesn't take much imagination to consider what numerous LLMs working together with multiple sensors, adaptive memory, and continuous runtime might produce. You can already see it happening with reinforcement learning in robotics.

It reminds me of the crypto discussions from several years ago: "Bitcoin is useless. It's a Ponzi. It's not backed by anything. Crypto is useless." Yet some people noticed that digital scarcity was a major technological leap, and understood that it was only a matter of time...