I doubt any LLM can possess human-like consciousness. LLMs can produce conscious-seeming responses, but they are just transient executions with a limited ability to adapt and learn, so obviously much is lacking. That said, it seems weird to me that people extrapolate these limitations of LLMs to AI in general. Hofstadter doesn't: the ability to detect context on multiple levels and formulate subtle and germane responses is a big deal. It does not amount to human-like consciousness, but it is likely a key building block for it. It's fucking wondrous what LLMs can do, and it doesn't take much imagination to consider what numerous LLMs working together with multiple sensors, adaptive memory, and continuous runtime might produce. You can already see it happening with reinforcement learning in robotics. It reminds me of the crypto discussions from several years ago. "Bitcoin is useless. It's a Ponzi. It's not backed by anything. Crypto is useless." Yet some people noticed that digital scarcity was a major technological leap, and understood that it was only a matter of time...
Looking back on that TEN-YEAR-OLD THREAD, it is only just now coming to me that our entire posture around AI is based on a fuckin' homeschooled dipshit with a blog who couldn't even recognize Harlan Ellison's most famous story.
I'm willing to bet I follow this stuff as closely as you do, or closer, and I have yet to see anyone with the knowledge to explain an LLM extrapolate to "AI in general." The basic issue is that LLMs have become AI in all but the most academic of discussions, and in those discussions the argument is invariably that AI, as the researchers know it, cannot meaningfully move forward until society moves past AI as the boosters know it. There's nothing weird about it - From. The. DROP every dipshit in any LLM-adjacent field has been screaming about the singularity while the press thoughtfully strokes its collective chin about how Saltman wants a trillion dollars so he can invent fusion so he can feed the maw of his magical AGI machine. Meanwhile marketing has pushed Nvidia to a four-trillion-dollar market cap and news organizations write serious articles about how Meta's only hope is to poach guys from the people who brought you Siri.

Hofstadter says four things in that article:

1) Maybe LLMs will stop making so many mistakes soon (they haven't)
2) Maybe the human brain isn't such a privileged organ (it isn't)
3) Maybe machines are already more intelligent than humans (they aren't)
4) T1000 with a shotgun (AYFKM)

Here's the most useful bit in that interview: "which is to make rigid systems act fluid." Deep Blue? Brute force. AlphaGo? Brute force. Every single LLM? Brute force. There's no "acting fluid" to any of it; "acting fluid" means "let the machine pick its training data," and that is absolutely positively one hundred percent totally not happening. None of that shit is happening. There's no contextual detection what-so-fucking-ever; take it from a watchmaker: the fact that every single timepiece an AI cooks up shows 10:10 illustrates that the people patchworking their responses don't give a fuck about watches, so that's where the gadgets fall down. Where they're good?
Is where people who want to believe in this crap want to take it, and they're rapidly running out of credibility. Douglas Hofstadter is an 80-year-old computer scientist whose last contribution was in 2007. My father is an 84-year-old computer scientist and he lost the script at Windows 7. Yet both of them can tell when a fishing lure is a fishing lure. As far as crypto goes, whatever utility it has is busily being buried under schemes and scams as everyone with any ability to actually move the ball forward has fucked off to their own weird little social network while the rest of the world launches four thousand shitcoins a day on Solana, much as "takes your 11th-grade multiple-choice exams for you" has taken over from "solves problems that aren't pre-existing."

"Computers were very rigid and I grew up with a certain feeling about what computers can or cannot do. And I thought that artificial intelligence, when I heard about it, was a very fascinating goal, which is to make rigid systems act fluid. But to me, that was a very long, remote goal. It seemed infinitely far away. It felt as if artificial intelligence was the art of trying to make very rigid systems behave as if they were fluid. And I felt that would take enormous amounts of time. I felt it would be hundreds of years before anything even remotely like a human mind would be asymptotically approaching the level of the human mind, but from beneath."
The ability to detect context on multiple levels and formulate subtle and germane responses is a big deal.
As far as crypto goes, whatever utility it has is busily being buried under schemes and scams as everyone with any ability to actually move the ball forward has fucked off to their own weird little social network while the rest of the world launches four thousand shitcoins a day on Solana, much as "takes your 11th-grade multiple-choice exams for you" has taken over from "solves problems that aren't pre-existing."
That's an assertion based on a wish, not a proof. The purpose of currency is to pay for the state monopoly on violence. You want police? Police need to get paid. The people who pay them need to rely on their fealty, which requires a stable rate of exchange. Extend to all public works. There's a whole book about this. Governments exist to protect commerce, and that protection has currency issuance as a prerequisite. Full stop. Been that way since Sumer.

The purpose of a stablecoin is to streamline the onramp between "currency" and "something in desperate need of stabilization." Stabilization has traditionally been done by large state- or state-sanctioned banks, and it's a system that has worked admirably well since well before the Federal Reserve. A stablecoin, on the other hand, puts Jeremy Allaire and his shareholders in the shoes of JP Morgan for no reason other than that they lobbied for it.

The way you stabilize a commodity in heavy flux is by buying it all up and then selling it slowly. This is what central banks do. This is why they have played an integral role, in one form or another, since the invention of fractional reserve banking in the Renaissance. If stablecoins were any fucking good they wouldn't need legislation, and you know it.
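The buy-it-all-up-and-sell-it-slowly mechanism can be sketched as a toy simulation (my own illustration, not from the thread; the target price, band width, and correction strength are all made-up numbers): a stabilizer that accumulates a stockpile when the price drops below a target band and sells from that stockpile when it rises above, leaning the price back toward the band each time.

```python
import random
import statistics

def simulate(stabilize: bool, steps: int = 2000, seed: int = 42) -> list:
    """Random-walk price, optionally leaned on by a buyer of last resort."""
    rng = random.Random(seed)
    price = 100.0
    target, band = 100.0, 5.0
    reserve = 100.0        # stockpile the stabilizer "bought up" beforehand
    history = []
    for _ in range(steps):
        price += rng.gauss(0, 1.0)               # exogenous market shock
        if stabilize and price < target - band:
            reserve += 1.0                       # buy the excess supply...
            price += 0.2 * (target - price)      # ...pushing the price back up
        elif stabilize and price > target + band and reserve > 0:
            reserve -= 1.0                       # sell slowly from the stockpile...
            price += 0.2 * (target - price)      # ...easing the price back down
        history.append(price)
    return history

# Averaged over a few seeds, the managed path wanders far less than the free one.
raw = [statistics.pstdev(simulate(False, seed=s)) for s in range(5)]
managed = [statistics.pstdev(simulate(True, seed=s)) for s in range(5)]
print(f"free walk: {sum(raw)/5:.1f}  stabilized: {sum(managed)/5:.1f}")
```

The free walk drifts wherever the shocks take it; the stabilized one stays pinned near the band, which is the whole trick, and the reserve bookkeeping is why it only works if you bought the float up first.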