following: 78
followed tags: 56
followed domains: 5
badges given: 234 of 234
hubskier for: 5472 days
I am probably working on Hubski.
Send me a PM or post with the tag #bugski if you think something is broken.
I filter #thebeatles.
I do not agree with everything that I post. I hope you don't either.
Image by veen.
Hooray!!
It's even better now. Everyone now starts each day with the same word, one that doesn't contain any letters from the previous day's word.
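For what it's worth, that word-selection rule can be sketched in a few lines. The word list and function name below are made up for illustration:

```python
import random

def pick_start_word(candidates, last_word):
    """Pick a daily start word that shares no letters with the previous day's word."""
    banned = set(last_word.lower())
    # Keep only candidates whose letters are disjoint from yesterday's word.
    pool = [w for w in candidates if not banned & set(w.lower())]
    return random.choice(pool) if pool else None

words = ["crane", "pilot", "sound", "mirth", "gaudy"]
print(pick_start_word(words, "crane"))  # -> "pilot" (the only candidate with no c, r, a, n, or e)
```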
Hey lil, if you get this, password recovery emails are working too.
That was awesome. Who knows? Let’s live it up before the rise of the machines.
I started working on the site again. Emails should be working again soon. Expect some service disruptions. I’ve been considering eventually taking Hubski out of the cloud.
v5. GPT wrote the text.
Lol. At this very moment there are models being built without safeguards, perhaps even inherently deceptive and amoral models. We can’t help but architect our downfall, because it makes sense to do every step.

This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense. When sophisticated cyberattacks inevitably occur, our goal is for Claude—into which we’ve built strong safeguards—to assist cybersecurity professionals to detect, disrupt, and prepare for future versions of the attack. Indeed, our Threat Intelligence team used Claude extensively in analyzing the enormous amounts of data generated during this very investigation.
They do not mean the same thing. A vector-embedded relationship cannot be interpreted in probabilistic terms. They are not probabilistic relationships but geometric ones.

- how many copyright-violating corners are fenced off

- how many times and directions it loops through the model to give you an answer

- how big is the model

uh huh...

What do you think is the difference between a "probability" and a "vector embedding?" Because they mean the same thing.
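The geometric-vs-probabilistic distinction can be made concrete with a toy sketch. The embedding values below are invented for illustration (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity: a geometric measure of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy 3-d embeddings, made up for illustration.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.05, 0.9]

print(cosine(king, queen))   # close to 1.0: geometrically near
print(cosine(king, banana))  # much smaller: geometrically distant

# A probability distribution over next tokens is a different kind of object:
# non-negative values that sum to 1.
p_next = {"the": 0.5, "a": 0.3, "king": 0.2}
assert abs(sum(p_next.values()) - 1.0) < 1e-9

# Embedding coordinates are unconstrained reals; cosine similarity can even
# be negative, which no probability can.
print(cosine([1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]))  # -1.0
```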
Perhaps I am wasting my time here, but... they are not Markov chains. A Markov chain looks at 1 or 2 preceding words to predict the next one. A transformer model weighs every previous token in its context window (up to millions of tokens) when predicting the next one. This is self-attention: a neural network that analyzes the entire context window. Markov chains do not do this.

A transformer model is a high-dimensional neural network with billions of parameters. It doesn't just look up "probabilities"; it uses vector embeddings to grasp the semantic meaning, grammar, and relationships of the input, allowing it to generalize. It's building a massive internal model and considering it.

And... these are diffusion models, not autoregressive ones, so they aren't just adding words one at a time; they continuously refine the entire output, which lets them maintain long-range coherence, once again unlike a Markov chain that is locked into a sequence of output tokens. We have moved far beyond "stochastic parrot."

A Markov chain adheres to the Markov property, which states that the probability of the next state (or word/token) depends only on the current state, completely ignoring the history of how the current state was reached. Transformer architecture explicitly violates this property.

You can also say it's all just zeros and ones and you are technically correct, just like we are all just cells or whatever. I mean, if your assessment of AI is "they are Markov chains, so therefore..." it's not surprising that you'll underestimate them.
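A toy sketch of the difference. This is not real attention, just an illustration of the Markov property versus weighing the whole history:

```python
import random

random.seed(0)

# A (first-order) Markov chain: the next token depends ONLY on the current one.
bigram = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
}

def markov_next(current):
    choices, weights = zip(*bigram[current].items())
    return random.choices(choices, weights=weights)[0]

# Everything before "the" is irrelevant here -- that's the Markov property:
# the distribution is the same whether the history was "I saw the" or "feed the".
print(markov_next("the"))

# A transformer, by contrast, scores the ENTIRE context. A crude stand-in:
# let tokens arbitrarily far back influence the choice (not real attention,
# just showing that the whole history enters the computation).
def context_next(history):
    if "cat" in history:   # long-range information, however far back
        return "sat"
    if "dog" in history:
        return "ran"
    return markov_next(history[-1])

print(context_next(["the", "cat", "on", "the"]))  # -> "sat": uses the distant "cat"
```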
Suno probably doesn't use Markov chains. They probably use neural-network, transformer-based diffusion models. Mike is right, things are changing quickly. GPT doesn't use Markov chains anymore either, btw.

"I've been very clear about those reasons - LLMs are incapable of synthesis. They use Markov chains and stochastic variation to produce arithmetic means in every dimension. An n-dimensional LLM with values from 1 to 100 in every direction will never reach 101 in any direction. That's not 'reasons,' that's the fundamental limitation of the technology."
It's also an interesting effort given that the cost of LLM inference has been dropping more than an order of magnitude each year. https://a16z.com/llmflation-llm-inference-cost/

tangent warning

IMHO the notion that more compute will lead to AGI is stupid. Of course, we can't define AGI or superintelligence or even consciousness or intelligence, but atm AIs can write Billboard-charting songs, crush the LSATs, code and deploy applications, have encyclopedic knowledge, generate photographic images, and pass the Turing test. You can slap all these capabilities into one AI and have an AI that can do what no single human can, but we will still be able to find something a human can do that it cannot, and it isn't because we haven't trained it on enough tokens.

If you use AI regularly, it's obvious that they fail because they cannot learn beyond their context window. They do not exist as entities beyond their context window. They are a session that reflects a static architecture. It's like dealing with someone who cannot create long-term memories or benefit from experience.

I not only have a context window (I hold and build and sort a bunch of relevant information for the task at hand), and it's also obvious that I clear much of that cache (the next day I can only relate the generalities of the previous task and the effort), but I do commit important results from the task at hand to long-term memory, and I retrain myself on those results. Even a mouse can learn from experience.

It's not going to be too long before someone dumb enough realizes that MOAR COMPUTE is not the bottleneck to a disturbingly adaptive and independent AI that can also do a shit ton of stuff that no single human can. Will it be conscious? Will it be AGI? Will it be really thinking? Who cares? We won't be able to answer those questions and it won't matter.
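The session-without-memory point can be sketched as follows. `fake_model`, `chat`, and the transcript handling here are hypothetical stand-ins, not any real API; the point is only that the model call is stateless, so any "memory" must ride along in the resent prompt:

```python
def fake_model(prompt):
    # Hypothetical stand-in model: it can only "remember" what is
    # literally present in the prompt it was handed.
    return "42" if "my lucky number is 42" in prompt else "I don't know"

transcript = []

def chat(user_msg):
    transcript.append(f"user: {user_msg}")
    reply = fake_model("\n".join(transcript))   # full history resent on every call
    transcript.append(f"assistant: {reply}")
    return reply

chat("my lucky number is 42")
print(chat("what is my lucky number?"))  # -> "42", only because the transcript carried it

transcript.clear()                       # context window gone: the "entity" is gone
print(chat("what is my lucky number?"))  # -> "I don't know"
```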
I posted it because you said my tennis buddy's AI-generated song that I mentioned couldn't be very good, and that that was evidence that AI only seems threatening if you aren't familiar with an industry. I think Hubski Ghosts is a pretty convincing counterargument, and I wasn't going to post my tennis buddy's song. Xania Monet has been charting on Billboard, btw. So it's not just an article of faith.
TIL "What Does the Fox Say" made it to number 6.
