- Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”
- The ideal of college as a place of intellectual growth, where students engage with deep, profound ideas, was gone long before ChatGPT. The combination of high costs and a winner-takes-all economy had already made it feel transactional, a means to an end.
big oof energy
Okay. I was going to write one thing. I probably still will. But this made me realize something, so I'm going to write that first. Because it was a "my god it's full of stars" moment.

The reason there are so many of these articles is journalists recognize that AI will expose them as the fucking frauds they are.

Oops, wrong tone. Hang on a minute.

If my deep and abiding hatred for liberal arts education and its practitioners is unclear, here's a refresher. The TL;DR on that is "it's all about the unpaid internships and everyone knows it," but in case that's unclear lemme draw out a couple beats on Our Hero Chungin "Roy" Lee:

- Lee was born in South Korea and grew up outside Atlanta, where his parents run a college-prep consulting business. He said he was admitted to Harvard early in his senior year of high school, but the university rescinded its offer after he was suspended for sneaking out during an overnight field trip before graduation. A year later, he applied to 26 schools; he didn’t get into any of them. So he spent the next year at a community college, before transferring to Columbia.

It takes a special kind of NYMag article to start with that guy and roll into "zomfg our students will never learn Keats" or some shit, but let's check back on Roy a couple hundred words later:

- Interview Coder’s website featured a banner that read F_CK LEETCODE. Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.) A month later, Lee was called into Columbia’s academic-integrity office. The school put him on disciplinary probation after a committee found him guilty of “advertising a link to a cheating tool” and “providing students with the knowledge to access this tool and use it how they see fit,” according to the committee’s report.

Note that Columbia disciplined him for coming up with an app that helped people cheat on job interviews - nothing to do with college. Hey, what's Roy up to now?

- Before launching Cluely, Lee and Shanmugam raised $5.3 million from investors, which allowed them to hire two coders, friends Lee met in community college (no job interviews or LeetCode riddles were necessary), and move to San Francisco. When we spoke a few days after Cluely’s launch, Lee was at his Realtor’s office and about to get the keys to his new workspace. He was running Cluely on his computer as we spoke. While Cluely can’t yet deliver real-time answers through people’s glasses, the idea is that someday soon it’ll run on a wearable device, seeing, hearing, and reacting to everything in your environment. “Then, eventually, it’s just in your brain,” Lee said matter-of-factly.

This is Roy, BTW, in case you were unsure.

_____________________________________________________

If I had to guess, this started as an article titled something like "college is a fucking sham" and then the editorial board lost their minds and it turned into "Pearls Clutched; Are Our Children Learning." Because really, what it says is "college is a fucking sham, as finally revealed to one and all via ChatGPT."

For my entire goddamn life there's been this hand-wringing "we must teach them the cultuuuur" aspect to education, which is entirely about liberal arts majors justifying their degrees.

Does anyone else think it's really fucking funny that we created an academic culture so heavily reliant on essays that one in five students has a learning-disability diagnosis on file for that sweet, sweet extra test and assignment time?

My sister is working on a teaching certificate right now (until tomorrow, anyway, when she'll likely withdraw because she caught my mother's C. diff). It's some dumb bullshit University of Phoenix thing where she was super-offended that her first homework assignment was poorly graded because of the grammar and spelling (no notes on ideas or concepts whatsoever). I told her to take her essay and have ChatGPT grade it, and once she corrected it, it got a 100%. If both sides of the divide are using AI, what the fuck is the point?

I've used radar detectors as an analogy for arms races for going on 40 years. First you had cops and you had speeders. Then the cops started using radar. Then the speeders started using radar detectors. Then the cops started using lasers. Then the speeders started using laser detectors. Now you've got your speed trap reported on Waze. It's an arms race. Here's the funny thing, though: the NHTSA knew radar detectors improved traffic safety back in 1988. The point wasn't safety, though. The point was ticket revenue. So... arms race.

I'm fucking old. I'm so fucking old that I had to deal with "ZOMFG do we let the kids use graphing calculators in Algebra" and then, four years later, "ZOMFG do we let the kids use calculators on the SAT." 30 years later, fucking of course you do. Because learning how to use a calculator isn't learning how to do math, it's learning how to do computation, and the difference between learning to use a calculator, learning to use a trig table, and learning to use a slide rule isn't "did you learn," it's "what's your source of error."

"What's your source of error" on liberal arts bullshit has always been a joke. I used to play with my teachers like a cat with a mouse.
I'd inject logical fallacies to see if they caught them. I'd use metaphors that undercut my point to see if they'd notice. They never did. They weren't grading on whether or not I learned the material, they were grading on whether I could vomit up a five-paragraph essay.

Which have always been mad-libs, by the way. I taught my kid how to vomit up a five-paragraph essay when she was eight years old. It's protective camo. If you say abject fucking nonsense with decent grammar and spelling, there isn't a TA or teacher in the world who won't give you a decent grade, because they're victims of this structure where they have to grade a hundred five-paragraph essays a month. Hey, pearl-clutching NYMag, got anything to say about that?

Protective coloration, sure. I can write in a bunch of different registers. My wife has handwriting in different fonts. Right - if you need the anecdotes, you need to tell the robot. And nothing of value was lost, because not a single five-paragraph essay ever written mattered fuckall even six weeks later, but academia has been clinging to them for a hundred years anyway.

If you can't say anything you fucking want in a five-paragraph essay intended for an overworked, underpaid TA? You have a learning disability. Go get your doctor's note and another couple hours. Or, I dunno, feed it to Saltman. It's fucking pointless anyway; liberal arts grading has been where knowledge goes to die.

I did an engineering education at two schools. One of them was good; the other one was the #4 undergrad program in the world according to US News at the time. At the good school we were allowed a single page of hand-written notes for all quizzes and exams. It became about density, and about selection, and about prediction - what formulae are you likely to need? Easily 60% of our studying was about assembling that tool over and over and over again so that we could walk into class and bang out a decent answer with nothing but a TI-85 and a pencil. At the world-beater we slammed that shit into Excel's "Solver" and TAs literally weighed our Finite Element Analysis reports. I had to have mine regraded because rather than kill five reams of paper I'd do two and then write "etc," so the TA judged my report to be an F even though the answers were right. Yep, Denton's Folly (see above note).

I don't make much of this, but I've been interviewed by the Wall Street Journal. I've been interviewed by The Atlantic. I've been interviewed by The Daily Beast. Every article was deeply disappointing, because even among these stalwart organizations, they're all fucking phoning it in. I have no Gell-Mann Amnesia because I've had enough personal experience with journalism to know that the only journalists worth bothering with are the ones who actually go visit what they're reporting on, and those are few and far between.

Why is journalism failing? Because most of it is pointless. The entire fucking industry was propped up by classified ads, which is why the existential threat to journalism was never Google or Facebook or whatever - it was fucking Craigslist. The existential threat to academia isn't AI, it's the world discovering the dilution driving enrollment.

You wanna solve AI cheating in college? Here, walk with me, it's fucking easy:

1) Assign reading outside of class.

2) Give over half of class time to small-group discussion.

3) Give over the other half to closed-note, no-technology short-answer quizzes.

Have the kids show up with a pencil and paper and demonstrate their knowledge of the subject matter. No more fucking essays.
FUCK ESSAYS. I say that as a dipshit with a novel, three graphic novels, two optioned screenplays and a history of being repped at three different marquee agencies - FUCK ESSAYS. They show that you're good at writing essays, not that you know what you're talking about, and the fact that the liberal arts have leaned on this shit for a hundred years is why the liberal arts have such a self-inflated regard for themselves.

Liberal arts journalism: "ZOMFG AI is destroying knowledge as we know it."

Finance journalism: "ZOMFG kids are making $60k a year out of high school because they took wood shop."

- Castro took a $50,000 pay cut when he left his job as an automotive technician in 2015. He said he was inspired by his mother, also a teacher. The district has since adjusted its salary formula to reflect industry experience. Castro now makes $100,000 a year, matching his former income. “It’s the best job I’ve ever had,” he said, helping launch young adults into well-paying careers and having his summers free.

Hey, University of Phoenix, you got anything to say about this that aged really, really well?

Let's get back to my buddy Roy. Getting into Harvard was likely the biggest event in his parents' life. He "got into Harvard," which means his parents bought their way in. Any idea why, Roy?

Every. Single. One. of these articles is about how students are betraying the hallowed glories of an undergraduate education without the barest acknowledgment of what a naked scam an undergraduate education has been for a generation or more. Journalists in particular are clutching their pearls about this because while nobody can agree about what flavor of bullshit their career is, they all agree it's bullshit.

I'll coin a rule of thumb: if you're worried about AI coming for your job, you should be. If you aren't, it might not be because you're a feckless moron who doesn't know when to listen to journalists; it's maybe because you actually have some expertise.
Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.”
Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot.
Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”
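Those three steps are, for what it's worth, about ten lines of code. A minimal sketch, assuming the OpenAI Python client and a hypothetical model choice; the quoted prompt text is Wendy's, the placeholders (course background, assignment) are mine to fill:

```python
# Wendy's three-step routine, scripted. Assumes `pip install openai` and an
# OPENAI_API_KEY in the environment; model name and placeholders are mine.
from openai import OpenAI

client = OpenAI()

# Step one: claim to be a first-year student so the style stays plausible.
persona = "I'm a first-year college student. I'm taking this English class."

# Step two: background on the class, then the professor's instructions verbatim.
background = "Background on the class goes here."        # placeholder
assignment = "Professor's copy-and-pasted prompt here."  # placeholder

# Step three: ask for structure, not finished prose.
request = (
    "According to the prompt, can you please provide me an outline or an "
    "organization to give me a structure so that I can follow and write my essay?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice
    messages=[{"role": "user",
               "content": "\n\n".join([persona, background, assignment, request])}],
)
print(response.choices[0].message.content)  # outline, intro, topic sentences, paragraphs 1-3
```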
It’ll be years before we can fully account for what all of this is doing to students’ brains. Some early research shows that when students off-load cognitive duties onto chatbots, their capacity for memory, problem-solving, and creativity could suffer. Multiple studies published within the past year have linked AI usage with a deterioration in critical-thinking skills; one found the effect to be more pronounced in younger participants. In February, Microsoft and Carnegie Mellon University published a study that found a person’s confidence in generative AI correlates with reduced critical-thinking effort. The net effect seems, if not quite Wall-E, at least a dramatic reorganization of a person’s efforts and abilities, away from high-effort inquiry and fact-gathering and toward integration and verification.
A teenager can make $20 an hour as a welder’s helper after graduating from high school with technical-education classes, Hughes said. Another year of welding instruction at a community college can boost pay to $60,000 a year for pipeline jobs in Bakersfield-area oil fields. Even with the expansion of the district’s vocational classes, student demand outpaces available seats. Last school year, 6,200 students applied for 2,500 spots at the two vocational campuses. The wait-list for auto shop is 300 students, said Fernando Castro, one of the instructors.
When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”
I think my thoughts are coalescing into "AI as an imprecise patternmatcher." An analogy that works for me is that AI will happily paint by numbers, and can do a decent job of it if the number of colours and shapes to paint remains low enough. It sees a pattern, it grabs its paint and goes to town. It's often imprecise - it might not always have good hand-eye coordination (model size), it might cross lines here and there (hallucinate), it might need something to keep its hands steadier (grounding). You can try and wring more out of it? And it will, from a distance, look good. Might even be hung on a wall and make for a lovelier room. But any painter with an eye for detail will look right through it. And if it's too imprecise, non-experts will notice, too (slop).

I think the current AI boom is resting entirely on finding ways to reduce the imprecision through more patternmatching. Now there have been a lot of strides made, but there are fundamental limits to the nails we can hit with the transformer-hammer, and I feel like we need fundamental discoveries if we want AI models to actually develop a world model, to actually reason instead of pattern-matching what reasoning looks like in an imprecise, roughly-good direction. I mean, did you see what Apple concluded?

- We hypothesize that this decline is because current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data. Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models, even though the clause doesn't contribute to the reasoning chain needed for the final answer. Overall, our work offers a more nuanced understanding of LLMs' capabilities and limitations in mathematical reasoning.

"Nuanced understanding"? You mean it sucks at reasoning. Just say it sucks.

Now, if your job is just painting by the numbers? I'd be worried, yeah. I mean, let's be real, how many average people just clock in and out of their average jobby job? Answering emails, sitting in meetings, writing a document here or there, all with enough corpospeak and jargon and shibboleths thrown in to make it obtuse to normal people. How much of that isn't pattern matching, really? I recently started tracking my time again, basically divided into "deep work" (expertise), "email" and "meetings" (patternmatching, basically). The former is only 45% of my entire week.

You mentioned Graeber before, but I'm not sure if that's fair, because the patternmatcher doesn't care if it's matchin' patterns for the next TPS report or at a charity to cure rare cancers. "We've always done it that way" tasks and jobs now have their head put under the guillotine, and they are not few or far between, I think. Journalism... yeah, not looking great.
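That "single clause" finding is easy to make concrete. Here's a toy version of the perturbation the paper describes; the problem text is my own example in the style of the paper's GSM-NoOp set, not copied from it:

```python
# Take a problem a model solves reliably, append one clause that sounds
# relevant but contributes nothing to the answer, and watch accuracy drop.

baseline = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
    "How many kiwis does Oliver have?"
)

# The no-op clause: numerically loaded, logically irrelevant.
perturbed = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday, "
    "but five of them are a bit smaller than average. "
    "How many kiwis does Oliver have?"
)

expected = 44 + 58  # 102 either way; smaller kiwis are still kiwis

for prompt in (baseline, perturbed):
    # Feed each variant to whatever chat client you like. A reasoner ignores
    # the extra clause; a pattern-matcher tends to subtract the 5 and say 97,
    # because in its training data, clauses with numbers almost always matter.
    print(f"{prompt}\n  correct answer: {expected}\n")
```

If "five were a bit smaller" reliably knocks points off, the thing was never adding; it was matching.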
One of my "favorite things" about finance - and by "favorite things" I mean "thing that had I known about it as a child would have colored my impression of the moneyed and their pursuits in a decidedly negative way" - is technical analysis. "Well of course, klein!" you say. "You're an annoyingly technical person, of course you love technical analysis." Ahhh but here's the thing - "technical analysis" as espoused by the financiers isn't technical, and it isn't analysis. I read a whole goddamn book on technical analysis just to see if there was anything there. There isn't. Burton Malkiel ran a bunch of tests where he gave TA dipshits a ticker that had been generated by a literal coin flip and then asked them how they thought their "analysis" was doing. Some were happy, some were sad, none clued into the fact that they were using all their tools to scry the behavior of a coin toss. The technical analysts won't even dispute this. They'll argue that technical analysis is so powerful that it can produce false positives and false negatives from random number generators so you'd best try even harder. True practitioners will lock themselves out from all noise sources. Some have even argued that they trade better if they don't know the security they're trading. All that matters is what magic shapes they draw on their graph to determine what the next candle is going to be. That's literally the way Markov chains work. The fundamental basis of LLMs is pattern recognition where the process is actually hindered by too much horizon. They work better if they're only looking ahead a little bit. They don't analyze shit, and they can't. They know that in 100 runs of seven times five, the answer is 35, one hundred times over. But if they need to know what seven hundred times five pi is, they don't have 100 runs. So they get it wrong sometimes. Because they're not doing math. They're looking up values in a table and if there are holes, they're extrapolating over the top of it. I'm willing to bet Apple didn't say "LLMs suck at reasoning, duh" for the same reason they rolled their eyes and coughed up a $3200 nerd helmet - nobody is willing to talk about the emperor's new clothes yet. There is no part of the methodology underlying LLMs that bears even a passing resemblance to reasoning. It's like saying Tesla's Autopilot sucks at conversational Mongolian - Why wouldn't it? Ahhh - but you can set the UI to Mongolian so isn't that conversing? Here's the other part: It's Pareto principle all the way down. Everything OpenAI or any of the other vendors have ever done is a solid B minus. Everything they do is 80% effort. It's not quite a C? But it's super-close. The ouvre of commercial AI is just good enough not to make your parents sign your homework. But for a lot of stuff, that's plenty. I don't need an A-plus meme, I need a B-minus meme NOW. I don't need an A-plus essay, I need a B-minus essay NOW. One of the things about being on set is everyone on set can do 80% of everyone else's job on set. We've all been on set long enough that we know the easy steps. Do something hard? yer fukt. You hire the experts because when you're in a pinch, they know what to do. it takes me 15 seconds to explain how to mix major-market house reality television to any schlub who walks through the door - we used to do it as a party trick. Sure, your daughter can sit at the console. Absolutely Miss Celebrity can throw on some headphones. 
But if things get dicey you'd best get out of the chair quick because I don't even know if I can explain to you how to fix what just happened. Nobody ever asks intelligent questions, they ask the same stupid ones. Except Francis Ford Coppola. He came in and chatted with us (we didn't know who he was at the time, just that the producers were terrified) for a good fifteen minutes and asked some really insightful questions. And yer goddamn right - once I figured out I had been having a lengthy technical conversation with the writer/director of The Conversation I was over the moon. Marketing schmucks? They don't really understand the Pareto Principle. Some of them are geniuses and they know it. Most of them occasionally catch lightning in a bottle, and that keeps them employed long enough to continue to muddle through. So all their ads for AI are about selling the 80% as if it were the 20%. They don't know, they don't understand it, and they can't tell the difference. The guys writing AI? They hope you can't tell the difference. Let's talk paint-by-number because I think it's an interesting analogy. The thing about paint-by-number kits is they're generally bought by people who enjoy painting. Painting by number incrementally builds their skills. You do that enough, you might become an artist. I mean, jingle trucks are basically paint-by-number; tell me these guys aren't skilled. Give 'em a blank spot and they will synthesize. They have their bag'o'tricks for sure but the good ones are novel. It is physically impossible for LLM-based AI to be novel. It can arrive at an original place on the look-up table but it will never step out of bounds. It will muddle through just fine in the 80% but the 81% is luck only. 85% is a fluke. 90% is virtually impossible. I think with academia and with journalism the problem is nobody needs the 80%. Essays have always been an imperfect analog of knowledge. A journalist can research undiscovered facts and can synthesize unformulated opinions. AIs can do neither but since so much of what both students and journalists produce isn't actually within the purvey of what they're supposed to be doing, the AI can do a B-minus level approximation of that make-work. That is recognizably a jingle truck. It's not even obviously AI. You mentioned Graeber before, but I'm not sure if that's fair, because the patternmatcher doesn't care if it's matchin' patterns for the next TPS report or at a charity to cure rare cancers. "We've always done it that way" tasks and jobs now have their head put under the guillotine, and they are not few or far between I think. Journalism...yeah, not looking great.
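Here's that sketch: a word-level Markov chain in twenty-odd lines, every name in it mine. Scale the table up to trillions of tokens and bolt attention onto it and you've got the shape of the thing, holes and all:

```python
# A bare-bones word-level Markov chain: no analysis anywhere, just "given the
# last two words, what usually comes next?" looked up from a table.
import random
from collections import defaultdict

def build_table(text: str, order: int = 2) -> dict:
    """Map each `order`-word context to every word observed following it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def generate(table: dict, order: int = 2, length: int = 30) -> str:
    context = random.choice(list(table))  # start from a random known context
    out = list(context)
    for _ in range(length):
        followers = table.get(tuple(out[-order:]))
        if not followers:
            break  # a hole in the table; this is where an LLM extrapolates instead
        out.append(random.choice(followers))
    return " ".join(out)

corpus = (
    "the cops started using radar so the speeders started using radar detectors "
    "so the cops started using lasers so the speeders started using laser detectors"
)
print(generate(build_table(corpus)))
```

Run it and you get locally plausible cop-and-speeder word salad that goes nowhere globally, which is the point: original spots on the look-up table, never out of bounds.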
I mean, honestly, kids cheat at college because college isn't even about education for most of them. They're cheating because they need the piece of sheepskin, a 3.2 GPA and a couple of really good internships to get a job after college. The time spent in college is going to the same places it always has - resume padding and getting drunk (hopefully with co-ed hanky panky to follow). There are maybe 3-4 kids in the entire lecture hall who give a single shit about the education part of college. Everyone else is there for what comes after college. And honestly, the kids who only want the education are likely to hate life in academia anyway, as they'll make McDonald's money to teach kids who don't want to learn and write papers that they and everyone else know nobody will ever read.

I've never understood the moral panic about whether college kids were getting an education when everyone absolutely knows from the jump that education has never been the point of college for most people. In 1980, most of these kids would learn the life skill of delegating (aka paying off a sucker) to get their English homework written. Chatbots are cheaper.