How I learned to start worrying and stop the AI marketing

Advocating for the use of AI as a teacher is dangerously misguided

In his most recent blog post "How I learned to stop worrying and love the LLM", Niko Matsakis boldly proclaims:

AI’s ability to explain and diagnose problems with rich context can help people get over the initial bump of learning Rust in a way that canned diagnostics never could, no matter how hard we try.

It's time to call a spade a spade: this is a lie. Let's see what state-of-the-art, so-called AI has to say about Rust fundamentals:

Sent by you: when you clone a Box in rust, does the data on the heap get replicated?

Sent by Copilot: When you clone a Box in Rust, the data on the heap does not get replicated. Instead, the Box itself is cloned, meaning a new Box is created that points to the same data on the heap as the original Box. This is because Box is a smart pointer, and cloning it only duplicates the pointer, not the data it points to. If you need to clone the data on the heap as well, you would need to implement the Clone trait for the data type and then explicitly clone the data inside the Box. Does that help clarify things?

In case you were wondering: this response is completely wrong. Cloning a Box clones its contents: Box<T> only implements Clone when T itself does, and cloning produces a fresh heap allocation holding a copy of the data.
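You don't have to take my word for it - a few lines settle the question. A minimal sketch (Box<Vec<i32>> is just an arbitrary heap-owning type, chosen so the allocation is easy to observe):

```rust
fn main() {
    let a = Box::new(vec![1, 2, 3]);
    let mut b = a.clone(); // compiles only because Vec<i32>: Clone

    // The clone owns a distinct heap allocation, not a shared pointer.
    assert_ne!(&*a as *const Vec<i32>, &*b as *const Vec<i32>);

    // Mutating the clone leaves the original untouched: the data was replicated.
    b.push(4);
    assert_eq!(*a, vec![1, 2, 3]);
    assert_eq!(*b, vec![1, 2, 3, 4]);
}
```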

Like the cryptocurrency grift that came before it, the AI hype machine dangles a golden carrot in front of us, promising an unbearably bright and shiny future that's just around the corner: we're almost there, you just gotta believe and throw more money or other resources at the idea!

Strangely, this future never arrives. We've tried this before (I also recommend reading the follow-up discussion; it's long and infuriating, but worth it). Niko himself more or less admits:

AI-based code assistants right now don’t really have this property. I’ve noticed that I kind of have to pick between “shallow but correct” or “deep but hallucinating”.

... but this also contains another lie. Specifically: "shallow but correct" isn't a thing for LLMs. The technology fundamentally cannot do anything except hallucinate, because it operates purely on tokens, which carry no inherent meaning whatsoever. An LLM has zero idea what facts even are, whether it made a mistake, how to correct said mistake, or, for that matter, what a citation is - as the exchange above demonstrates.

Rust’s emphasis on “reliability” makes it a great target for AI

No, it doesn't! LLMs are optimized for plausibility, not correctness (they can't ever be correct, as argued above), and successfully smuggling something past rustc merely means you aren't violating what the language dictates. It emphatically does not mean your program is correct. A language and ecosystem whose headline goals include mission-critical safety and reliability, of all things, should never be sabotaged by a text extruder trained merely to sound convincing.
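To make the distinction concrete, here is a contrived sketch (the function and values are made up for illustration): rustc accepts every line of it without complaint, and the logic is still broken.

```rust
// Compiles cleanly - and is still wrong. Integer division truncates,
// and an empty slice causes a division-by-zero panic at runtime.
fn average(xs: &[i64]) -> i64 {
    xs.iter().sum::<i64>() / xs.len() as i64
}

fn main() {
    assert_eq!(average(&[1, 2]), 1); // the "average" of 1 and 2 is... 1
    // average(&[]) type-checks just as happily, then panics at runtime.
}
```

Passing the borrow checker rules out whole classes of memory bugs; it says nothing about whether the program computes what you meant.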

Let's not be deluded by people with skin in the game (Niko works for Amazon). The emperor has no clothes - or clues.