Seriously, though... bfv-lambda is a Markov chain bot, correct? Those are good for Cleverbot-style stuff, but not actually useful. ELIZA was not a Markov bot, but it was better at conversation (if less creative). This bot here... I guess I want a mile-high view of the architecture, why the different approaches are treated as exclusive, and why they aren't blended more. Also, Jameson's. Greetings from SeaTac Airport.
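For concreteness, a Markov chain bot boils down to something like this toy bigram sketch (an illustration of the general technique, not necessarily how bfv-lambda works): learn which word follows which, then generate text by random walk.

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a bigram chain: word -> list of observed next words."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=8):
    """Random-walk the chain from a start word; stop at a dead end."""
    word, out = start, [start]
    for _ in range(length - 1):
        nexts = chain.get(word)
        if not nexts:
            break
        word = random.choice(nexts)
        out.append(word)
    return " ".join(out)

chain = train("the cat sat on the mat and the cat ran")
print(babble(chain, "the"))
```

The output is locally plausible but has no memory beyond one word, which is why these bots feel clever in short bursts and fall apart over a conversation.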
I don't actually know, but I imagine this is an expert system. These generally work along the same lines as ELIZA did: matching input against a set of pattern -> action pairs. You try to encode the rules of thumb an expert in some field uses, on the assumption that an expert doesn't reason from first principles but by "this is like this other thing I've seen before; let's see if the same solution applies." Expert systems were the basis for the first AI boom and fell out of fashion when it went bust. Most of the historical examples every AI student studies had conversational chatbot-style interfaces, and Peter Norvig's AI programming book actually starts with an ELIZA clone and extends it into an expert system.

There are a few reasons you don't see them much anymore:

* Liability. The big successes were mostly medical, and there are legal problems I don't entirely understand with software dispensing medical advice.
* Disappointment from the first AI boom, which was all about expert systems. They got too much hype and didn't deliver on it.
* Relatedly, it's old school, and it's hard to get investment for something that's not the new hotness. Someone will launch a very successful product and bring it back eventually, and we'll all act like things every undergraduate learns about are exciting and new, because we're the tech industry and that's how we do.
* Interviewing experts and figuring out how to encode their rules of thumb in a tractable set of rules is a long, tedious, frustrating, expensive process, I say having done it a few times. What you get is awesome, because it can both give you an answer and explain how it got that answer. But it's usually easier to rephrase the problem into something you can throw a learning algorithm at, get a good-enough solution, and ignore the problems where you can't.
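The pattern -> action idea is easy to show in miniature. Here's a hedged sketch of an ELIZA-style rule engine: an ordered list of (pattern, response) pairs, first match wins, with a catch-all at the end. The specific rules are made up for illustration, not taken from any real system.

```python
import re

# Ordered pattern -> action pairs; earlier rules take priority.
# {0} in a response is filled with the first captured group.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"(.*) mother (.*)", "Tell me more about your family."),
    (r"(.*)", "Please go on."),  # fallback: always matches
]

def respond(text):
    """Return the action of the first rule whose pattern matches."""
    for pattern, template in RULES:
        m = re.match(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())

print(respond("I need a vacation"))  # -> Why do you need a vacation?
```

An expert system is essentially this, scaled up: the patterns encode an expert's "this looks like X" triggers, the actions carry conclusions or follow-up questions, and because the matched rule is explicit, the system can explain *why* it answered as it did.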