So I just spent an hour or so trying to convince Cleverbot that bees make a buzzing sound. I just tried asking again and it informed me that bees make a “meh” sound, which I don’t think is the right answer. It’s funny what kind of stuff ends up in the conversation as the discussion becomes increasingly random. After trying to command Cleverbot to learn about bees, it replied with some nonsense and I responded with the equally nonsensical, “sudo or sudon’t, that is the command prompt,” and the conversation continued to devolve from there.
Watching the responses Cleverbot gave me as our dialog progressed, it became clear that Cleverbot was not going to trust anything I alone said; instead, it draws on what others who chat with it have said. This is why at times Cleverbot can appear quite lucid and come close to passing the Turing test before again succumbing to gibberish. When you get on some line of discussion where people are likely to respond in the same way, it will seem like you are talking to a human, and in a way you are. A recent Radiolab episode discussed how, although AI has come a long way in some respects, it still has a long way to go. However, humans have such a strong desire to connect with other sentient beings that we will gladly bond with robot therapists and pets, even when they are fairly simplistic.
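Cleverbot’s actual algorithm is proprietary, but the crowd-sourced idea described above can be illustrated with a toy sketch (everything here, including the class name and the fallback reply, is made up for illustration): the bot simply replays whatever past humans most often said after the same prompt, so it sounds lucid exactly where people tend to agree.

```python
from collections import Counter, defaultdict

class CrowdBot:
    """Toy sketch of a crowd-sourced chatbot: it replies with
    whatever past humans most often said after the same prompt."""

    def __init__(self):
        # prompt -> list of replies humans have given to it
        self.memory = defaultdict(list)

    def learn(self, prompt, human_reply):
        self.memory[prompt.lower()].append(human_reply)

    def reply(self, prompt):
        replies = self.memory.get(prompt.lower())
        if not replies:
            # no consensus yet, so nonsense comes out
            return "Meh."
        # the more people agree on an answer, the more "human" it sounds
        return Counter(replies).most_common(1)[0][0]

bot = CrowdBot()
bot.learn("What sound do bees make?", "Buzz")
bot.learn("What sound do bees make?", "Buzz")
bot.learn("What sound do bees make?", "Meh")
print(bot.reply("What sound do bees make?"))  # → Buzz
```

In this sketch, one stubborn user insisting on “meh” is outvoted as soon as more people supply “buzz,” which matches the behavior described above: the bot refuses to trust any single conversational partner.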