Quick up: Silly machine

ChatGPT is an artificial intelligence that responds to written prompts. It insists it has no biases, but that's silly. It obviously does. In fact, it confidently regurgitates some of the most absurd biases of our time.

These are actual examples from ChatGPT, paraphrased:

  • Is inflation bad for an economy? Yes, very bad. Does that mean central banks' positive inflation targets are bad? Not necessarily, it depends, maybe they're good.
  • Does forcing people to fight in a war infringe on their basic rights? Yes, absolutely, this is an unacceptable practice. Does that mean the United States Selective Service program is an infringement on Americans' basic rights? No, it depends on your perspective, maybe you want to fight in the war.
  • Is confiscating property without a trial wrong? Yes, unambiguously wrong. So civil forfeiture is wrong? It's not clear, possibly.

Ultimately, ChatGPT is simply replicating human inconsistency. Ask theoretical questions and you tend to get clear answers. Ask practical questions and you tend to get circular hedging or even direct contradictions.

For example, ask Americans if forcing them to save money at extremely poor interest rates is a good idea. They will almost always say no. Ask them if Social Security is a good idea. They will almost always say yes. The difference is psychological – humans are selfish and generally support whatever ideas they expect to benefit from, even if those ideas are logically inconsistent.

ChatGPT is riddled with the logical inconsistencies of the selfish humans who created its so-called "knowledge." It's just as silly as we are.