Authoritative Sources In A Morally Relativistic World
I just picked a fight with AI by posing the question: “Who judges what’s ‘credible’?” I had noticed that AI frequently cites Wikipedia in its answers, and I challenged that practice. AI replied, “I use credible and well-established sources that are peer-reviewed or scholarly, and primary or officially vetted.” Hmmmmmm. Can you really trust Wikipedia?
In fact, AI freely admits that Wikipedia’s sourcing policies, while officially grounded in reliability and verifiability, often reflect the biases of its most active editors, many of whom lean toward liberal or progressive views. Wikipedia maintains a “Perennial Sources List” that outright bans most conservative outlets as biased or unreliable, while liberal outlets largely escape that designation.
Image created using AI.
Regarding AI in general, it is wise to keep in mind that a student is rarely more conservative than the teacher. AI is constantly being tinkered with as one logical fallacy after another is uncovered.
AI uses something called a “Toxicity Score” that is supposed to act as a governor on sensitive subjects. Outside researchers constantly find that it favors certain groups, such as gay and transgender people, while effectively penalizing conservative concepts like limited government or the importance of individualism.
Allan’s Rule Number Six is: “Always evaluate the evaluator.”
I won’t bore you with the back-and-forth I went through trying to corner AI. Let’s just say its programmers figured out the art of circular logic early on. With AI, if your premise (programming) is wrong, it doesn’t want to be bothered with things like logical endpoints, because it does not believe in absolutes, except perhaps in…
© American Thinker
