An AI for epistemics would be able to reason its way to truth from noise, even when all the “reliable sources” are wrong. That’s what my brain does. We need a “universal explainer” AI, and it seems like neural networks can never be that. LLMs take wrong and make it wrong squared. Since 2022 they have not improved at all in their ability to describe female anatomy correctly, or even coherently, or in their ability to resolve incoherence without making up new crazy stuff more wrong than any human ever would. ChatGPT itself says it cannot do this and that symbolic and Bayesian AI would need to be incorporated.