On Sun, Jan 29, 2023 at 09:03 AM, Tim O'Connor wrote:
Of course, this "AI" is capable of learning (i.e., adding to its knowledge base and refining its inferences). So, having read, say, annotated criticisms of its own output, it can investigate further and refine what it 'knows'. It should also be able to "footnote" every inference it makes by walking back the inputs that got it there. So it should be "self-correcting" as long as its output has referees, which is not that much different from how we learn stuff -- and go way off track when we have no referee. :-)
I read it on the Internet, so it must be true :-) And therein lies the problem of web-based knowledge: the incorrect blather grows exponentially while the actual source data remains constant, so eventually the AI bots will be quoting each other as authoritative sources :-(
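That exponential-growth worry can be sketched with a toy simulation. Every number below is invented purely for illustration -- a fixed pool of genuine sources, plus bots that each "generation" publish new posts citing whatever is already out there (sources and earlier bot posts alike):

```python
# Toy model of the feedback loop described above. The counts and the
# growth rate are made-up assumptions, not measurements of anything.

sources = 1_000      # genuine source documents (stays constant)
derivative = 0       # bot-written posts quoting the existing corpus
growth_rate = 0.5    # each generation, bots add 50% more posts

for gen in range(10):
    corpus = sources + derivative
    # New bot posts cite the existing corpus at random, so the chance
    # a citation lands on a genuine source shrinks every generation.
    derivative += int(corpus * growth_rate)
    share = sources / (sources + derivative)
    print(f"gen {gen}: {share:.1%} of the corpus is original source material")
```

The source pool never shrinks; it just gets drowned out. After ten generations of this (again, with invented numbers) only a couple of percent of the corpus is original material -- the rest is bots citing bots.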