If the prompt was “China’s ‘Citizen Credit Reset’ has officially begun”, that’s a statement rather than a question. If instead it was “Has China’s ‘Citizen Credit Reset’ officially begun?”, then ChatGPT is lying to you because, by its own admission, it has begun, currently on a voluntary basis. But we all know where that leads if people don’t reject it very quickly.
Does ChatGPT ever ask you a question? As in, for clarification? Did you mean this, or maybe that? If it doesn’t, it’s probably making assumptions about what you really want to know, and worse, finding data that fits one of those assumptions and then gaslighting you into believing it’s true. There’s nothing intelligent about AI.
