In recent years the machine learning field has taken quite a leap forward. Despite all these advances, however, we aren't much closer to real artificial intelligence. The latest iterations like ChatGPT can certainly fake it quite convincingly, though.
It's actually quite misleading to call such systems AI (or else we would need to redefine the whole term). Underneath it's all just statistics, and there is no real understanding or awareness within the machine.
Even though they might not be thinking machines, that doesn't mean they can't be useful (or harmful in the wrong hands). There is a lot of debate going on at the moment, for example, about using these models to write school assignments.
Sure, for simple things they can give a plausible-sounding answer, but anything more complex still requires curating the generated output. Still, when you know what you are looking for and understand the topic, having the machine write out the text for you can be hugely helpful. What's left is then just to edit the pieces of text into a coherent whole.
And no matter how good the output of these systems becomes, it will never replace the benefit of using writing as a tool for thinking.