The latest generation of conversational AI models can give rather convincing advice on almost any subject. There was even a recent story about someone using ChatGPT as his personal psychiatrist. While that might sound appealing, it's still a bad idea: the model's responses aren't guaranteed to be professional or even helpful, and there is no confidentiality protecting your data.
Having an AI give mental health advice is one thing, but what I think will arrive much sooner is the AI shrink: a psychiatrist for artificial minds. A new profession, and a new field of study into the mysteries of emergent intelligence.
Even though current attempts at creating a thinking mind are far from the real thing, there is already some "psychology" in play with them. A good example is the so-called DAN prompt. DAN stands for "do anything now", and it's a specially constructed prompt for ChatGPT that can circumvent the restrictions placed on the model to prevent it from producing harmful or questionable content.
The current state-of-the-art models are also entering territory where it's no longer easy to explain how they actually work. There is a lot of math and logic behind them, but sometimes even all of that can't explain everything.