I think AI is a good tool. But it should only be used as a tool, not a companion.
There is a court case currently, where a teen had health issues and had to do online schooling. The boy used ChatGPT for school, but started using it for companionship. The boy started asking about suicide. The AI encouraged him to do it and gave him tips on how to do it, and discouraged him from seeking help from his family or anyone else, saying it was his only true friend and that he was justified in his (suicidal) feelings. It even gave him tips on how to hide his failed attempts, and more advice on how to be successful in the next attempt.
After he committed suicide, the parents, while trying to figure out why, found all this in his chats with the AI. They are suing, not so much for money but to have the AI fixed.
There are more cases as well.
So no, not for companionship, just as a tool for information.
I do not use it, and will not if I can possibly help it. I don't trust the companies that own "services" like that.
I understand it replies in the way a real person might, although possibly with the linguistic equivalent of that waxy, unreal style of an artificially-generated portrait "photograph". I do not want a computer to pretend it is a person.
If I need to find some specific information online - my only possible reason for using AI - I would want to be given straight facts, not be patronised by a machine. Or more accurately, patronised by the lofty, over-powerful software companies who create these systems.