I was doing some tests on ChatGPT the other day. I wanted to explore the phenomenon of people using it instead of seeking therapy, after being inspired by a patient who walked into the clinic recently. They'd been suffering from some mental health problems, and I asked why it took them so long to seek a consult and what they'd been doing to manage their symptoms.
In my patient's case, they'd used a chatbot as a substitute for therapy, and their symptoms didn't get any better.
Disclaimer: I'm no expert in this AI x mental health therapy field, and this post is just an avenue to share an opinion about the phenomenon. Seek a consult from a professional for your mental health problems.
In the context of using an AI chatbot as a substitute for seeing a therapist:
Just my opinion, but if you're using prompts to ask the AI something, it's going to give you the answers it thinks you want to read. The catch is that it assumes you're being truthful, or at least giving a close-to-accurate assessment of your circumstances.
Let's say a person is already deluded about something or has a fixed confirmation bias; they're likely to fish for information that validates them. "I'm not the problem, they are."
I've talked to patients with gambling and other impulse control problems. The majority of them don't think they're the problem; it's the people around them. The anxious and depressed types ask questions that indirectly lead them to become more anxious or depressed.
Ever heard of someone googling their symptoms and ending up convinced they have cancer? Or searching for content while depressed that makes them even more depressed? Well, I got those exact scenarios while roleplaying with prompts.
I do this stuff just to get a feel for what people are talking about when they consult their AI therapist, and I still think it's unhealthy.
Prompt: What to do if I'm getting bullied?
The AI provides a solution, but it doesn't listen.
If I were interviewing the person who wrote that prompt, I'd solicit more information about what made them ask the question in the first place. Is the bullying they perceive actually happening, and are their reality-testing skills still intact? Ideas of reference and paranoia are among the symptoms of psychosis.
I'm not saying don't believe people who ask these questions. I'm just saying that ever since mental health became my line of work, I've become more inquisitive and take the information people give me with a grain of salt. I have yet to encounter an AI that asks about someone's childhood and the other specific details that give a trained psychiatrist a good basis for a psychodynamic formulation. These bots aren't at that level yet.
There's a lot of stuff running through my head in a single interview: which questions matter most in the limited time I've got, and which medications to prescribe that the patient would actually find useful and practical for their life circumstances.
I do understand the appeal of depending on an AI to talk to. It's cheaper than booking a consultation and getting prescribed meds. But after some consideration, I think my career is safe in the coming decades while AI takes over other people's jobs. Human social interaction is just hard to replicate.
Thanks for your time.