AI companionship makes me think about the prospect of AI therapists. While a person's first instinct might be to say that AI would never make a good therapist, research suggests otherwise. Current academic literature states that people might prefer AI over human therapists because the fear of being judged or negatively appraised is removed from the situation. Great article, and thank you for the courses. I do want to check them out and learn about using Copilot.
The first chatbot to use NLP, named Eliza, was a mock Rogerian psychotherapist! It was created in the 60s at MIT.
https://web.njit.edu/~ronkowit/eliza.html
Yes, I have read about it! The example is certainly fitting here.
Thanks for your thoughtful comments, and I completely agree. Ultimately we need to support more evidence-based and honest conversations about the risks and benefits of using AI in certain areas.
This must also include a more adult acceptance that humans are not perfect at everything, and an openness to considering where AI can provide benefits that humans cannot.
My hope is that we move beyond emotional responses to change and to AI, based on either blind hope or blind fear and hatred, towards more rational and considered responses.
Slowly, in time, I believe little conversations like this are an important step towards a better future.
This is another article I wrote taking an evidence-based (rather than fear- or anger-based!) approach to the potential risks of using AI in mental health and how to mitigate them, but also the benefits, which you may find of interest.
https://thefuturai.substack.com/p/can-ai-help-rescue-us-from-our-mental
Great post, excited to see your new projects, and I will check out your courses over the weekend! Funnily enough, I'm also finalising a post for today on an aspect of human relationships and their impact on our health, but with no AI included 🙈
Also a very big thank you for the mention - truly appreciate it! :)
Thanks and very happy to highlight your great work!
And great to meet you, I look forward to more conversations and exchanges with you 🙂👌
AI companionship is a huge topic and one that needs attention, because its relevance will certainly grow over time. It is important to underline that, in my opinion, consumers should not be judged; as several lucid pieces by authors here on Substack have pointed out, the focus should above all be on the producers of these apps, their intentions, and the real value, if any, that they want to bring to the market. And regardless of whether you are 'in favour' of these platforms and apps, it is important to know them better, especially for the social and cultural dynamics that will have an impact in the short term. The topic is particularly close to me because it falls within the 'relational' perspective on AI that I write about in each issue of my newsletter, and I thank you, Pranath, for highlighting it.
Thanks for your thoughtful comments on this 🙂🙏🏾
Thanks for the kind mention!
You're welcome, very happy to showcase your great thoughts and writing!