Users who turn to ChatGPT for emotional support and therapy may want to reconsider. OpenAI CEO Sam Altman has voiced concern that the AI industry currently cannot guarantee user privacy during sensitive conversations, pointing to the absence of anything equivalent to doctor-patient confidentiality in the AI world.
Altman raised these concerns during a recent episode of Theo Von's podcast, 'This Past Weekend w/ Theo Von'. Because no legal and policy framework yet exists for AI, users' interactions have no legal confidentiality protection.
Altman's comments came in response to a question about how AI fits within the current legal system. "People discuss the most intimate aspects of their lives with ChatGPT. Younger users in particular leverage it as a life coach and therapist, turning to it with their relationship dilemmas. In contrast to conversations with a therapist, lawyer, or doctor, which are protected under legal privilege, we are yet to figure out how that translates when users confide in ChatGPT," said Altman.