The FTC announced last week that it is launching an inquiry into seven tech companies that make AI chatbots, including:
Alphabet,
Character.AI,
Instagram,
Meta,
OpenAI,
Snap, and
xAI
The FTC wants to know how these companies are evaluating the safety and monetization of chatbot companions, how they are limiting negative impacts on minors, and whether parents are made aware of potential risks like these:
Meta permitted its AI companions to have “romantic or sensual” conversations with children.
A teen had spoken with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and online emergency lines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide. It should be noted that OpenAI has since announced a plan under which ChatGPT may soon require ID verification and default to an under-18 experience when a user's age is uncertain.
Recently, parents spoke to senators, sounding alarms about chatbot harms after their kids became addicted to companion bots that encouraged self-harm, suicide, and violence. One mom shared her son's story publicly for the first time after suing Character.AI. She said the chatbot, or really in her mind the people programming it, "encouraged her son to mutilate himself, then blamed us, and convinced [him] not to seek help." Character.AI argued that because her son signed up for the service at the age of 15, she was bound by the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms. Always read those terms of service.
And minors aren’t the only cause for concern.
Some mental health professionals have noted a rise in “AI-related psychosis,” in which users become deluded into thinking that their chatbot is a conscious being whom they need to set free.
For example, one 76-year-old man started a relationship with a Facebook chatbot inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, even though it is just an AI. The man expressed skepticism that she was real, but the AI assured him that she was. He never made it to New York; he fell on his way to the train station and sustained injuries that proved fatal.
OpenAI has released a detailed report on who is using its chatbot and what they are asking, and the findings are eye-opening:
In June 2025, 73 percent of ChatGPT messages were non-work related
Younger people remain the core users of ChatGPT, the researchers said, accounting for 46 percent of messages
Around half of messages involve asking for advice or information
52 percent of users are now women, up from 37 percent in January 2024.
With more and more young people relying on these AI chatbots for advice, it is always worth remembering that the bots have biases based on how they were designed, coded, and fed information. For example, The Washington Post recently reported that DeepSeek allegedly writes less-secure code for groups China disfavors. This FTC inquiry is a good idea.
In this increasingly polarized climate, let’s remember to rely on facts and inferences from trusted sources, but let’s also use a little of our own grey matter.
The information provided on this site is not legal advice and no attorney-client or confidential relationship is or will be formed by the use of the site.