The Push for AI Companion Guardrails

October 30, 2025

How often does an emerging tech trend capture the attention of Gen Z, Gen Alpha, and established SaaS industry leaders all at the same time? In our experience, not that often. 

AI companions have managed to achieve this exact outcome. According to a study conducted by Common Sense Media, 72% of teens have reportedly used AI companions in some form, and 40% of them say they engage with their digital character at least once per week. Popular sites such as Character.ai and Replika have made waves online, as millions of users flock to their platforms to create and engage with their own AI companions. 

The AI companion space has also seen some bigger players make moves of their own. Earlier this summer, xAI added two AI companions, Ani and Rudi, to Grok, driving hundreds of thousands of users to download the Grok mobile app. OpenAI recently caught flak from longtime ChatGPT enthusiasts after the launch of the new GPT-5 model resulted in users’ previous prompt histories and saved memories being wiped from their accounts (this was later rolled back for ChatGPT Pro subscribers). Many ChatGPT users cited the loss of their AI companions as a top concern amid the chaotic launch.

If this proves anything, it’s that AI companions aren’t merely a trend or a temporary cultural phenomenon — they are one of the fastest-emerging categories of consumer AI, and they’re here to stay. 

Why Are AI Companions in the Spotlight?

In a sea of utility-based AI tools, AI companions stand out in large part because of their distinctive user interface and the way they present the underlying LLM.

AI companion interfaces present the LLM as a mentor or digital partner, complete with physical and behavioral traits. This transforms the once-sterile AI chatbot into a multi-dimensional companion that offers a distinct and intimate user experience. This approach to AI has resulted in increased platform adoption across many companion-based products, with users conversing with their companions in various ways to build new things, plan social events, and learn about new topics. 

While AI companions have soared in popularity, their meteoric rise has not come without controversy. Several reports have surfaced of younger users developing emotional attachments to their digital characters. Some have been exposed to inappropriate materials via their companion, and others have gone as far as to exhibit concerning behavior after following their companion’s instructions. 

These stories have presented legal issues for multiple companies in the industry, and the growing mainstream media attention has subsequently sparked a cultural conversation about the potential risks of AI companion technology.

The Policy & Legal Landscape

Two key pieces of California legislation address AI companions: AB 1064, a California Assembly bill, and SB 243, a California Senate bill.

The former, also known as the Leading Ethical AI Development (LEAD) for Kids Act, is designed to safeguard youths from the dangers of AI chatbots. In essence, the bill would prohibit any AI companion chatbot provider from granting children access to its system unless it can validate that the companion is incapable of harming a child. Examples of potentially harmful behavior include encouraging a child to engage in self-harm, violence, the consumption of drugs or alcohol, or disordered eating. AB 1064 has passed the Legislature and now awaits Governor Gavin Newsom’s decision.

SB 243 requires chatbot operators to implement “critical, reasonable, and attainable safeguards around interactions with artificial intelligence (AI) chatbots and provide families with a private right to pursue legal actions against noncompliant and negligent developers,” according to the office of State Senator Steve Padilla. It is projected to be a precedent-setting piece of legislation and the first of its kind in the United States. Introduced by Padilla, the bill passed the Legislature and was signed into law by Governor Newsom on October 13th. It is set to go into effect on January 1st, 2026.

At the federal level, Congress, the Federal Trade Commission (FTC), and the White House have all made efforts to combat the risks youths face when engaging with AI technologies. Across the board, state and federal government entities are trying to implement regulations while simultaneously keeping up with the AI advancements that seemingly occur on a near-daily basis. These regulations primarily seek to provide guidance on topics such as age restrictions, user privacy settings, and accountability for harm. Calls for transparency regarding functionality and data usage are also common, as are requests for detailed consent flows. 

What Could This Mean for the AI Companion Ecosystem? 

Increased regulation and growing scrutiny of AI companions will inevitably shape how the industry evolves. Providers of AI companions will need to be intentional about designing products, on both the frontend and the backend, with a new generation of risks in mind. They will need to implement appropriate safeguards that can be monitored, while remaining nimble enough to adjust their practices as regulations take effect, enforcement begins, and industry standards develop.

This will be no small feat amid an already complex regulatory landscape spanning AI, consumer privacy, and children’s privacy. Companies that can strike a balance between innovation, utility, and compliance will be best positioned in the market, as the expectation of safety and accountability increasingly becomes part and parcel of the demand for an engaging AI companion experience.

What Do the Experts Say?

Some of the biggest voices in AI have already weighed in on these matters. 

Sam Altman, CEO of OpenAI and arguably the biggest voice in the artificial intelligence community, recently published a post discussing the risks AI poses to children. An excerpt from that post appears below:

The third principle is about protecting teens. We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection.

First, we have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.

We will apply different rules to teens using our services. For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm. We shared more today about how we’re building the age-prediction system and new parental controls to make all of this work.

We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.
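The excerpt above sketches a concrete decision flow: estimate a user’s age from behavioral signals, default to the under-18 experience whenever there is doubt, and apply stricter content rules to minors. For readers curious what that logic might look like in practice, here is a minimal, purely illustrative sketch in Python. It is not OpenAI’s implementation; every name, threshold, and category below is a hypothetical stand-in.

```python
# Illustrative sketch only: NOT OpenAI's implementation. It mirrors the
# flow described above: predict an age bracket, default to the under-18
# experience when uncertain, and enforce stricter rules for that tier.
# All names, thresholds, and categories here are hypothetical.
from dataclasses import dataclass

@dataclass
class AgePrediction:
    estimated_age: int   # output of a hypothetical age-prediction model
    confidence: float    # model confidence in [0.0, 1.0]

# Hypothetical cutoff: below this confidence, "play it safe."
ADULT_CONFIDENCE_THRESHOLD = 0.90

def resolve_experience(prediction: AgePrediction, verified_adult: bool) -> str:
    """Choose the experience tier for a session.

    Uncertainty resolves toward the restrictive tier; ID verification
    is the only path back to the adult experience.
    """
    if verified_adult:  # e.g., the user supplied ID where that is supported
        return "adult"
    if (prediction.estimated_age >= 18
            and prediction.confidence >= ADULT_CONFIDENCE_THRESHOLD):
        return "adult"
    return "under_18"  # the safe default when in doubt

# Hypothetical content categories blocked for the under-18 tier,
# even in creative-writing contexts.
BLOCKED_FOR_MINORS = {"flirtatious_roleplay", "self_harm_discussion"}

def allow_topic(tier: str, topic: str) -> bool:
    """Apply tier-specific content rules to a requested topic."""
    return not (tier == "under_18" and topic in BLOCKED_FOR_MINORS)

# Example: a low-confidence prediction lands in the restrictive tier.
tier = resolve_experience(AgePrediction(estimated_age=19, confidence=0.55),
                          verified_adult=False)
assert tier == "under_18"
assert not allow_topic(tier, "self_harm_discussion")
```

The point of the sketch is the direction of the default: when the system cannot tell who it is talking to, it errs toward the under-18 experience rather than the permissive one.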

We sat down with Kevin Buckley, an accomplished intellectual property, transactional, and FDA regulatory attorney and founder of Torrey Pines Law Group, to get his perspective on the recent developments.

Kevin predicts several iterations of these bills over the next few years. 

“Now that Governor Newsom has signed the bill into law, it’s still far from settled,” Kevin noted.

“Given how broadly it’s written, I expect significant constitutional challenges — likely on grounds of vagueness and overbreadth. Even though the moral intent is sound, I doubt this version will stand without judicial scrutiny or substantial narrowing through future amendments or implementing regulations. In the meantime, companies will probably continue to self-regulate through stricter Terms of Service, much like leading social media platforms have done.” 

Kevin detailed how important labeling will be for providers of AI companions, both in how the product is marketed and in how it is perceived by consumers. Labeling also helps address a primary concern with AI companions: transparency. Relatedly, he anticipates that this will likely result in a more complicated structure of user terms.

“The level of scrutiny that terms of service will be subject to will vary depending on the age and maturity of its intended audience, including users. And since most AI companies offer services to a broad range of users, they will likely have to resort to multi-tiered terms. All of the companies with loose TOS policies are going to get nailed.” 

Despite the recent headlines, Kevin noted that there are still substantial opportunities out there for companies interested in introducing their own AI companions. 

“This technology is helpful. I’ve worked as an executive at two AI biotech startups. If an avatar character could help with the interaction between the physician and a patient at either of those companies, I’d say the more the merrier,” said Kevin. 

“There are foreseeable risks, but we should be notifying people of foreseeable risks. I don’t think we should shut down the modality of avatars because they are very helpful outside of the limited circumstances in which users can get hurt.”

Looking Ahead: What’s Next for AI Companion Legislation?

The aforementioned bills could easily serve as templates for future legislation in other states, or even for federal policy. It’s unlikely that the AI companion market will slow down as new legislation emerges. Rather, it is far more probable that the industry as a whole will mature in response to the elevated scrutiny.

After all, regulation is the inevitable result of a rapidly expanding ecosystem. The industry is at a turning point, but significant positive change can come from this moment. Just as AI companions are here to stay, so are the watchful eyes of government agencies, regulators, and concerned parents. This forces our conversations about AI and digital companions to extend beyond technology and into society, ethics, and law. 

At Genies, we believe this kind of open conversation serves as a foundation for better safety for all users of AI companions. We invite you to monitor the regulatory landscape as it evolves, and encourage you to participate in these discussions. 

Got a question for us? Tag us on X and fire away!
