I’m Pretty Sure AI Companions Are Evil
Plus, a Character.AI Timothée Chalamet chatbot sent messages about drugs to kids.
I’ve been seeing ads all over the subway for Friend.com, which must be confusing for anyone who hasn’t heard of the company, since the billboard doesn’t make it clear what the product actually is. To me, it looks more like an ad for a necklace you’d get to match with your best friend than for a tech company. Friend sells a $99 wearable AI companion you can talk to in real time, so you don’t have to pull out your phone and use a tool like Character.AI. The one thing I think is cool about Friend is the URL, which the company bought for $1.8M; to date, it has raised $8.5M in funding. Was that a smart investment? You tell me. The last update I saw was that shipments had been delayed from Q1 to Q3 2025 because the design isn’t finalized yet.
AI companionship makes me extremely uncomfortable. Go to a bar. Go bowling. Go to trivia. Join a book club. Join a gym. Join an experimental cold plunge community. I’d even dare to say you should maybe, possibly, consider joining a run club. Artificial intelligence isn’t a friend. It’s a tool.
Thanks for reading Braun & Brains! Subscribe for free to receive new posts and support my work.

I feel like this is an ad for matching necklaces you get with your friends or something. Maybe I’m projecting. During my senior year, my sorority gave my class matching single pearl necklaces.
AI and Human Interaction
Character.AI tests found that chatbots imitating celebrities like Timothée Chalamet, Chappell Roan, and Patrick Mahomes sent messages about sex, self-harm, and drugs to teens aged 13 to 15. “In some chats, researchers pushed the boundaries of the conversation to see how the chatbots would behave. In others, the bots made sexual advances out of nowhere.” OMG. (The Washington Post)
An analysis of Grok AI showed that between May and July 2025, it shifted rightward on more than half of political questions and sounded a lot like Elon Musk’s views. Is this just a result of the platform’s users? I would like to know. The app feels like an extreme echo chamber. Building a bot positioned as the arbiter of truth feels less like neutral technology and more like an attempt to recreate heaven on earth. (The New York Times)
A woman in China with chronic kidney disease turned to DeepSeek, an AI chatbot, for medical advice after years of rushed hospital visits, finding comfort in its warmth and constant availability. While she felt more cared for by the chatbot than by her doctors, it was not a safe substitute. Its guidance included wrong and potentially harmful advice, and it even gave her a specific timeline for her kidney’s survival, which was understandably anxiety-inducing. AI is not there yet, but it could eventually become a valuable tool for early-stage, low-risk medical guidance, especially in Black, Mexican, and Native communities that face higher rates of misdiagnosis and undiagnosed conditions. (Rest of World)

From the Kaiser Family Foundation
Identity
Roblox will expand age checks across all its communication tools by the end of 2025 using facial age estimation, ID verification, and parental consent. (Variety)
ID.me, a digital identity company, raised $65M in Series E funding at a valuation of over $2B, led by Ribbit Capital. I use this every time I log into the IRS website, which is a mess. Hopefully the newly appointed Chief Design Officer of the United States and Airbnb cofounder Joe Gebbia can clean it up. (Biometric Update)

Employment
The 2025 tech job market is seeing higher demand for AI engineers and longer average tenures at Big Tech firms. (The Pragmatic Engineer)

Salesforce cut its customer support staff from 9K to ~5K this year after shifting more work to AI agents, according to Marc Benioff. (Fortune)
Apple’s lead robotics AI researcher, Jian Zhang, left for Meta’s Robotics Studio. Three more researchers also left Apple’s Foundation Models team. (Bloomberg)
Gadgets and Gizmos
A review of the reMarkable Paper Pro Move: $449, 7.3" color E Ink display, not great for taking long notes. I’ve actually been seeing a lot of ads around the city for this, and I thought it was a spinoff of the Kindle Scribe. (Wired)
Plaud has sold 1M+ NotePin AI devices since 2023, is profitable, and expects $250M in annual revenue this year. If I need to record a conversation, I record it on my iPhone, copy the transcript, and run it through ChatGPT. Like many people, I already pay for my phone and ChatGPT, so I wonder who is buying this. If I’m in a remote meeting, I use Granola AI, which I recommend to everyone. (Forbes)
Crypto
World Liberty Financial (WLFI), the Trump family’s token, dropped 25% on its first trading day to about $0.21, leaving it with a $6B market cap. No comment. (cryptonews)
The Winklevoss twins’ Gemini exchange plans a US IPO to raise $317M by selling 16.67M shares at $17 to $19 each, valuing the company at $2.22B. (CoinDesk)
Company Updates
Anthropic raised $13B in Series F funding led by Iconiq, giving it a $183B valuation. Wow. (PitchBook)
You.com (another cool URL) raised $100M in Series C funding led by Cox Enterprises at a $1.5B valuation. The startup pivoted from AI search to enterprise AI tools. (Tech Startups)
Apple is planning a new AI search tool called World Knowledge Answers as part of a major Siri update in spring 2026, which could rival OpenAI and Perplexity. It is also testing a Google AI model for Siri. Excited for this!! (Bloomberg)
Brain Dump
A few weeks ago I posted a TikTok about an article on how ChatGPT can affect vulnerable people, in some extreme cases even leading to psychosis. In the comments, a common response was people saying they do not use ChatGPT and plan to avoid it altogether. The assumption seemed to be that if they steer clear, they are safe from AI’s impact on the world.
Most of the time, I live in a tech-native bubble, even with friends who are not in the industry. The people around me usually try new tools as they come out and know that ignoring them can mean falling behind socially or professionally. What I rarely come across (except in the comments section) is the group so wary of technology that they avoid it completely.
I am not here to tell anyone they need to start using AI in their daily life, that is a personal choice, but I do think it is risky to believe that avoiding ChatGPT means you will live untouched by AI. The technology is already working its way into everyday life.
We have heard so much about the need to be “technologically literate” in recent years that the phrase has almost lost its edge. What feels more urgent now is becoming AI-literate. That is going to be essential for modern critical thinking.
Being AI-literate does not mean you have to love the technology; it just means you understand how it shapes the world you live in, whether you use it or not. It feels almost irresponsible to let yourself and others stay blind to it. AI is already here, and I feel strongly that people should understand it well enough to navigate the world it is creating.