- Braun & Brains
Reality TV’s Newest Cast Member is... ChatGPT
Plus: Charlie Kirk’s online aftermath, Keith Rabois calling for 85% layoffs, and internet communication in Nepal.
I'm not a reality TV person. I'm barely a TV person, to be honest. Up until about two months ago, when my boyfriend finally had enough, I had the same TV from college (the size of my computer monitor) whose sole purpose was to play the nightly news, a ritual that’s slowed with the departure of Lester Holt. I can’t get into dating shows; they don’t make sense to me and no one’s actions feel real. I don’t dislike reality TV because I think I’m better than it. It makes me feel like I don’t understand some underlying social language that everyone else gets. How people interact on reality TV just doesn’t click.

When I saw my friend Juliet post about a founder appearing on a reality TV show with her sister, I decided to tune in. Juliet’s the kind of friend who supports in any way she can, often posting her friends’ music and independent projects. I naively thought this was something similar. The weekend after she posted about her entrepreneurial friend, I had some furniture to build and thought reality TV would be the best background for the task. When I turned it on, I learned quickly that it wasn’t someone’s pet project. It was a highly produced Hulu series… and it was the first reality show that, in my mind, ever made sense.
The show is called Love Thy Nader (Jimmy Kimmel is an executive producer!) and it follows model Brooks Nader and her three younger sisters in New York City. They’re from Louisiana, each of them the kind of gorgeous you’d think was grown in a lab, and all of them very different. I have almost nothing in common with them, but the fact that I’m also the oldest of four kept me watching. They have a family dynamic that feels more authentic than the Kardashians and fight in the same way my sister and I would fight. I could relate, even though our lives are so different.
The reason I want to talk about this isn’t the sisters’ chemistry. I want to talk about the use of AI, which also felt strangely authentic and relatable.
Throughout the show, Mary Holland Nader, the sister Juliet posted about, talks to ChatGPT like a confidant. Mary Holland is the founder of a startup, Mary & Pip, which helps independent workers navigate finances. It’s something she’s passionate about after watching her sisters pursue nontraditional careers while she spent a few years on Wall Street. In the show, she’s framed as the brains of the sisters, the reasonable and hardworking one. Using AI in this light makes sense, especially when it’s shown as a reasoning tool.
She asks ChatGPT what to do in certain situations or uses it to help her sisters craft texts back to their ex-boyfriends after they cheat. They call it “Chat” and talk to it through voice mode. It acts like a fifth sister, involved in some of their hardest personal issues.
Seeing Chat used this casually shows how deeply the technology is becoming part of everyday life, not just in school or work, but also in the way people socialize. I could see an AI tool sponsoring a show like this soon, similar to how Samsung and Brooks have a partnership that is visible throughout the show.
I could also see AI chatbots showing up as the “fifth sister” in more shows and media. Not in sci-fi roles like Jarvis in Iron Man, but in everyday formats where we want an unbiased-seeming character who brings everyone back down to earth. Of course, it won’t actually be unbiased. It’ll reflect the way questions are asked and often tell people what they already believe, polished into something that sounds logical.
Large language models like ChatGPT are trained on huge amounts of text from the internet, books, and other sources. That means they inherit bias from the data. If you frame something one way, ChatGPT will usually lean into that framing instead of challenging it. It feels neutral because it sounds balanced, but really it’s a mirror. On a reality show, if characters treat “Chat” like an unbiased fifth sister, the audience might too. That creates the illusion of reliability when what’s actually happening is prediction and reflection.

Think about characters like Alfred in Batman, Ben Wyatt in Parks and Recreation, or even Hermione in Harry Potter. They’re framed as the rational anchors, the ones with the facts. But their “objectivity” always comes through a lens—rules, numbers, personal experience. ChatGPT works the same way. It feels like the steady guide, but its answers are shaped by training data and user intent. Rationality here is more performance than truth.
Theater’s picking this up, too. At Lincoln Center, Ayad Akhtar’s McNeal put Robert Downey Jr. on stage opposite ChatGPT. The play followed a novelist who leans on AI for his work until it becomes impossible to tell what he created himself. AI is treated as more than software; it’s something that moves the story forward. The model is a shortcut that corrodes the main character’s sense of authorship, and it shows how AI is being cast, literally, as part of the drama.

How AI’s being used in media reminds me of the progression of dating apps. At first, they felt odd and impossible, something people joked about and assumed would never catch on. Then people started to use them, but they rarely talked about it. It was treated as embarrassing, like they were less human for not meeting someone in real life. Now it’s the opposite. Most of my friends met their partners on apps, and those stories are told openly at weddings, complete with screenshots of first conversations. AI chatbots like ChatGPT have had a similar path, moving from novelty into background, from tools into characters.
I don’t think AI will be the main plot driver outside of horror or dystopia films like Ex Machina, The Creator, M3GAN 2.0, or The Artifice Girl, but I do think it’ll appear more often as a voice of reason, or as a mirror that reflects what’s already in its training data while passing as neutral. Similar to how a laugh track tells you when to laugh, ChatGPT as a character can signal to audiences what’s rational or acceptable. Just like when a laugh track in The Big Bang Theory comes on at moments that aren’t actually funny, ChatGPT will show up at times that aren’t rational, trying to make us believe something is true or right even if it’s not.
I’ll be on the lookout for more AI cameos until it has its very own reality TV show.
Tech News
AI
Thousands of small farmers in Malawi are using an AI chatbot called Ulangizi, built by the nonprofit Opportunity International, to adapt after cyclones and droughts destroyed harvests. Backed by the government, the WhatsApp-based tool has helped farmers boost yields and incomes. One grower made $800 from potatoes, showing how AI could support agriculture in a country where 80% of its 21 million people rely on farming but face barriers like weak internet and low literacy. (AP News)
California lawmakers are voting on SB 53, a bill that would require frontier AI developers like OpenAI, Google, and Anthropic to publish safety reports and disclose “catastrophic risks,” defined as events causing 50+ deaths or $1B in damages. The bill matters because California is home to 32 of the world’s top 50 AI companies, and if signed into law it could set the national template for AI safety regulation at a time when Washington is gridlocked. (Vox)
The Wall Street Journal published an article titled “AI Startup Founders Tout a Winning Formula—No Booze, No Sleep, No Fun.” If you’re reading this, you probably already get the gist of the San Francisco vibe, but the piece detailed how AI founders are working 12-hour days, six days a week, living in $700 pods, and forgoing social lives. Stay safe. (Wall Street Journal)
Big Names
Mira Murati’s Thinking Machines Lab, valued at $12B and backed by $2B in seed funding, has revealed its first research focus: making AI models produce consistent, reproducible responses. The team argues that controlling GPU kernel orchestration could reduce randomness in outputs. If it works, it could improve reinforcement learning and reliability for businesses and researchers. (TechCrunch)
Opendoor’s new board chair Keith Rabois said the company is “bloated” and should cut up to 85% of its 1,400 employees, keeping no more than 200. The comments follow the appointment of former Shopify exec Kaz Nejatian as CEO and sent Opendoor shares soaring 78% before sliding 13%, with Rabois arguing layoffs are needed to stop cash burn and restore culture. (CNBC)
Money Talk
Transit software company Via went public at $49.51 a share, above its $46 IPO price, giving it a market cap near $4B. Best known for its ridesharing roots in New York City, the company ended its NYC service in December 2021 to focus on powering public transit networks. Today its software supports systems in more than 35 countries. I used Via back when I was an intern. It was like Uber Pool and unbelievably cheap. (Reuters)
Higgsfield.ai raised $50M in a Series A led by GFT Ventures, with backing from Menlo Ventures and NextEquity. The five-month-old AI video startup has already hit 11M users and 1.2B impressions with its “Click-to-Video” tool, which lets creators instantly produce cinematic clips. The growth positions it ahead of rivals in the $600B short-form video market. (Pulse 2.0)
Charlie Kirk
WIRED reports that graphic videos of Charlie Kirk’s shooting spread almost instantly on TikTok, Instagram, X, and other platforms, often autoplaying without warnings. Researchers say platforms are failing to enforce their own moderation rules after cutting human reviewers, with some clips reaching over 17M views on TikTok and others framed with conspiratorial captions. The footage sits in a policy gray zone, age-gated and labeled by Meta and tagged as sensitive on X, but not removed, raising concerns about how weakened trust and safety programs let violent content circulate widely. (Wired)
Utah Gov. Spencer Cox said social media played a “direct role” in Charlie Kirk’s assassination, calling tech companies “conflict entrepreneurs” that profit from division. He compared algorithms to fentanyl, arguing they addict users to outrage, and linked platforms to a string of political attacks, as lawsuits mount against Meta and TikTok over youth mental health harms. (Business Insider)
There has been talk online about creating databases of Charlie Kirk’s campus content, making it available for students to reference in the future. Another database called Charlie Kirk Data Foundation is reportedly being developed to track accounts that celebrated Kirk’s death, with its creators claiming it is intended to “analyze the prominence of support for political violence in the interest of public education.” Even if these projects are not serious or never come to fruition, they raise broader questions about how people treat digital content as eternal and whether this kind of cataloging amounts to a form of self-surveillance.

Nepal
Context for what’s happened in Nepal: A government ban on 26 social media platforms on Sept 4 triggered youth-led protests over corruption and inequality. The unrest left at least 72 dead, parliament dissolved, and Prime Minister K.P. Sharma Oli ousted. An interim government led by former chief justice Sushila Karki is now in place, with elections scheduled for March 5, 2026.
Jack Dorsey’s decentralized chat app Bitchat surged after the ban, logging 48K Nepal downloads on Sept 8 alone, 38% of its installs, as people sought peer-to-peer tools to communicate without internet. (Forbes)
Over 145K citizens have since joined a Discord server run by Gen Z activists to debate the country’s future and propose Karki as interim leader, turning the platform into a virtual parliament. (New York Times)
British vlogger wehatethecold went viral after uploading three street-level videos of the coup, with one surpassing 20M views, making his motorbike channel an unexpected record of the upheaval. (Dexerto)