Coding

I Work in Hollywood. Everyone Who Used to Make TV Is Now Training AI

The TV industry's creative talent is rapidly being repurposed for AI training, as former writers, directors, and producers apply their storytelling expertise to fine-tuning and evaluating large language models. The shift is driven by growing demand for high-quality AI-generated scripts, dialogue, and narratives in the entertainment industry. Industry insiders estimate that up to 30% of former TV professionals now work in AI training roles.

Former TV writers, directors, and producers are increasingly working as AI trainers, annotating data and evaluating AI-generated content for platforms like Mercor, Outlier, Task-ify, Turing, Handshake, and Micro1. This shift follows the 2023 Hollywood strike and the industry’s failure to regain momentum, pushing creatives toward AI gig work to survive financially. One Hollywood showrunner, who previously created dramas for Paramount, Hulu, and the BBC, began AI training in September 2025 after a producer defaulted on a six-figure payment. After applying to 10 jobs and completing unpaid assessments, they were hired as a generalist data annotator at $52 an hour, later moving into expert roles paying up to $70–$150 an hour.

What it does

AI training tasks include assessing chatbot tone, annotating video timestamps (e.g., dog barks, balloon pops), generating extreme content for red teaming, and evaluating AI-generated scripts. Workers follow strict scoring guidelines, often copying verbatim from rubrics to avoid penalties. Projects are managed via Slack, Airtable, and Zoom, with team leaders—typically recent graduates—overseeing large pools of contractors. Workers are classified as independent contractors, despite rigid expectations like 24-hour task completion and constant availability.

Work conditions and pay

The work is marked by instability, abrupt project cancellations, and inconsistent pay. Projects advertised as multi-week engagements often end without notice. In early 2025, expert roles paid up to $150 an hour; by early 2026, rates had dropped to $50 for experts and as low as $16 for entry-level annotators—below California’s minimum wage. Workers report being rehired on nearly identical projects at lower rates, such as Mercor’s Project Musen to Nova transition, which cut pay from $21 to $16 per hour.

Platforms advertise flexibility, but workers describe being on call at all hours, with team leaders messaging at 3 a.m. to push urgency. Performance is tracked via scores (1–5), with low scorers threatened with removal. A "golden batch" of high-priority tasks is reserved for top performers, fueling competition. Despite promises, promotions to reviewer roles do not increase pay.

Tradeoffs

While AI training offers a potential income stream for displaced creatives, it lacks job security, fair pay, and humane working conditions. Contractors face burnout, with some filing lawsuits over misclassification. The system favors speed and compliance over creativity, undermining the very skills it claims to value. Workers report emotional strain, disrupted family life, and a sense of exploitation.

When to use it

For unemployed media professionals, AI training may provide short-term income, but it is not a sustainable career path. The volatility, low pay, and psychological toll make it a last resort rather than a viable alternative to traditional creative work.

The industry relies on the labor of experienced professionals while treating them as disposable. As one worker noted, the goal is to make the machine more human by making humans more like machines.

Similar Articles


Coding 1 min

Visual Studio Code 1.120

Visual Studio Code’s 1.120 update slashes debugging friction with native Data Breakpoints, letting engineers pause execution when specific object properties change—not just memory addresses. The release also bakes in GitHub Copilot-powered inline code completions for Python, JavaScript, and TypeScript, cutting keystrokes by up to 40% in early benchmarks, while a revamped terminal shell integration finally bridges the gap between local and remote workflows.

Coding 1 min

Software engineering may no longer be a lifetime career

The rise of AI-powered code generators threatens to disrupt the traditional career trajectory of software engineers. Automated tools capable of producing high-quality, production-ready code are beginning to erode the need for human expertise in routine programming tasks, potentially rendering the notion of a lifetime career in software engineering obsolete. This shift is driven by advances in large language models and their integration into development workflows.

Coding 1 min

Mythos Finds a Curl Vulnerability

A previously unknown vulnerability in the libcurl library, a widely used C library for transferring data over various protocols, has been discovered by security researchers, potentially allowing malicious actors to execute arbitrary code on vulnerable systems. The flaw, which affects curl versions 7.84.0 and earlier, resides in the library's handling of HTTP/2 protocol headers and can be triggered by a specially crafted HTTP/2 request.
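Because the advisory describes the affected range as "7.84.0 and earlier," the first practical step is comparing an installed version against that boundary. A minimal sketch of that comparison (the version strings below are illustrative examples, not real scan output):

```python
# Affected range per the advisory summary: curl 7.84.0 and earlier.
AFFECTED_MAX = (7, 84, 0)

def parse_version(s):
    # "7.84.0" -> (7, 84, 0); pad missing components with zeros
    parts = [int(p) for p in s.split(".")]
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def is_affected(version_string):
    # Tuple comparison is element-wise, so (7, 85, 0) > (7, 84, 0)
    return parse_version(version_string) <= AFFECTED_MAX

print(is_affected("7.84.0"))  # True: at the boundary
print(is_affected("7.85.0"))  # False: outside the affected range
```

Comparing version components as integer tuples avoids the classic string-comparison bug where "7.9" sorts after "7.84".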

Coding 1 min

7 lines of code, 3 minutes: Implement a programming language (2010)

A 7-line code snippet, written in about 3 minutes, can serve as the foundation for a custom programming language, thanks to a minimalist approach that pairs a simple lexer with a recursive descent parser to translate source code into an executable form. This streamlined implementation eschews traditional compiler design in favor of a lightweight, iterative model that prioritizes ease of use over performance. The result is a remarkably concise yet functional language framework.
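As a rough illustration of the lexer-plus-recursive-descent approach (not the article's original 7 lines), a tiny interpreter for integer arithmetic might look like:

```python
import re

# Hypothetical miniature language: integer arithmetic with + - * / and
# parentheses. A regex lexer splits the source into tokens; a recursive
# descent parser evaluates as it parses (no bytecode step, for brevity).

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

def parse_expr(toks):      # expr := term (("+" | "-") term)*
    val = parse_term(toks)
    while toks and toks[0] in "+-":
        op = toks.pop(0)
        rhs = parse_term(toks)
        val = val + rhs if op == "+" else val - rhs
    return val

def parse_term(toks):      # term := atom (("*" | "/") atom)*
    val = parse_atom(toks)
    while toks and toks[0] in "*/":
        op = toks.pop(0)
        rhs = parse_atom(toks)
        val = val * rhs if op == "*" else val // rhs
    return val

def parse_atom(toks):      # atom := NUMBER | "(" expr ")"
    tok = toks.pop(0)
    if tok == "(":
        val = parse_expr(toks)
        toks.pop(0)        # consume ")"
        return val
    return int(tok)

def run(src):
    return parse_expr(tokenize(src))

print(run("2+3*4"))    # 14
print(run("(2+3)*4"))  # 20
```

Each grammar rule maps to one function, which is why recursive descent stays small enough to sketch in minutes: operator precedence falls out of which function calls which.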

Coding 2 min

Show HN: adamsreview – better multi-agent PR reviews for Claude Code

I built adamsreview, a Claude Code plugin that runs deeper, multi-stage PR reviews using parallel sub-agents, validation passes, persistent JSON state, and optional ensemble review via Codex CLI and PR bot comments. On my own PRs, it has been catching dramatically more real bugs than Claude’s built-in /review, /ultrareview, CodeRabbit, Greptile, and Codex’s built-in review, while producing fewer false positives. adamsreview is six Claude Code slash commands packaged as a plugin: review, codex-review, add, promote, walkthrough, and fix. I modeled it after the built-in /review command and extended it meaningfully. You can clear context between review stages because state is stored in JSON artifacts on disk, with built-in scripts for keeping it updated. The walkthrough command uses Claude’s AskUserQuestion feature to walk you through uncertain findings or items needing human review one by one.
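The persistent-JSON-state design described above can be sketched as follows; the file name and schema here are hypothetical illustrations, not adamsreview's actual format:

```python
import json
import os

# Hypothetical sketch: review findings persisted as a JSON artifact on disk,
# so conversational context can be cleared between review stages without
# losing state. Schema and file name are illustrative only.

DEFAULT_STATE = {"stage": "initial", "findings": []}

def load_state(path):
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"stage": "initial", "findings": []}

def save_state(state, path):
    with open(path, "w") as f:
        json.dump(state, f, indent=2)

def add_finding(state, file, line, note, severity="warning"):
    state["findings"].append(
        {"file": file, "line": line, "note": note, "severity": severity}
    )
    return state
```

Keeping the state in a plain on-disk artifact rather than in the model's context is what makes multi-stage review cheap: each stage reloads only the findings it needs.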

Coding 2 min

Make America AI Ready: Strengths, Weaknesses, and Recommendations

America’s AI lead is slipping—not from lack of models, but from a brittle compute supply chain and a 40% shortfall in H100-class GPUs by 2027, per federal projections. While the CHIPS Act funnels $52B into domestic fabs, the report warns that TSMC’s Arizona plant won’t hit 3 nm until 2028, leaving cloud providers dependent on Taiwan for next-gen training runs. The fix: a national AI reserve of 500,000 GPUs and a federally chartered “compute passport” to prioritize critical workloads.