    Chain Tech Daily
    Crypto

    OpenAI launches paid Safety Fellowship

By James Wilson · April 8, 2026



    The AI news out of OpenAI this week has a sharp edge: the company launched a paid Safety Fellowship offering $3,850 weekly stipends to external researchers studying what could go wrong with advanced AI — announced within hours of a New Yorker investigation reporting that OpenAI had dissolved its internal safety teams and quietly removed the word “safely” from its IRS mission statement.

    Summary

    • The OpenAI Safety Fellowship, announced April 6, runs from September 14, 2026 through February 5, 2027; fellows receive a $3,850 weekly stipend, approximately $15,000 in monthly compute resources, and mentorship from OpenAI researchers, but will not have access to the company’s internal systems
    • Priority research areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving methods, agentic oversight, and high-severity misuse — applications close May 3, with fellows notified by July 25
    • The New Yorker’s Ronan Farrow reported the same week that OpenAI had dissolved its superalignment team, its AGI Readiness team, and its Mission Alignment team since 2024, and that an OpenAI representative responded to a journalist asking about existential safety researchers with: “What do you mean by existential safety? That’s not, like, a thing.”

    OpenAI announced the fellowship on April 6 as “a pilot program to support independent safety and alignment research and develop the next generation of talent.” The program pays $3,850 per week, over $200,000 annualized, plus roughly $15,000 in monthly compute and mentorship from OpenAI researchers. Fellows work from Constellation’s Berkeley workspace or remotely, and applications close May 3. The fellowship is not limited to AI specialists — OpenAI is recruiting from cybersecurity, social science, and human-computer interaction alongside computer science.

    The timing is the story. Ronan Farrow’s investigation in The New Yorker, published the same day, documented that OpenAI had dissolved three consecutive internal safety organizations over 22 months. The superalignment team was shut down in May 2024 after co-leads Ilya Sutskever and Jan Leike departed. Leike wrote on his way out that “safety culture and processes have taken a backseat to shiny products.” The AGI Readiness team followed in October 2024. The Mission Alignment team was disbanded in February 2026 after just 16 months. The New Yorker also reported that when a journalist asked to speak with OpenAI’s existential safety researchers, a company representative replied: “What do you mean by existential safety? That’s not, like, a thing.”

    The fellowship explicitly does not replace internal infrastructure. Fellows receive API credits and compute resources but no system access, positioning the program as arm’s-length research funding rather than a rebuild of the dissolved teams.

    What the Fellowship Requires Fellows to Produce

    The research agenda spans seven priority areas: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. By the program’s end in February 2027, each fellow must produce a substantive output — a paper, benchmark, or dataset. Specific academic credentials are not required; OpenAI stated it prioritizes research ability, technical judgment, and execution capacity.

    Why This Matters Beyond the AI Industry

As crypto.news has reported, confidence in frontier AI companies’ stated safety commitments is a market signal that affects capital allocation across AI infrastructure, AI tokens, and the DePIN and AI agent protocols sitting at the intersection of crypto and artificial intelligence. OpenAI’s spending trajectory and the credibility of its operational priorities are tracked closely by investors evaluating the AI infrastructure sector, which increasingly overlaps with blockchain-based systems. Whether external fellows working without internal access can meaningfully influence model development is a question the first cohort’s research will begin to answer in early 2027.


