AI Safety Engineer Salary.
Across 30 U.S. cities.
$185,000
national median salary
$140,000 to $240,000. Last updated April 2026.
Highest Paying
$260,000
San Francisco, CA
Best Purchasing Power
$193,000
San Francisco, CA
Lowest Paying
$143,000
Charleston, WV
Salary data sourced from SEC filings, H-1B Labor Condition Applications (DOL), Bureau of Labor Statistics Occupational Employment and Wage Statistics, and aggregated job postings across 50+ platforms. Ranges reflect 25th to 75th percentile for full-time positions. Cost-of-living adjustments use Bureau of Economic Analysis Regional Price Parities (2025 index).
The median AI Safety Engineer salary in the United States is $185,000 in 2026, with the middle range spanning $140,000 at the 25th percentile to $240,000 at the 75th. San Francisco both pays the most at $260,000 and offers the best purchasing power after cost-of-living adjustments, at $193,000.
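As a rough sketch of the cost-of-living adjustment behind the purchasing-power figure, assuming the standard BEA convention that Regional Price Parities (RPP) are indexed to a national average of 100. The RPP value below is illustrative only, not the actual 2025 index for San Francisco:

```python
def adjusted_salary(nominal: float, rpp: float) -> float:
    """Convert a nominal salary to national-average purchasing power.

    rpp: Regional Price Parity, where 100.0 = national average price level.
    """
    return nominal * 100 / rpp

# Illustrative: a $260,000 San Francisco salary at a hypothetical RPP
# near 135 lands close to the ~$193,000 adjusted figure cited above.
print(round(adjusted_salary(260_000, 134.7)))
```

A city can top both lists this way only when its pay premium outpaces its price premium, which is what the adjusted figure is meant to capture.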
AI Safety Engineer salary by city
What you should know
Red teaming experience, alignment research knowledge, and ability to build safety evaluation frameworks are the primary salary factors. Engineers with published work on adversarial robustness, jailbreak prevention, or constitutional AI methods command premiums. This role sits at the intersection of ML engineering and policy, and people who span both worlds are rare.
Junior AI safety engineers start at $125,000 to $155,000 and reach mid-level pay of $165,000 to $215,000 within two to three years. Senior AI safety engineers earn $215,000 to $280,000. Head of Safety or Director of Trust and Safety roles at AI companies can exceed $400,000 in total compensation.
Equity at AI-safety-focused labs can add $40,000 to $180,000+ annually, reflecting the strategic importance of the role. Bonuses of 15 to 25% are standard. Distinctive benefits include ethics review board participation, support for publishing safety research, and funded collaborations with academic AI safety groups.
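The total-compensation math above can be sketched as follows. The inputs are example values chosen from within the stated ranges; actual offers vary, and the helper name is ours, not from any cited source:

```python
def total_comp(base: float, bonus_pct: float, annual_equity: float) -> float:
    """Total annual compensation: base salary, cash bonus, and equity value.

    bonus_pct: cash bonus as a fraction of base (e.g. 0.25 for 25%).
    annual_equity: annualized equity grant value in dollars.
    """
    return base * (1 + bonus_pct) + annual_equity

# Illustrative senior-level offer: $230,000 base, 25% bonus, $90,000/yr equity
print(total_comp(230_000, 0.25, 90_000))  # → 377500.0
```

At the top of the senior band, the same math shows how a Head of Safety package clears $400,000 once equity is included.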