
Opinion: California’s AI safety bill is under fire. Making it law is the best way to improve it

by Sarkiya Ranen, in Technology


On Aug. 29, the California Legislature passed Senate Bill 1047 — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — and sent it to Gov. Gavin Newsom for signature. Newsom’s choice, due by Sept. 30, is binary: Kill it or make it law.

Acknowledging the possible harm that could come from advanced AI, SB 1047 requires technology developers to integrate safeguards as they develop and deploy what the bill calls “covered models.” The California attorney general can enforce these requirements by pursuing civil actions against parties that aren’t taking “reasonable care” that 1) their models won’t cause catastrophic harms, or 2) their models can be shut down in case of emergency.

Many prominent AI companies oppose the bill either individually or through trade associations. Their objections include concerns that the definition of covered models is too inflexible to account for technological progress, that it’s unreasonable to hold them responsible for harmful applications that others develop, and that the bill overall will stifle innovation and hamstring small startup companies without the resources to devote to compliance.

These objections are not frivolous; they merit consideration and very likely some further amendment to the bill. But the governor should sign it regardless, because a veto would signal that no regulation of AI is acceptable now, and probably none until or unless catastrophic harm occurs. That is not the right position for governments to take on this technology.

The bill’s author, Sen. Scott Wiener (D-San Francisco), engaged with the AI industry on a number of iterations of the bill before its final legislative passage. At least one major AI firm — Anthropic — asked for specific and significant changes to the text, many of which were incorporated in the final bill. Since the Legislature passed it, the CEO of Anthropic has said that its “benefits likely outweigh its costs … [although] some aspects of the bill [still] seem concerning or ambiguous.” Public evidence to date suggests that most other AI companies chose simply to oppose the bill on principle, rather than engage with specific efforts to modify it.

What should we make of such opposition, especially since the leaders of some of these companies have publicly expressed concerns about the potential dangers of advanced AI? In 2023, the CEOs of OpenAI and Google’s DeepMind, for example, signed an open letter that compared AI’s risks to those of pandemics and nuclear war.

A reasonable conclusion is that they, unlike Anthropic, oppose mandatory regulation of any kind. They want to reserve for themselves the right to decide when the risks of an activity, a research effort or a deployed model outweigh its benefits. More importantly, they want those who develop applications based on their covered models to be fully responsible for risk mitigation. Recent court cases have suggested that parents who put guns in the hands of their children bear some legal responsibility for the outcome. Why should the AI companies be treated any differently?

The AI companies want the public to give them a free hand despite an obvious conflict of interest — profit-making companies should not be trusted to make decisions that might impede their profit-making prospects.

We’ve been here before. In November 2023, the board of OpenAI fired its CEO because it determined that, under his direction, the company was heading down a dangerous technological path. Within several days, various stakeholders in OpenAI were able to reverse that decision, reinstating him and pushing out the board members who had advocated for his firing. Ironically, OpenAI had been specifically structured to allow the board to act as it did — despite the company’s profit-making potential, the board was supposed to ensure that the public interest came first.

If SB 1047 is vetoed, anti-regulation forces will proclaim a victory that demonstrates the wisdom of their position, and they will have little incentive to work on alternative legislation. Having no significant regulation works to their advantage, and they will build on a veto to sustain that status quo.

Alternatively, the governor could make SB 1047 law, adding an open invitation to its opponents to help correct its specific defects. With what they see as an imperfect law in place, the bill’s opponents would have considerable incentive to work — and to work in good faith — to fix it. But the basic approach would be that industry, not the government, puts forward its view of what constitutes appropriate reasonable care about the safety properties of its advanced models. Government’s role would be to make sure that industry does what industry itself says it should be doing.

The consequences of killing SB 1047 and preserving the status quo are substantial: Companies could advance their technologies without restraint. The consequences of accepting an imperfect bill would be a meaningful step toward a better regulatory environment for all concerned. It would be the beginning rather than the end of the AI regulatory game. This first move sets the tone for what’s to come and establishes the legitimacy of AI regulation. The governor should sign SB 1047.

Herbert Lin is senior research scholar at the Center for International Security and Cooperation at Stanford University, and a fellow at the Hoover Institution. He is the author of “Cyber Threats and Nuclear Weapons.”



