A safety report card ranks AI company efforts to protect humanity

by Sarkiya Ranen
in Technology
Are artificial intelligence companies keeping humanity safe from AI’s potential harms? Don’t bet on it, a new report card says.

As AI plays an increasingly large role in the way humans interact with technology, the potential harms are becoming clearer — people using AI-powered chatbots for counseling and then dying by suicide, or using AI for cyberattacks. There are also future risks — AI being used to make weapons or overthrow governments.

Yet there are not enough incentives for AI firms to prioritize keeping humanity safe. That's the conclusion of an AI Safety Index published Wednesday by the Future of Life Institute, a Silicon Valley-based nonprofit that aims to steer AI in a safer direction and limit existential risks to humanity.

“They are the only industry in the U.S. making powerful technology that’s completely unregulated, so that puts them in a race to the bottom against each other where they just don’t have the incentives to prioritize safety,” said Max Tegmark, the institute’s president and an MIT professor, in an interview.

The highest overall grade awarded was only a C+, given to two San Francisco AI companies: OpenAI, which produces ChatGPT, and Anthropic, known for its AI chatbot model Claude. Google’s AI division, Google DeepMind, was given a C.

Ranking even lower were Facebook’s Menlo Park-based parent company, Meta, and Elon Musk’s Palo Alto-based company, xAI, both of which were given a D. Chinese firms Z.ai and DeepSeek also earned a D. The lowest grade went to Alibaba Cloud, which received a D-.

The companies’ overall grades were based on 35 indicators in six categories, including existential safety, risk assessment and information sharing. The index collected evidence based on publicly available materials and responses from the companies through a survey. The scoring was done by eight artificial intelligence experts, a group that included academics and heads of AI-related organizations.

All the companies in the index ranked below average in the category of existential safety, which factors in internal monitoring and control interventions and existential safety strategy.

“While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,” according to the institute’s AI Safety Index report, using the acronym for artificial general intelligence.

Both Google DeepMind and OpenAI said they are invested in safety efforts.

“Safety is core to how we build and deploy AI,” OpenAI said in a statement. “We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts. We share our safety frameworks, evaluations, and research to help advance industry standards, and we continuously strengthen our protections to prepare for future capabilities.”

Google DeepMind in a statement said it takes “a rigorous, science-led approach to AI safety.”

“Our Frontier Safety Framework outlines specific protocols for identifying and mitigating severe risks from powerful frontier AI models before they manifest,” Google DeepMind said. “As our models become more advanced, we continue to innovate on safety and governance at pace with capabilities.”

The Future of Life Institute’s report said that xAI and Meta “lack any commitments on monitoring and control despite having risk-management frameworks, and have not presented evidence that they invest more than minimally in safety research.” Other companies like DeepSeek, Z.ai and Alibaba Cloud lack publicly available documents about existential safety strategy, the institute said.

Meta, Z.ai, DeepSeek, Alibaba and Anthropic did not respond to requests for comment.

“Legacy Media Lies,” xAI said in a response. An attorney representing Musk did not immediately respond to a request for additional comment.

Musk is also an advisor to the Future of Life Institute and has provided funding to the nonprofit in the past, but he was not involved in the AI Safety Index, Tegmark said.

Tegmark said he’s concerned that without enough regulation of the AI industry, AI could help terrorists make bioweapons, manipulate people more effectively than it does now, or even compromise the stability of governments in some cases.

“Yes, we have big problems and things are going in a bad direction, but I want to emphasize how easy this is to fix,” Tegmark said. “We just have to have binding safety standards for the AI companies.”

There have been government efforts to establish more oversight of AI companies, but some bills have faced pushback from tech lobbying groups, which argue that more regulation could slow down innovation and cause companies to move elsewhere.

But there has been some legislation that aims to better monitor safety standards at AI companies, including California’s SB 53, which Gov. Gavin Newsom signed in September. It requires businesses to share their safety and security protocols and to report incidents like cyberattacks to the state. Tegmark called the new law a step in the right direction, but said much more is needed.

Rob Enderle, principal analyst at the advisory services firm Enderle Group, said he thought the AI Safety Index was an interesting way to approach the underlying problem of AI not being well regulated in the U.S. But there are challenges.

“It’s not clear to me that the U.S. and the current administration is capable of having well-thought-through regulations at the moment, which means the regulations could end up doing more harm than good,” Enderle said. “It’s also not clear that anybody has figured out how to put the teeth in the regulations to assure compliance.”
