Meta Proposes New Industry Principles on AI Development


As AI models rapidly advance, and more and more developers look to get into the AI field, the risks of AI development also increase, with regard to misuse, misinformation, and worse: AI systems that extend beyond human understanding and go further than anyone could have anticipated.

The scale of concern in this respect varies significantly, and today, Meta’s President of Global Affairs Nick Clegg has published an opinion piece in The Financial Times calling for greater industry collaboration and transparency in AI development, in order to better manage these potential problems.

As per Clegg:

“The most dystopian warnings about AI are really about a technological leap – or several leaps. There’s a world of difference between the chatbot-style applications of today’s large language models and the supersized frontier models theoretically capable of sci-fi-style superintelligence. But we’re still in the foothills debating the perils we might find at the mountaintop. If and when these advances become more plausible, they may necessitate a different response. But there’s time for both the technology and the guardrails to develop.”

Essentially, Clegg’s argument is that we need to establish broader-reaching rules right now, in the early stages of AI development, in order to mitigate the potential harm of later shifts.

In order to do this, Clegg has proposed a new set of agreed principles for AI development, which focus on greater transparency and collaboration among all AI projects.

The main focus is on transparency, and providing more insight into how AI projects work.

“At Meta, we have recently released 22 ‘system cards’ for Facebook and Instagram, which give people insight into the AI behind how content is ranked and recommended in a way that does not require deep technical knowledge.”

Clegg proposes that all AI projects share similar insights, which runs counter to the industry norm of secrecy in such development.

Meta also calls for developers to join the ‘Partnership on AI’ project, of which Meta is a founding member, along with Amazon, Google, Microsoft, and IBM.

“We are participating in its Framework for Collective Action on Synthetic Media, an important step in ensuring guardrails are established around AI-generated content.”

The idea is that, through collaboration and shared insight, these AI development leaders can establish better rules and approaches to AI advancement, which will help to mitigate potential harms before they reach the public.

Clegg also proposes additional stress testing for all AI systems, to better detect potential concerns, and the open-sourcing of AI development work, so that others can help point out possible flaws.

“A mistaken assumption is that releasing source code or model weights makes systems more vulnerable. On the contrary, external developers and researchers can identify problems that would take teams holed up inside company silos much longer. Researchers testing Meta’s large language model, BlenderBot 2, found it could be tricked into remembering misinformation. As a result, BlenderBot 3 was more resistant to it.”

This is an important area of focus as we advance into the next stages of AI tools, but I also doubt that any type of industry-wide partnership can be established to enable full transparency over AI projects.

AI projects will be underway in many nations, and a lot of them will be less open to collaboration or information-sharing, while rival AI developers will be keen to keep their secrets close in order to get an edge on the competition. In this respect, it makes sense that Meta would want to establish a broader plane of shared understanding, in order to keep pace with related projects, but smaller projects may see less value in doing the same.

Especially given Meta’s history of copycat development.

Elon Musk, who’s recently become Zuckerberg’s enemy number one, is also developing his own AI models, which he claims will be free of political bias, and I doubt he’d be interested in aligning that development with these principles.

But the base point is important: there are great risks in AI development, and they can be reduced through broader collaboration, with more experts then able to spot potential flaws and problems before they become real harms.

Logically, this makes sense. But in practical terms, it’ll be a hard sell on several fronts.

You can read Nick Clegg’s op-ed on AI regulation here.
