Startec

Governance of superintelligence

May 22, 7:00 AM

Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.

We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.

A starting point

There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.

First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.

And of course, individual companies should be held to an extremely high standard of acting responsibly.

Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.

Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.

What’s not in scope

We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).

Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feels commensurate with other Internet technologies, and society’s likely approaches seem appropriate.

By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.

Public input and potential

But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves.

Given the risks and difficulties, it’s worth considering why we are building this technology at all.

At OpenAI, we have two fundamental reasons. First, we believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces a lot of problems that we will need much more help to solve; this technology can improve our societies, and the creative ability of everyone to use these new tools is certain to surprise us. The economic growth and increase in quality of life will be astonishing.

Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on; stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.

