
Daily Focus: Pause Giant AI Experiments: An Open Letter

2023-03-29 22:29:26

A few days ago, an open letter appeared on the futureoflife website calling for a pause on giant AI experiments, and anyone can add their signature. So far, many prominent figures, including Elon Musk and Turing Award winners, have already signed.

AI really is developing too fast. If these systems can keep learning without regard to cost and without constraints, growth will be exponential. Given that we are nowhere near fully prepared, whether to keep pushing ahead is a genuine question.



If development is to continue, there should at least be a reasonably complete and prudent regulatory framework in place first. As everyone knows, new things tend to grow wildly when they first appear (P2P lending and cryptocurrency, for example), and the endings are often not good.

I have seen claims online that the OpenAI CEO also signed, but I could not find his name on the signatory list.

Of course, people's motives are hard to read. We are just ordinary folks, and these industry leaders think very differently from us. Looked at from another angle, could this be a move in commercial competition?

The full text follows; the original URL is at the end of this post for anyone who wants to read it at the source.

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.


AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.


Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.


Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.


Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.


Signatories (partial list)

Yoshua Bengio, Founder and Scientific Director at Mila, Turing Prize winner and professor at University of Montreal


Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: a Modern Approach”


Elon Musk, CEO of SpaceX, Tesla & Twitter


Steve Wozniak, Co-founder, Apple


...

Original letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
