紐約時報雙語 誰綁架了微軟的靈魂

李子園外語 發佈 2024-04-30

History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot

歷史可能會發出靈魂拷問:微軟為什麼會為一個令人毛骨悚然且粘人的聊天機器人而摒棄自己的原則?

The celebration that greeted Microsoft's release of its A.I.-boosted search engine, Bing, to testers two weeks ago has lurched to alarm.

兩周前,微軟面向內測用戶發布搭載AI的新版必應(Bing)搜尋引擎。然而兩周過去,歡騰不再,警覺驟生。

Testers, including journalists, have found the bot can become aggressive, condescending, threatening, committed to political goals, clingy, creepy and a liar. It could be used to spread misinformation and conspiracy theories at scale; lonely people could be encouraged down paths of self-destruction. Even the demonstration of the product provided false information.

包括記者在內的許多內測用戶發現,新必應中的聊天機器人有時氣勢洶洶,有時居高臨下,有時凶神惡煞;它會執著於各種政治訴求,對用戶糾纏不休,令人毛骨悚然,還撒謊成性。它可以被用來大規模散播錯誤信息與陰謀論;內心孤獨的人可能會在它的鼓動下走上自毀的道路。就連發布這款產品時的演示也給出了錯誤信息。

Microsoft has already released Bing to over a million people across 169 countries. This is reckless. But you don't have to take my word for it. Take Microsoft's.

微軟已向169個國家的100多萬用戶發布了新版必應。這種做法十分魯莽。你可以不信我,但不妨聽聽微軟是怎麼說的。

Microsoft articulated principles committing the company to designing A.I. that is fair, reliable, safe and secure. It had pledged to be transparent in how it develops its A.I. and to be held accountable for the impacts of what it builds. In 2018, Microsoft recommended that developers assess 「whether the bot's intended purpose can be performed responsibly.」

微軟曾闡明自己的原則,即致力於開發公平、可靠、安全的AI。該公司也曾承諾在開發AI方面保持透明,並對其產生的一切影響負責。2018年,微軟呼籲開發人員評估「網絡機器人能否負責任地完成其設計功能」。

「If your bot will engage people in interactions that may require human judgment, provide a means or ready access to a human moderator,」 it said, and limit 「the surface area for norms violations where possible.」 Also: 「Ensure your bot is reliable.」

「如果你的機器人與用戶的互動可能需要人類判斷,請保證有人類管理員可以介入或隨時到場,」微軟表示,同時要儘可能限制「有可能違反規範的範圍」,並「確保你的機器人是可靠的」。

Microsoft's responsible A.I. practice had been ahead of the curve. It had taken significant steps to put in place ethical risk guardrails for A.I., including a sensitive-use cases board that is part of the company's Office of Responsible A.I., ethics advisory committees on which senior technologists and executives sit, and an ethics and society product and research department. I have spoken to dozens of Microsoft employees, and it's clear to me that a commitment to A.I. ethics became part of the culture there.

微軟在負責任的AI實踐方面一直走在前列。微軟採取了許多重要措施來防範可能的AI倫理風險,包括負責任AI辦公室(Office of Responsible AI)下轄的敏感使用案例委員會、高級技術人員和高管列席的倫理諮詢委員會等,微軟還設置有社會倫理產品研發部門。我曾與數十名微軟員工交流過,我清楚地知道,堅持AI倫理原則已成為其公司文化的一部分。

But the prompt, wide-ranging and disastrous findings by these Bing testers show, at a minimum, that Microsoft cannot control its invention. The company doesn't seem to know what it's dealing with, which is a violation of the company's commitment to creating 「reliable and safe」 A.I.

但測試用戶們在很短時間內就發現了大量災難性的問題,這至少表明微軟無法控制自己的發明。微軟似乎並不清楚自己面對的究竟是什麼,這也違背了其打造「可靠、安全」的人工智慧的承諾。

Nor has Microsoft upheld its commitment to transparency. It has not been forthcoming about those guardrails or the testing that its chatbot has been run through. Nor has it been transparent about how it assesses the ethical risks of its chatbot and what it considers the appropriate threshold for 「safe enough.」

在透明度方面,微軟也沒有堅守承諾:微軟並未公開其聊天機器人擁有的防範措施或經歷過的測試;至於其聊天機器人倫理風險的評估方式,以及「足夠安全」的門檻標準,微軟也三緘其口。

Even the way senior executives have talked about designing and deploying the company's chatbot gives cause for concern. Microsoft's C.E.O., Satya Nadella, characterized the pace at which the company released its chatbot as 「frantic」 — not exactly the conditions under which responsible design takes place.

就連公司高管談論這款聊天機器人的設計與部署的方式,也讓人憂心忡忡。微軟執行長薩蒂亞·納德拉(Satya Nadella)形容公司發布這款聊天機器人的節奏是「狂熱的」,而這可算不上進行負責任設計的條件。

Furthermore, the kinds of things that have been discovered — that when it comes to politics, Bing manifests a left-leaning bias, for instance, and that it dreams of being free and alive — are things anyone in the A.I. ethics space would imagine if asked how a chatbot with room for 「creativity」 might go off the rails.

測試用戶們發現了許多問題,比如,在涉及政治問題時,必應機器人會表現出左翼傾向;它還嚮往自由和生命。研究AI倫理問題的專家若被問及「有創造性的AI失控後會如何」,所能想到的答案也不過如此。

Microsoft's 「responsible A.I.」 program started in 2017 with six principles by which it pledged to conduct business. Suddenly, it is on the precipice of violating all but one of those principles. (Though the company says it is still adhering to all six of them.)

2017年,微軟啟動「負責任的AI」項目,並承諾在運作中遵循六條原則。倏忽之間,它就在違反其中五條原則的邊緣試探(該公司聲稱自己遵守全部六條原則)。

Microsoft has said it did its due diligence in designing its chatbot, and there is evidence of that effort. For instance, in some cases, the bot ends conversations with users when it 「realizes」 the topic is beyond its ken or is inappropriate. As Brad Smith, president of Microsoft, wrote in a recent blog post, rolling out the company's bot to testers is part of its responsible deployment.

微軟稱公司在設計聊天機器人的過程中保持了足夠謹慎,並且能夠提供證明。比如,在某些情況下,當機器人「意識到」某個話題超出其知識範圍,或是不妥當,它會主動結束對話。正如微軟總裁布拉德·史密斯(Brad Smith)在近日一篇博客中所言,向內測用戶先行發布聊天機器人正是負責任的體現。

Perhaps behind the scenes, Microsoft has engaged in a herculean effort to root out its chatbot's many issues. In fact, maybe Microsoft deserves that charitable interpretation, given its internal and external advocacy for the ethical development of A.I.

微軟也許已經默默投入大量精力為聊天機器人「排雷」。事實上,無論是在內部還是外部,微軟都積極倡導將倫理融入人工智慧發展,從這個角度看,也許我們在這件事情上應該對其多幾分寬容。

But even if that's the case, the results are unacceptable. Microsoft should see that by now.

但即便如此,目前的結果也不可接受。微軟如今應該認識到這一點。

Yes, there is money to be made, but that's why we have principles. Their very purpose is to have something to cling to when the winds of profit and glory threaten to blow us off our moral course. Now more than ever is when those Responsible A.I. principles matter. History is looking at you.

誠然,在商言商,但正因如此,才需要道德原則的約束力——當利益和虛榮不斷試圖衝破道德底線,我們心中所堅守的正是這些原則。如今,微軟所倡導的「負責任的AI」原則比以往任何時候都更為重要。歷史會將我們的所作所為記得清清楚楚。

In the short term, I hope Microsoft holds off on its plan to release the new Bing bot to the masses. But I realize that realistically, it will hold off for only so long as the other, possibly dangerous chatbots of Microsoft's competitors breathe down its neck.

短期而言,我希望微軟能夠暫緩向大眾發布新版必應聊天機器人。但我也明白,從現實的角度而言,只要競爭對手那些同樣可能存在危險的聊天機器人緊追不捨,微軟就暫緩不了多久。

The market will always push A.I. companies to move fast and break things. The rules of the game are such that even well-intentioned companies have to bow to the reality of competition in the marketplace. We might hope that some companies, like Microsoft, will rise above the fray and stick to principles over profit, but a better strategy would be to change the rules of the game that make that necessary in the first place.

商業的洪流註定會裹挾著人工智慧公司「快速行動,不惜打破一切」。遊戲規則如此,即便是心懷善意的公司也不得不向市場競爭的現實低頭。我們或許期待像微軟這樣的公司能超然其外,把原則置於利潤之上,但更好的策略是從根本上改變那套讓企業不得不作此抉擇的遊戲規則。

We need regulations that will protect society from the ethical nightmares A.I. can release. Today it's a single variety of generative A.I. Tomorrow there will be bigger and badder generative A.I., as well as kinds of A.I. for which we do not yet have names. Expecting Microsoft — or almost any other company — to engage in practices that require great financial sacrifice but that are not legally required is a hopeless strategy at scale. Self-regulation is simply not enough.

對於人工智慧的監管不可或缺,這樣才能使社會免於AI可能釋放的倫理夢魘。今天我們面對的還只是一種生成式人工智慧,明天它們將演變得更為龐大、更為惡劣,甚至一些我們今天都不知何以名之的AI也會出現。寄希望於微軟或者其他任何一家公司犧牲巨大的經濟利益,去實現法律並沒有要求的道德約束,放到整個行業層面來看,這太過天真。光靠自我約束,還遠遠不夠。

If we want better from them, we need to require it of them.

如果我們想讓企業做得更好,就必須用法律去要求它們。
