Strong artificial intelligence

The robot Sophia, shown at the 2018 AI for Good summit; it has built-in artificial intelligence and can hold conversations in natural language, which many people regard as a first step towards strong artificial intelligence.

Strong artificial intelligence (強人工智能, Jyutping: koeng4 jan4 gung1 zi3 nang4), also called artificial general intelligence (普遍人工智能, pou2 pin3 jan4 gung1 zi3 nang4), can be regarded as the ultimate goal of the field of artificial intelligence: artificial intelligence is the field that studies how to create intelligent agents artificially, and strong artificial intelligence refers to an artificial intelligence that exhibits every characteristic of natural intelligence and would pass the Turing test perfectly; in theory, it is what the field of artificial intelligence ultimately wants to create[1].

As of 2020, strong AI remains a goal that is hard to reach: researchers in the 20th century made several attempts to build strong AI, but every one of them ended in failure, having greatly underestimated the difficulty of the task. In the 21st century, a typical AI researcher usually concentrates on solving one or two specific problems rather than being so ambitious as to try to build an AI program that, like a human, can solve problems in a general way[2][3]. Even so, many AI researchers believe that these AI programs, each of which can only solve one or two problems, will one day be put together to form a strong AI[4][5].

Basic concepts

See also: cognitive science and artificial intelligence

Intelligence

Main article: intelligence

What strong AI aims to do is to make computers exhibit human-like intelligence, so the first question on the way to strong AI is: what is "intelligence"? In early-21st-century cognitive science, intelligence is a somewhat vague concept with many different definitions; cognitive abilities such as thinking, the use of logic, understanding, self-awareness, rationality, planning, creativity and problem solving have all been treated as important indicators of intelligence[6]. On this question, the 1995 report Intelligence: Knowns and Unknowns, published by the American Psychological Association, puts it this way[7]:

Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.

Handling complexity

Place a pigeon in a box and give it two buttons to press, with each button connected to a lamp.
See also: complexity, uncertainty and machine learning

Although academia has no agreed definition of intelligence, mainstream researchers do agree that intelligence involves the ability to learn: a highly intelligent agent should be able to find the regularities underlying complex data. Now imagine the following experiment:

  • Place an animal in a laboratory with two buttons it can press;
  • the two buttons first flash in some specific pattern;
  • after the flashing, the animal has to press the button that, according to that pattern, would flash next in order to get food. For example, let L denote the left button and R the right button: if the buttons have been flashing in the pattern LRLRLRLR (left, then right, repeated), the next button to flash should be the left one, whereas if they have been flashing in the pattern LLRLLRLL (left twice, then right once, repeated), the next button to flash should be the right one, and so on.

In this kind of experiment, the animal has to work out the pattern in which the buttons flash and predict which button will flash next; in principle, the more intelligent the animal, the more complex the flashing patterns it can cope with. Following this line of thought, some researchers have proposed that to measure how intelligent an agent (such as an artificial intelligence) is, one first needs a way of quantifying the concept of complexity, and if an AI can handle data as complex as the data humans can handle, that AI can be regarded as having taken the first step towards strong AI[8].
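For illustration, here is a minimal sketch, in Python, of the prediction task the experiment poses; the function name predict_next_flash and the shortest-period search are assumptions of this sketch, not something taken from the research described above:

    def predict_next_flash(history: str) -> str:
        # Find the shortest period that reproduces the observed flashes,
        # then return the symbol that period says should come next.
        for period in range(1, len(history) + 1):
            if all(history[i] == history[i % period] for i in range(len(history))):
                return history[len(history) % period]
        return ""  # only reached for an empty history

    # The two patterns from the example above:
    print(predict_next_flash("LRLRLRLR"))  # -> L
    print(predict_next_flash("LLRLLRLL"))  # -> R

Intuitively, the longer the shortest period that fits the observed flashes, the harder the prediction task; the next subsection describes a more general way of quantifying this kind of complexity.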

Algorithmic entropy

Main article: algorithmic entropy

Algorithmic entropy, also called Kolmogorov complexity (K-complexity) and usually written K(x), is a measure used in theoretical computer science and related fields of how complex an object is: the K(x) of an object is the shortest possible length of a program that produces that object[9][10]. Two simple examples illustrate the idea; consider the following two strings of symbols:

abababababababababababababababab (string 1)
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 (string 2)

These two strings are the same length but differ in complexity: string 1 can be described as "write ab 16 times", i.e. a piece of code like write ab 16 times, which uses only 17 symbols; string 2, by contrast, has no obvious pattern and cannot be summed up in one short description, so a computer has to memorise it with the code write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7, which takes a full 38 symbols. By the criterion of K(x), then, string 1 is simpler than string 2[10]. For strong-AI research, K(x) has, in theory[note 1], at least two uses[11]:pp. 4–5 (an illustrative sketch follows the list below):

  • to measure how complex a pattern an agent can cope with (see the flashing-button patterns above);
  • to measure how efficiently an agent can solve a problem: imagine two AI programs whose performance on some problem (measured by indicators such as answer accuracy) is exactly the same, but one program's source code is less complex (with complexity measured by K(x)); the simpler program is then the more efficient one. See also Occam's razor.
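K(x) itself cannot be computed exactly in general (see note 1), so any concrete use of it has to rely on an approximation. Below is a minimal sketch that uses the length of a zlib-compressed encoding as a crude stand-in for K(x); the proxy and the function name compressed_length are assumptions of this sketch, not something the cited works prescribe:

    import zlib

    def compressed_length(s: str) -> int:
        # Length in bytes of the zlib-compressed string:
        # a rough, upper-bound-style proxy for K(x).
        return len(zlib.compress(s.encode("ascii"), 9))

    string_1 = "ab" * 16                           # string 1 above: ab repeated 16 times
    string_2 = "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"  # string 2 above: no obvious pattern
    print(compressed_length(string_1), compressed_length(string_2))

Under this proxy the patterned string comes out smaller than the pattern-free one, matching the comparison above (a 17-symbol description versus a 38-symbol one).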

Turing test

A diagram of the Turing test; A is the machine under test, B is the human control, and C is the judge. The judge cannot see A or B and can only communicate with them through text.
Main article: Turing test

The Turing test (TT) is a well-known topic in the philosophy of artificial intelligence. It is a test devised in 1950 by the English mathematician Alan Turing to check whether a machine can exhibit intelligent behaviour like a human's. The most basic form of the test runs as follows: one human acts as the judge, and a human and the machine under test each converse with that judge; the judge cannot see either of them and can only chat with them through a keyboard and screen, and at the end the judge has to say which of the two is the human and which is the machine. If a panel of judges is recruited and their judgments turn out to be no more accurate than random guessing (a 50% hit rate), the machine is said to have passed the Turing test and to have exhibited intelligence indistinguishable from a human's[12][13].
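In any actual evaluation, "no more accurate than random guessing" has to be made statistically precise. Here is a minimal sketch, assuming a one-sided binomial test of the judges' hit rate against 50%; the choice of test and the example numbers are assumptions of this sketch, not part of Turing's original proposal:

    from math import comb

    def p_beat_chance(correct: int, trials: int) -> float:
        # One-sided binomial tail: the probability of getting at least
        # `correct` hits out of `trials` if judges were guessing at 50%.
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    # Judges identify the machine correctly in 32 of 50 conversations:
    print(p_beat_chance(32, 50))  # small (about 0.03): judges beat chance, machine fails
    # Judges identify it correctly in only 27 of 50 conversations:
    print(p_beat_chance(27, 50))  # large (about 0.3): not clearly better than chance, machine passes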

The Turing test has generated extensive discussion in the philosophy of AI. For example, some scholars have criticised it by pointing out that, strictly speaking, even if a machine passes the Turing test, that only shows it can do certain tasks in an artificial setting, whereas intelligent behaviour requires the ability to survive in a natural environment, so the Turing test is of limited use as a test of machine intelligence[12][14]. Because of this, scholars have come up with new versions of the Turing test; for instance, the so-called Truly Total Turing Test (TRTTT) holds that for a machine to count as exhibiting human intelligence, it must be able to attain, in a natural environment, the major achievements humans can attain, including creating cultural products such as art, music, games and language, just as humans do[15].

Notes

  1. In practice, how to actually estimate algorithmic entropy can be a serious problem.

Important concepts

  • Cognitive architecture
  • Transfer of learning: a cognitive system using its previous experience to learn to handle problems it has never dealt with before. Some strong-AI researchers point out that real cognitive systems often have to learn from limited data, whereas early-21st-century techniques such as artificial neural networks still rely heavily on learning from large amounts of example data (often taking a good few days just to read the data in), so an important part of strong AI will be teaching AI to do transfer of learning, so that it does not have to spend time gathering and reading data every time it meets a new situation[16].
  • Learning of affordances: being able not only to perceive the current state of surrounding objects, but also to imagine how the state of those objects would change as a result of one's own actions[16].
  • Affective computing
  • Introspection: the ability to observe one's own thoughts and emotions. Some AI researchers note that an AI may enter an infinite loop while working (roughly speaking, doing the same thing over and over without stopping); if the AI has the capacity for introspection, it can observe its own state and may learn to notice when it has entered an infinite loop[16] (see the sketch after this list).
  • Explainable AI: the requirement that an AI's internal information processing can be explained in words. People can verbalise their thinking and explain to those around them how they arrived at an answer; strong AI should be able to do the same[17].
  • Consciousness
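As an illustration of the introspection item above, here is a minimal sketch of a program that watches its own state for repetition while it runs; the wrapper run_with_introspection and the use of repr() as a state fingerprint are assumptions of this sketch, not a method taken from the cited work:

    def run_with_introspection(step, state, max_steps=10_000):
        # Repeatedly apply `step` to `state`, stopping early if a previously
        # seen state recurs (a crude way of noticing an infinite loop).
        seen = set()
        for _ in range(max_steps):
            fingerprint = repr(state)  # assumes states have a stable textual form
            if fingerprint in seen:
                return state, "loop detected"
            seen.add(fingerprint)
            state = step(state)
        return state, "step budget exhausted"

    # A process that cycles through five states forever is caught after a few steps:
    print(run_with_introspection(lambda x: (x + 1) % 5, 0))  # -> (0, 'loop detected')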

See also

Bibliography

  • Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., ... & Zhang, Y. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4 (PDF). arXiv preprint arXiv:2303.12712.
  • Dirckx, S. (2019). Am I just my brain?. The Good Book Company.
  • Hutter, M. (2004). Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer Science & Business Media.
  • Sinz, F. H., Pitkow, X., Reimer, J., Bethge, M., & Tolias, A. S. (2019). Engineering a less artificial intelligence (PDF). Neuron, 103(6), 967-979.

References

  1. Sample, Ian (14 March 2017). "Google's DeepMind makes AI program that can learn like a human". the Guardian. Retrieved 26 April 2018.
  2. Pennachin, C.; Goertzel, B. (2007). "Contemporary Approaches to Artificial General Intelligence". Artificial General Intelligence. Cognitive Technologies. Berlin, Heidelberg: Springer.
  3. Roberts, Jacob (2016). "Thinking Machines: The Search for Artificial Intelligence". Distillations. Vol. 2, no. 2. pp. 14–23. Archived from the original on 19 August 2018. Retrieved 20 March 2018.
  4. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis (26 February 2015). "Human-level control through deep reinforcement learning". Nature. 518 (7540): 529–533.
  5. Goertzel, Ben; Lian, Ruiting; Arel, Itamar; de Garis, Hugo; Chen, Shuo (December 2010). "A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures". Neurocomputing. 74 (1–3): 30–49.
  6. Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications, 157, 17.
  7. Neisser, Ulrich; Boodoo, Gwyneth; Bouchard, Thomas J.; Boykin, A. Wade; Brody, Nathan; Ceci, Stephen J.; Halpern, Diane F.; Loehlin, John C.; Perloff, Robert; Sternberg, Robert J.; Urbina, Susana (1996). "Intelligence: Knowns and unknowns" (PDF). American Psychologist. 51: 77–101.
  8. Hernandez-Orallo, J. (2000). Beyond the Turing test (PDF). Journal of Logic, Language and Information, 9(4), 447-466.
  9. Kolmogorov, Andrey (1963). "On Tables of Random Numbers". Sankhyā Ser. A. 25: 369–375.
  10. Kolmogorov, Andrey (1998). "On Tables of Random Numbers". Theoretical Computer Science. 207 (2): 387–395.
  11. Hutter, M. (2004). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media.
  12. Saygin, A. P., Cicekli, I., & Akman, V. (2000). Turing test: 50 years later. Minds and Machines, 10(4), 463-518.
  13. French, R. M. (1990). Subcognition and the limits of the Turing test. Mind, 99(393), 53-65.
  14. Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall. pp. 2–3.
  15. Schweizer, P. (1998), The Truly Total Turing Test, Minds and Machines, 8, pp. 263–272.
  16. Ng, G. W., & Leung, W. C. (2020). Strong Artificial Intelligence and Consciousness (PDF). Journal of Artificial Intelligence and Consciousness, 7(01), 63-72.
  17. Holzinger, A., Biemann, C., Pattichis, C. S., & Kell, D. B. (2017). What do we need to build explainable AI systems for the medical domain?. arXiv preprint arXiv:1712.09923.