The overseas guest exchange lecture “The Promises and Risks of Enhancing Humans with Artificial Intelligence”, organised by the Practical Philosophy programme team of the Social Sciences Division, College of International Education, Hong Kong Baptist University, was successfully held on 23 March. Dr. Alexandre Erler, Associate Professor at the Institute of Philosophy of Mind and Cognition, National Yang Ming Chiao Tung University, Taiwan, was invited as the guest speaker, and he also brought along his own students to join the exchange. The lecture was designed mainly for Year 2 Practical Philosophy students taking the Philosophy of Technology course. Beyond broadening their academic horizons, the activity aimed to develop these graduating students’ ability to engage in real-time exchange and discussion with a philosopher of international standing, preparing them for articulation to bachelor’s degree studies.

(Photo: Practical Philosophy majors exchanging views with Dr. Erler and other participants)

After the event, some of the majors chose to write essays responding to Dr. Erler’s lecture as one of their course assignments. They are pleased to have excerpts from their essays published here as a record of the activity.

“Dr. Erler talked about the meaning of human augmentation and the current applications of AI, where using AI for intellectual augmentation is a controversial issue in ethics. […] By Dr. Erler’s definition, human augmentation is the “improvement of human physical & mental performance in a way that doesn’t necessarily involve altering someone’s general functioning” […]. Among his ethical concerns, I think human obsolescence is the most important issue as it may be a threat to humans. “Human obsolescence” means that with AI becoming more intelligent and capable of doing things that humans cannot do, humans would no longer be useful and would lose the meaning and value of life.” (by C. Paou)

“Professor Erler suggests that ChatGPT is unreliable. When he tested ChatGPT by asking it to provide some philosophy journal articles on the topic of artificial life, it provided a false reference list with papers that do not exist at all. […] [I think] ChatGPT implants the mindset of “convenient answers” and an impatient attitude in students, who might answer for the sake of answering. […] Educational values mainly focus on promoting critical thinking, creativity, and social-emotional learning, rather than simply optimizing efficiency or raising test scores. ChatGPT can create the delusion in which students believe that they have acquired the knowledge when actually they have not. […] The technology perhaps undermines students’ critical thinking and problem-solving skills, and ultimately they become “lazier” at work.” (by A. Lai)

“Dr. Erler points out that some people think that using AI means a better future for humans, but is it so? Take the promotion of centaur chess as an example: people tend to believe that a human-AI team can have superior performance that makes life better. […] It is important to show how easily people believe that technology is a neutral tool that can bring a better future for humans, even though it is still developing. […] It reminds me of the factors accounting for the restlessness of modern technology raised by the German philosopher Hans Jonas. One of the factors is the “unprecedented belief in virtual infinity” [i.e. the belief that “there is always something new and better to find.”] […] Jonas thinks that this uncritical vision people hold is a post hoc vision, imparted by “the dazzling feats of technological progress” (1979, p.213). The same applies to the case of centaur chess.” (by A. Wu)