Author:
Journal: AI & SOCIETY
Publication date: 2026.03.24
Article:
When privacy yields to solidarity: national identity and the legitimacy of government AI public health surveillance in Taiwan
"During the pandemic, governments' use of technology to help track outbreaks became a familiar experience for many people. But if monitoring extends beyond anonymized health data analysis to movement trajectories, social media activity, and even facial recognition, are citizens still willing to accept it?
A recent paper by Professor 陳端容 in AI & Society focuses on exactly this issue at the intersection of public health and technology governance: when a government uses AI for surveillance in the name of epidemic prevention or public safety, why do people accept it, and even willingly give up part of their personal privacy?
The findings show that the more citizens believe AI is sufficiently capable, the more they tend to support government AI surveillance, and this support is further strengthened through reduced privacy concerns. In more intrusive surveillance scenarios, such as facial recognition and behavioral tracking, Taiwanese national identity further amplifies this supportive tendency. The study reminds us that the legitimacy of technology governance is not only a matter of technical efficacy; it also involves social trust, political identity, and how a democratic society draws the boundary between privacy and the public interest."
Abstract
While artificial intelligence (AI)-enabled surveillance provides governments with potent tools for crisis response, public acceptance across democracies remains highly uneven—a variation driven more by sociocultural factors than by technical efficacy. This study investigates how citizens in democratic societies leverage national identity to justify the normalization of government monitoring. Focusing on Taiwan as a strategic case where identity-based polarization and security threats intersect, we explore how the social construction of “national protection” reshapes the boundaries of privacy. Utilizing a near-nationally representative adult sample (N = 2861, August 2024) and a moderated mediation analysis (5000 bootstrap resamples), we tested whether national identity moderates the relationships among perceived AI capability, privacy concerns, and acceptance of government AI public health surveillance. Our findings reveal that perceived AI capability increases acceptance both directly and indirectly by mitigating privacy risks; however, this effect is amplified by Taiwanese national identification. Crucially, this moderating effect is exclusive to high-stakes applications, such as facial recognition and behavioral tracking, and does not apply to routine health data analysis. Taiwan’s experience demonstrates a critical sociotechnical tension: while a strong national identity can foster social solidarity, it may simultaneously erode the institutional safeguards and privacy expectations that are vital to democratic AI governance.
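The moderated mediation test described in the abstract can be illustrated with a percentile-bootstrap "index of moderated mediation" (a Hayes-style analysis). The sketch below uses synthetic data with invented effect sizes, not the study's actual data; all variable names, coefficients, and the 1,000-resample count (the paper used 5,000) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Purely synthetic stand-ins for the paper's constructs:
# X = perceived AI capability, W = national identity (moderator),
# M = privacy concerns (mediator), Y = acceptance of AI surveillance.
X = rng.normal(size=n)
W = rng.normal(size=n)
# Capability lowers privacy concerns, more strongly at high identity (a1 + a3*W).
M = -0.4 * X - 0.2 * X * W + rng.normal(size=n)
# Lower privacy concerns raise acceptance (b < 0), plus a direct effect of X.
Y = 0.3 * X - 0.5 * M + rng.normal(size=n)

def index_of_moderated_mediation(X, W, M, Y):
    """a-path: M ~ X + W + X*W; b-path: Y ~ M + X; index = a3 * b."""
    Za = np.column_stack([np.ones_like(X), X, W, X * W])
    a = np.linalg.lstsq(Za, M, rcond=None)[0]
    Zb = np.column_stack([np.ones_like(Y), M, X])
    b = np.linalg.lstsq(Zb, Y, rcond=None)[0]
    return a[3] * b[1]  # a3 (moderation of the a-path) times b (M -> Y)

# Percentile-bootstrap confidence interval for the index.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(index_of_moderated_mediation(X[idx], W[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"index of moderated mediation, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap interval for the index excludes zero, the indirect effect of capability on acceptance through privacy concerns differs across levels of the moderator, which is the logic behind the paper's conditional-effect claim.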