Cover image by @waifu@waifuism.life
Free Speech on the Internet
Because the development of the Internet and of social networking sites has been shaped primarily by American technology companies, before discussing free speech in the Fediverse it is worth understanding the background of the First Amendment to the US Constitution and Section 230 of the Communications Decency Act.
The First Amendment was enacted in a context of binary opposition between government and citizens: its purpose is to restrain government power in order to protect the liberties of citizens, including freedom of speech. Today, however, the channels through which most people speak are platforms controlled by Internet giants such as Facebook, Google, and Twitter. While the First Amendment forbids the government from censoring private websites, it also protects private companies when they moderate user-generated content or take particular political positions, because doing so is part of a private company's own freedom of speech. The First Amendment does not apply to the relationship between private companies and individual users.
In a nutshell, Section 230 of the Communications Decency Act means that an online service provider is not liable for what its users post, and that a provider acting in good faith bears no civil liability for removing or moderating third-party content. During the 2020 US presidential election, Trump's controversial statements were disliked by the liberal media, and after the subsequent storming of the US Capitol in January 2021 he was banned from Facebook and then Twitter, and speech about the event was moderated. Trump therefore argued that Section 230, by letting the Internet giants censor speech selectively, should be repealed; the Democrats, who believe Section 230 lets the Internet giants evade regulatory responsibility, also want it repealed. Facebook, Twitter, and similar companies oppose repeal: without the shelter of Section 230, social networking companies would have to strictly forbid any controversial topic in users' posts to avoid liability, and that would be harmful to the development of free speech. Whether Section 230 is eventually repealed or amended will have a major impact on the online ecosystem, so users of the Fediverse, not only of the mainstream social networks, should keep an eye on how this develops.
The State of the Fediverse
To register on an instance in the Fediverse, you must agree to that instance's Terms of Service. These terms generally require that posts not violate the law of the place where the server, the admin, or the NIC is located, and they list other content unwanted on that instance. Such content falls roughly into three kinds: content harmful to most people, such as disinformation, advertising, and harassment; content that is culturally inappropriate or potentially disturbing, such as gore, nudity and pornography, or loli/shota artwork, each permitted to different degrees; and content reflecting particular values, such as gender equality or free speech. In my observation, instances leaning toward progressive values tend to block more users and instances and to have more forbidden words, with fewer occasions for users to quarrel with the admin; instances leaning toward free speech tend to block fewer users and instances, and disputes arise more often between users, between users and admins, and between admins, including disputes about free speech itself.
Moderation means curbing excessive behavior or political opinion. On most small and medium instances the admin does all of the management work; large instances also have dedicated moderators, the equivalent of board moderators in the BBS era, with the power to moderate speech: deleting posts, muting users, or even suspending them, to keep order on the site. When these actions are applied to local users they take effect on remote instances as well, except that some instances will not accept delete requests; when they are applied to remote users, the effect is limited to the local instance.
Ordinary users have the following tools at their disposal:
- Muting a thread: notifications stop but the thread stays viewable; commonly used when you are tagged in a busy thread, such as a hellthread tagging many people
- Muting a user
- Blocking a user
- Reporting a user
- On Mastodon, additionally blocking all content from a user's instance
A report is sent to the administrators of the reported user's instance, and possibly to the local administrators as well. Because policies differ between instances, a report may be handled and answered, or simply ignored. Still, for spam, harassment, and clearly illegal behavior, you should report the post first and give the remote administrators a chance to deal with it rather than jumping straight to a block; after all, nobody can watch their own instance around the clock.
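As a concrete illustration, these user-level tools map onto Mastodon's documented v1 REST API. The helpers below (a Python sketch) only build `(method, path, payload)` tuples so the tool-to-endpoint mapping is visible; actually sending the requests would require an authenticated HTTP client with an OAuth token.

```python
# Sketch: how the user-level moderation tools map onto Mastodon's v1 API.
# These helpers only *build* the requests; no network calls are made.

def mute_conversation(status_id):
    """Mute a thread: no more notifications, but it stays viewable."""
    return ("POST", f"/api/v1/statuses/{status_id}/mute", {})

def mute_account(account_id):
    return ("POST", f"/api/v1/accounts/{account_id}/mute", {})

def block_account(account_id):
    return ("POST", f"/api/v1/accounts/{account_id}/block", {})

def report_account(account_id, status_ids, comment="", forward=True):
    """forward=True also sends the report to the remote instance's
    admins, not only the local ones."""
    return ("POST", "/api/v1/reports",
            {"account_id": account_id, "status_ids": status_ids,
             "comment": comment, "forward": forward})

def block_domain(domain):
    """Hide everything coming from one instance (Mastodon only)."""
    return ("POST", "/api/v1/domain_blocks", {"domain": domain})
```

Note the `forward` parameter on reports: it is what lets the remote admins handle the problem first, as suggested above.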
An admin or moderator can act on remote users in the following ways:
- Silence: the user's posts no longer appear on the local timelines, only on the home feeds of their followers
- Suspend: the user's posts no longer appear on this instance, and no one here can interact with them
- Deleting posts, local effect only
- Reporting to the user's instance
An admin or moderator can also act at the instance level, for example hiding all media from an instance, muting it, or even blocking it. Mastodon offers the following instance-level operations:
- Media removal: media files from these servers are neither processed nor stored, and no thumbnails are shown; you must click through to the original file manually
- Silenced servers: posts from these servers are hidden from public timelines and conversations, and interactions from their users generate no notifications unless you follow them
- Suspended servers: no data from these servers is processed, stored, or exchanged, and no interaction or communication with their users is possible
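These three modes correspond roughly to parameters of Mastodon's admin API (`POST /api/v1/admin/domain_blocks`, present in recent Mastodon releases; check your server's documentation). A minimal sketch of the payload mapping, assuming that API:

```python
# Sketch of the payload for Mastodon's admin domain-block endpoint.
# Only builds the payload; no network calls are made.

def domain_block_payload(domain, mode):
    if mode == "media_removal":   # keep federating, but drop media files
        return {"domain": domain, "severity": "noop", "reject_media": True}
    if mode == "silence":         # hide from public timelines
        return {"domain": domain, "severity": "silence"}
    if mode == "suspend":         # stop processing, storing, exchanging data
        return {"domain": domain, "severity": "suspend"}
    raise ValueError(f"unknown mode: {mode}")
```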
Pleroma's instance-level operations include at least the following:
- Reject: this instance will not receive messages from those instances
- Removal from the "Known Network" timeline: posts from those instances are removed from the known-network timeline
- Media force-set as sensitive: media in posts from those instances is forcibly marked as sensitive
Misskey has only one instance-level operation: blocking the instance.
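Mechanically, all of these per-domain rules work the same way: each incoming activity is checked against a domain-to-rule table and then dropped, rewritten, or passed through. Pleroma implements this in Elixir as MRF policies; the toy Python model below (not Pleroma's actual code, and using simplified `user@domain` handles instead of actor URLs) just shows the decision logic.

```python
# Toy model (not Pleroma's actual code) of a SimplePolicy-style MRF:
# each incoming activity is matched against a per-domain rule table.

POLICY = {                       # hypothetical per-domain rules
    "spam.example": "reject",
    "nsfw.example": "media_nsfw",
}

def apply_policy(activity):
    """Return the (possibly rewritten) activity, or None if rejected."""
    domain = activity["actor"].split("@")[-1]
    rule = POLICY.get(domain)
    if rule == "reject":
        return None                    # drop the message entirely
    if rule == "media_nsfw":
        rewritten = dict(activity)
        rewritten["sensitive"] = True  # force media behind a content warning
        return rewritten
    return activity                    # no rule: pass through unchanged
```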
If a remote user's posts, and the policies of the instance they are on, clearly conflict with my values, then presumably most of that instance's users are the same, and blocking the whole instance seems to save a lot of unnecessary trouble. But is that really so? Hassan et al. (2021) surveyed 1,298 Pleroma instances, 110,000 users, 2.45 million posts, and 46 distinct policies, and found that instance-level rejects caught 86.2% of users and 88.5% of posts as collateral damage; moreover, an instance's size is only weakly correlated with how often other instances reject it. Suppose, for example, that a user on some instance publishes a misogynistic post: it is upsetting, and writing a rebuttal is exhausting, but more than 80% of the other users and posts on that instance may well be "harmless," and blocking the whole instance cuts off any chance of contact with those harmless users. From an individual user's perspective, using the block as a signal for filtering information sources is fair enough; at the instance-management level, though, I prefer to leave the decision to the users and at most block individual accounts. For the same reason I don't think FediBlock is a good idea: why should I use someone else's list, compiled on who-knows-what principles, to decide for my instance's users which instances' content they must not see? Admittedly, as an instance grows, sticking to this principle becomes ever harder.
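The collateral-damage effect is easy to see in a toy model (all numbers below are hypothetical, not the paper's dataset): blocking a domain over one offender also removes every other account on it.

```python
# Toy model of the collateral damage Hassan et al. (2021) quantify:
# a domain-level reject removes every account on an instance, not just
# the offender. All numbers here are hypothetical.

def collateral(instances, blocked):
    """Fraction of blocked accounts that were never flagged themselves."""
    total = flagged = 0
    for domain, accounts in instances.items():
        if domain in blocked:
            total += len(accounts)
            flagged += sum(1 for a in accounts if a["flagged"])
    return (total - flagged) / total if total else 0.0

fediverse = {
    # one misogynistic poster among ten accounts
    "edgy.example": [{"flagged": True}] + [{"flagged": False}] * 9,
    "nice.example": [{"flagged": False}] * 5,
}
print(collateral(fediverse, {"edgy.example"}))  # 0.9
```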
Revolver
For an admin, maintaining an instance already costs money and time; moderation is an extra burden on top, and "people" problems are often the thorniest. One project answers the nuisance of moderation by doing away with moderation: if being an admin is a chore, simply design the system so that no admin is needed. As 大老師 put it, "rather than trying to solve the problem, make the problem stop being a problem."
Revolver's goals are:
- No admin required by design
- Cannot be taken down by a domain registrar
- Not at the mercy of centralized authorities such as DNS and LetsEncrypt
- Built on the IPFS protocol
- Uses the LitePub protocol, for compatibility with other Fediverse applications
- Lightweight enough to run on a phone
- Spam-resistant
- End-to-end encrypted direct messages
The developer of Revolver, who is also the admin of freespeechextremist.com, wants to solve the problem of censorship: P2P decentralization in the service of free speech. Disinformation and incitement are outside the scope of the project, and should be. Everyone should be responsible for their own speech, and everyone should build up their own media literacy rather than rely on a moderator or anyone else to judge on their behalf.
【References】
The Jungle Rules of Free Speech (Part 1)
Should social media be responsible for users' speech? Examining Section 230 of the US Communications Decency Act and the controversy over its repeal
Hassan, A. I., Raman, A., Castro, I., Zia, H. B., De Cristofaro, E., Sastry, N., & Tyson, G. (2021). Exploring content moderation in the decentralised web: The Pleroma case. Proceedings of the 17th International Conference on emerging Networking EXperiments and Technologies (ACM CoNEXT 2021). https://arxiv.org/abs/2110.13500
Server rules of g0v.social (login required to view the list)
Server rules of mastodon.social
Instance rules of shitposter.club
Comments
May 22, 2022 19:14
@pch_xyz @waifu CC @p @PCH_XYZ
EN // TRANSLATION (Computer translated and Corrected by Nekobit):
Free speech online…
Since the development of the Internet and social networking sites has largely been driven by US technology companies, it is necessary to understand the background of the First Amendment to the US Constitution and Section 230 of the Communications Decency Act before discussing freedom of speech in the Fediverse (ED note: Federated Universe is the original translation).
When the First Amendment of the US Constitution was enacted, the situation was a binary opposition between the government and the citizens. The purpose was to limit the power of the government to protect citizens’ freedoms, including freedom of speech. However, the channel through which most people speak today is the platform controlled by Internet giants such as Facebook, Google and Twitter. Although the First Amendment prohibits the government from censoring private websites, it also allows private companies to censor user-generated content and take certain political positions, as this is freedom of speech for private companies. The First Amendment does not apply to relationships between private companies and individual users.
The role of Section 230 of the Communications Decency Act, in a nutshell: ISPs are not responsible for speech posted by users of their platforms, and as long as ISPs act in good faith, they bear no civil liability for removing or censoring third-party content. During the 2020 U.S. presidential election, Trump’s controversial remarks were not favored by the liberal media. After the subsequent attack on the U.S. Capitol in 2021, Trump was successively banned from Facebook and Twitter, and speech about the event was censored as well. Trump therefore argued that Section 230 of the Communications Decency Act, which allows Internet giants to selectively censor speech, should be repealed; Democrats, meanwhile, believe that Section 230 lets Internet giants evade regulatory responsibility, and they also want to repeal it. Facebook, Twitter and other companies oppose the repeal, because without the umbrella of Section 230, social networking companies would strictly prohibit any controversial content in users’ posts so as not to invite trouble, which would be detrimental to the development of free speech. Whether Section 230 is repealed or amended will have a significant impact on the development of the Internet ecosystem, so not only users of mainstream social networks but also users of the Fediverse should pay attention to how this matter develops.
State of the Fediverse
In Fediverse, to register to join an instance, you must agree to the terms of service (Terms of Service) of the instance. The terms of service generally include that the content of the post must not violate the laws of the location (host/webmaster/NIC) and other unwanted content. The content that appears in this instance can be classified as harmful to most people, such as false information, advertisements and harassment; culturally inappropriate and may cause discomfort, such as bloody violence, nudity, and the level of permission for Lolita pictures; reflections of certain values, such as gender equality, freedom of speech, etc. According to my observation, instances that tend to favor progressive values tend to block more users and instances, have more banned speech, and have less chance of bickering between users and admins; instances that favor free speech tend to block fewer users. There will often be disputes between users and admins including disputes over freedom of speech itself.
Moderation refers to curbing excessive behavior or political opinions. Most small and medium-sized instances are managed entirely by the admin. Large instances also have full-time moderators, equivalent to board owners in the BBS era, with the power to moderate speech, such as deleting statuses, muting or even freezing users (ED note: Shadowbanning is possible in Misskey). If these actions are applied to local users, they also take effect on remote instances, except that some instances will not accept the deletion activity; if the target is a user of a remote instance, the operation only takes effect locally.
The means available to ordinary users include
Reports can optionally be sent to the other instance administrator, and possibly to the local administrator as well. Due to policy differences in instances, user reports may be processed and responded to or ignored. If it is spam, harassment, or clear violations, you should first report the post to give the other party’s administrator a chance to deal with it, not jump straight to the block; it is impossible for someone to monitor their own instance 24 hours a day.
The means that admin / moderator can operate on remote users, including:
The admin / moderator can perform instance-level operations, such as not displaying all media from the instance, or muting or even blocking the instance. Mastodon’s instance-level operations are as follows:
Pleroma’s instance-level operations are at least the following:
Misskey has only one instance-level operation: blocking an instance.
If a remote user’s posts and their instance’s policies are clearly not in line with my values, then most of the users on that instance should be too, and blocking the entire instance seems to save a lot of unnecessary hassle. But is it really so? In a study by Hassan et al. (2021), surveying 1298 Pleroma instances, 110,000 users, 2.45 million posts, and 46 different policies, they found that rejecting an instance made 86.2 % of users and 88.5% of posts are rejected (ED: “affected by pond fish” lol); and there is a weak correlation between instance size and rejection by other instances. For example, if a user in one instance posts a misogynistic post that is uncomfortable and tiring to read, there may be more than 80% of other users in this instance who consider the post “harmful”, blocking this instance directly cuts off the chance to contact the “harmful user”. Of course, from the user’s point of view, it is understandable to use it as a signal to filter the source of the information, but at the instance management level, I tend to leave it to the user to decide. So, I also don’t think FediBlock is a good idea, why should I use a list made by someone else based on the principle of not knowing what to decide for my instance users which content they should not see on the instance? However, as instances get larger, it becomes increasingly difficult to continue to follow this principle.
Revolver
For the admin, not only does it cost money and time to maintain the instance, but moderation is also an additional problem, and the “people” problem is often more difficult. There is a project whose solution to the problem of moderation is not to use moderation; since being an admin is a chore, then there is no need to have an admin at all. As the proverb often goes, “Instead of trying to solve the problem, it is better to make the problem not a problem”.
Author: waifu
The goal of Revolver is:
The developer of the Revolver project is also the webmaster of freespeechextremist.com. What he wants to solve is the problem of censorship: decentralization with P2P, and the promotion of freedom of speech. The issue of false information and incitement is not within the scope of this project’s consideration, nor should it be. Everyone should be responsible for their own speech, and everyone should build up their own information literacy, instead of relying on a moderator or someone else to judge on their behalf.
Reference (Translated links offered by Google)
:dogwalk: and im done.
May 22, 2022 19:15
@pch_xyz @PCH_XYZ @p @waifu cc @pch_xyz
May 22, 2022 19:24
@neko @p @PCH_XYZ @pch_xyz Thank you both i wanted to read it and i appreciate that my art appears there and I'm credited :hapyday:
May 23, 2022 02:26
@neko @pch_xyz @waifu Oh, this was pretty cool.
May 23, 2022 05:26
@neko@rdrama.cc @pch_xyz@plume.seediqbale.xyz
86.2% of users and 88.5% of posts are innocently rejected (an idiom is used for this meaning).
"harmful user" here should be "harmless users".
I was saying the mods / admins could also report to the remote instance. I have never reported so I might be missing something.
I used g0v.social as the example because it is the most known instance in Taiwan but you are correct, thank you for the supplement and I'll modify the original post.
Let me take the liberty to split some hairs :KannaPeek:
The title is 【Moderation in the Fediverse】 btw
May 23, 2022 14:31
@pch_xyz @pch_xyz I corrected your source because mastodon.social and rage.love show their blocklists
May 23, 2022 15:20
@neko@rdrama.cc @pch_xyz@plume.seediqbale.xyz yes, they are good examples. The blocklist of rage.love is reeeeally long 😆
May 23, 2022 15:22
@pch_xyz @pch_xyz We used to mock rage.love, they should just use a whitelist lmao