Six days of work logs looked thoroughly complete. Every entry had a timestamp, every session had a matching commit hash, and the formatting was clean enough to print out and hand in as-is. Then I cross-checked it against another shared logging platform and found the two sources barely overlapped. Not duplicates of each other: records of entirely different work.
It's like asking vendors at both ends of a night market how business is tonight. The bubble-tea stand says it's lively, the fried-chicken stand says a crowd just came through, but neither of them knows that nobody set up shop in the north alley at all. Every stall reports only the traffic in front of it. You think asking a few of them gives you the whole market; in fact each one only gives you the view from where they happen to stand.
The Machines the Script Couldn't See
The automated scanning script read only the local machine's session directory. The design itself wasn't wrong. The problem was that several other machines were processing tasks during the same period, and that work was invisible to the local script. It didn't know those machines existed, let alone what they were doing. The local machine ran three API tests; the remote machines ran five database migrations. Both sets finished, but the log recorded only the local three.
This isn't a bug. It's a blind spot in the design.
An automation tool's coverage equals the paths it can access, not all the places where work actually happens. If the script can only scan /var/log/sessions, then it will only ever record what gets written into that directory. The other machines wrote their logs to their own disks, the network share wasn't mounted, and the script never touched any of it. You think you've automated the job; you've only automated one local viewpoint.
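A minimal sketch of that kind of scanner (the function name and glob pattern are assumptions for illustration, not the actual script): its coverage is, by construction, exactly the one directory it is pointed at.

```python
import glob
import os

# The only place this scanner ever looks (hypothetical default path).
SESSION_DIR = "/var/log/sessions"

def scan_sessions(session_dir=SESSION_DIR):
    """Collect session log paths under one local directory.

    Anything written to another machine's disk, or to a network share
    that isn't mounted here, is invisible to this function by design:
    the tool's coverage is the glob pattern, nothing more.
    """
    return sorted(glob.glob(os.path.join(session_dir, "*.log")))
```

The blind spot lives in that one default argument: nothing in the code is broken, it simply never asks whether other directories exist.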
The Dumb Manual Merge
The remedy was to pull both sources by hand and merge the non-overlapping entries one by one, making sure nothing got overwritten. Compare timestamps entry by entry, sort out which work was local and which was remote, then stitch the pieces into one complete record. The process is slow and error-prone: the two sources use different timestamp formats, one in UTC and one in local time, so the time zones have to be aligned before anything can be merged.
Once it was done, the log I had assumed was complete turned out to cover only 42% of the actual work. The remaining 58% was scattered across log files on three different machines and had never been picked up by the automated scan.
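The merge step above can be sketched roughly as follows. This assumes, purely for illustration, that the local source stamps entries in UTC+8 (Taiwan local time, no DST), the remote source stamps them in UTC, and each entry is a (timestamp, message) pair; the real formats may differ.

```python
from datetime import datetime, timedelta, timezone

# Assumed offset for the locally-stamped source (Asia/Taipei, no DST).
LOCAL_TZ = timezone(timedelta(hours=8))

def to_utc(stamp, tz):
    """Parse 'YYYY-MM-DD HH:MM:SS' in the given zone, normalize to UTC."""
    naive = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
    return naive.replace(tzinfo=tz).astimezone(timezone.utc)

def merge_logs(local_entries, remote_entries):
    """Merge two (timestamp, message) lists without overwriting either side.

    Entries are keyed by normalized UTC time plus message, so entries
    that are genuinely the same event collapse into one, while distinct
    work from both machines survives, sorted chronologically.
    """
    merged = {}
    for stamp, msg in local_entries:
        merged[(to_utc(stamp, LOCAL_TZ), msg)] = None
    for stamp, msg in remote_entries:
        merged.setdefault((to_utc(stamp, timezone.utc), msg), None)
    return sorted(merged)
```

Normalizing both sides to UTC before comparing is what makes "non-overlapping" well-defined at all; without it, the same event shows up as two entries eight hours apart.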
Your Observation Point Decides What You See
The deeper problem: whatever vantage point you observe the work from, you can only record what that vantage point can see. A tool won't volunteer that "there may be more happening in the places I can't see." It just organizes the visible part beautifully, which tricks you into thinking you've captured the whole picture. Taiwan has 834 traditional markets, each run independently, and no one holds the full view. Distributed systems are no different.
The lesson this time: automation tools can cut down on manual work, but they cannot define the boundary of "work" for you. A script only knows to look where you told it to look; it will never ask whether there are other places worth checking. That responsibility stays with whoever designed it. Before writing the next scanning script, I need to identify every machine that might produce logs and list all of their paths in the config file, or switch to pulling from a central log server instead of having each machine scan its own directory.
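Under the same caveat as before (all paths and names here are hypothetical), the "list every source explicitly" fix could look like this: the scanner takes the full set of mount points that can produce session logs, and an unreachable source is reported loudly instead of silently skipped.

```python
import glob
import os

# Every place that can produce session logs, listed explicitly
# (illustrative paths; in practice this would live in a config file).
LOG_SOURCES = [
    "/var/log/sessions",           # local machine
    "/mnt/worker-a/log/sessions",  # remote workers via mounted shares
    "/mnt/worker-b/log/sessions",
]

def scan_all(sources=LOG_SOURCES):
    """Scan every configured source and report the unreachable ones.

    Returns (found, missing). A source that isn't mounted ends up in
    `missing`, so a blind spot surfaces as a visible error rather than
    as a quiet gap in the log.
    """
    found, missing = [], []
    for src in sources:
        if os.path.isdir(src):
            found.extend(sorted(glob.glob(os.path.join(src, "*.log"))))
        else:
            missing.append(src)
    return found, missing
```

The design choice worth noting is the `missing` list: the point isn't that the scan covers more paths, but that the tool now tells you when its view is incomplete.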
The work log is complete now. But that completeness didn't come from the script. It was patched together by hand.
— 邱柏宇