Artificial Intelligence
A Forrester Perspective: Five Key Insights for AI Security
Trend Micro's Chief Revenue Officer for the Americas, David Roth, and our guest, Forrester VP and Principal Analyst Jeff Pollard, recently sat down to separate AI hype from fact and discuss how to keep AI safe in the workplace. This article captures the highlights of their conversation.
Depending on who you ask, generative AI (genAI) is either humanity's savior or its undoing. In cybersecurity especially, it is hailed as a silver bullet even as it opens up an unprecedented new source of threats.
In a recent webinar, David Roth and guest Jeff Pollard cut through the mountain of AI hype to get at the truth. Roth is Trend Micro's Chief Revenue Officer for the Americas; Pollard is VP and Principal Analyst at Forrester.
They both noted that even though genAI is new to security, the security field is no stranger to AI and machine learning. They also explored areas where genAI is genuinely useful to today's cybersecurity teams. The highlights of their conversation are summarized below.
"I wouldn't say we've hit peak AI hype, but we have to be close, or we're all in for a rough time in this industry, because there are a lot of promises and expectations floating around that simply can't be met today."
Jeff Pollard
#1 – Watch out for the AI hype
The excitement generative AI has stirred up in cybersecurity is not just fascination with something new. Security teams are desperate for relief: overstressed, under-resourced, worn down by years of talent shortages, and facing threats that keep multiplying and evolving.
So understandably, when genAI arrived, many began fantasizing about fully autonomous security operations centers (SOCs) dispatching Terminator-style agents to hunt down malware.
Today's genAI systems, however, are not yet robust enough to operate without human involvement and oversight. Not only will genAI fail to magically solve the talent shortage, in the near term it may even make it worse by creating new training demands. Culture is a factor, too: even if experienced security practitioners pick up new AI tools quickly, it can still take weeks or months of habit change to integrate those tools into their workflows.
Despite all that, there are compelling security use cases for current genAI systems. By augmenting existing capabilities, AI can help teams do more and get better results with less drudgery, especially in areas such as application development and detection and response.
"The faster you're able to [generate reports], the more time you're spending working an event."
Jeff Pollard
#2 – Grab the quick wins
One task where security teams stand to gain fast from genAI is generating documentation. Action summaries, event write-ups, and other types of reports are tedious and time-consuming to produce, but they need to get done. GenAI can produce them on the fly, freeing up security practitioners to immerse themselves in more events and work more incidents.
One caveat is that security professionals need good communication skills to perform their roles. AI-generated reports may save time, but that can't come at the cost of professional development.
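As a concrete illustration of the report-generation win, a team might assemble structured event data into an LLM prompt instead of writing summaries from scratch. The sketch below is hypothetical: the function name, field names, and event values are invented for illustration and are not part of any product discussed here. The resulting prompt string would be sent to whatever LLM the team uses, with a human still reviewing and owning the final report.

```python
from datetime import datetime, timezone

def build_report_prompt(event: dict) -> str:
    """Assemble an incident-summary prompt from structured event fields.

    Constraining the model to the supplied facts reduces the risk of
    speculative content creeping into the draft.
    """
    lines = [
        "Draft a concise incident report from the facts below.",
        "Do not speculate beyond the data provided.",
        f"Incident ID: {event['id']}",
        f"Detected: {event['detected_at']}",
        f"Severity: {event['severity']}",
        "Observed indicators:",
    ]
    lines += [f"- {ioc}" for ioc in event["indicators"]]
    return "\n".join(lines)

# Invented sample event for illustration only.
event = {
    "id": "IR-2024-0042",
    "detected_at": datetime(2024, 5, 2, 14, 7, tzinfo=timezone.utc).isoformat(),
    "severity": "High",
    "indicators": ["outbound C2 beacon to 203.0.113.7", "suspicious PowerShell spawn"],
}
prompt = build_report_prompt(event)
```

The point of the wrapper is repeatability: every analyst's draft starts from the same template and the same facts, which keeps AI-generated write-ups consistent enough to review quickly.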
GenAI can also recommend next best actions and query existing knowledge bases to surface usable information faster than a human. The key in these cases is to make sure AI outputs align with the organization's needs and methodologies. If a defined process has seven steps and an AI companion recommends four, a human still needs to make sure all seven are followed, both to achieve the desired outcomes and to keep the actions compliant with corporate policies and external regulations. Inadvertent shortcuts could have serious consequences.
"It's not like the 'Minority Report' pre-crime unit, but at least [you] can see enough across a huge volume of data about what an attack path could or would look like."
David Roth
#3 – Be more proactive with AI
GenAI has the potential to turn the 'big data problem' into a big data opportunity, allowing security teams to become much more proactive than they are today by identifying changes in the attack surface and running attack path scenarios. While it might not predict exactly what will happen, it can position security teams to get ahead of threats that would otherwise slip by.
How effective this turns out to be in practice depends on how well an organization understands its systems, configurations, and current states. If that awareness has gaps, the AI will have gaps too. Unfortunately, such gaps are common today: even large enterprises often have data and documentation scattered across multiple spreadsheets on different computers.
This points to the importance of good, AI-ready data hygiene and standardized approaches to data management. The better the raw material, the more AI can do with it.
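AI-ready hygiene can start with something as mundane as forcing scattered inventory rows into one schema. The sketch below is an assumption-laden illustration (the column aliases and sample rows are invented): it normalizes records exported from different spreadsheets into a single canonical form, and marks any field it cannot fill, since those are exactly the gaps the AI will inherit.

```python
def normalize_asset(row: dict) -> dict:
    """Map heterogeneous inventory rows onto one canonical asset schema."""
    # Invented alias lists: each canonical field may appear under several
    # column names across the organization's spreadsheets.
    aliases = {
        "hostname": ["hostname", "host", "machine_name"],
        "owner": ["owner", "contact", "responsible"],
        "os": ["os", "operating_system", "platform"],
    }
    out = {}
    for field, names in aliases.items():
        for name in names:
            if name in row and row[name]:
                out[field] = str(row[name]).strip().lower()
                break
        else:
            out[field] = "unknown"  # a data gap the AI will inherit unless filled
    return out

# Two rows exported from different (hypothetical) spreadsheets.
rows = [
    {"host": "WEB-01 ", "responsible": "alice", "platform": "Ubuntu 22.04"},
    {"machine_name": "db-02", "os": "Windows Server 2019"},
]
assets = [normalize_asset(r) for r in rows]
```

Counting the "unknown" values per field gives a crude but honest measure of how AI-ready the inventory actually is.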
"If you have enterprise software in your business, it has AI now, whether it's SAP Joule, Salesforce with Einstein GPT, Microsoft with Copilot, or the dozens of other names out there. That's another area you need to worry about, because this changes the ways users interact with company data."
Jeff Pollard
#4 – Watch out for shadow AI
Enterprises have legitimate concerns about AI leaking sensitive company or customer information. This can happen through employee use of unauthorized tools or via sanctioned enterprise software, which is increasingly being augmented with AI capabilities. In the past, a bad actor would need to know how to hack into an ERP system to access unauthorized data inside it; with AI, the right prompt could too easily surface the same information.
Enterprises need to secure themselves against employee shadow AI and illegitimate use of approved AI tools. They also need to take care when using large language models (LLMs) to build applications of their own. The underlying data needs to be secured, along with the application being built, the LLM itself, and the system prompts.
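One practical starting control for shadow AI is auditing egress logs against an allowlist of approved AI services. The sketch below is purely illustrative: the log format, domains, and hint substrings are all invented assumptions, not a real detection rule.

```python
# Hypothetical allowlist of sanctioned AI services.
APPROVED_AI_DOMAINS = {"copilot.example-corp.com"}

# Substrings that suggest an AI/LLM service; tune these for your environment.
AI_HINTS = ("gpt", "llm", "copilot", "chat")

def flag_shadow_ai(log_lines: list[str]) -> list[str]:
    """Return destination hosts that look like AI services but aren't on the allowlist."""
    flagged = []
    for line in log_lines:
        # Assume the destination is the last field, formatted as dst=<host>.
        host = line.split()[-1].split("=")[-1].lower()
        if host in APPROVED_AI_DOMAINS:
            continue
        if any(hint in host for hint in AI_HINTS):
            flagged.append(host)
    return flagged

# Invented proxy-log lines for illustration.
logs = [
    "2024-05-02T14:07Z user=jsmith dst=chat.unsanctioned-llm.example",
    "2024-05-02T14:08Z user=jsmith dst=copilot.example-corp.com",
    "2024-05-02T14:09Z user=jsmith dst=files.intranet.example",
]
suspects = flag_shadow_ai(logs)
```

A crude filter like this won't catch everything, but it gives security teams visibility into unsanctioned AI use before deciding on governance, which beats trying to ban tools outright.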
These risks basically boil down to a new set of problems: bring-your-own-AI problems, enterprise app problems, and product security or innovation problems. Each type requires its own protective measures, and each falls within CISO responsibilities, even when CISOs aren't in charge of the related projects.
"It almost reminds me of the early hyper-growth cloud days.... Being able to enforce through governance and security controls is harder when you're not keeping up with the speed of the business."
David Roth
#5 – You need an AI strategy
There are helpful parallels between the shadow IT app frenzy of the early cloud days and the present state of AI. While security leaders called unsanctioned apps 'shadow IT', business leaders and investors saw their use as 'product-led growth'. And security teams learned quickly that they couldn't ban those innovations: any attempt to clamp down just drove usage underground, where it couldn't be managed at all.
Security teams need to accept and adapt to the AI reality. Even if it isn't currently fulfilling all the hopes and dreams of its most fervent champions, and even if it encounters some setbacks, in two to three years AI will be far more mature and powerful than it is today. Organizations can't afford to wait and try to secure it after it's adopted.
That means the opportunity to get ready is right now: develop security-oriented AI strategies, learn about the technology, and be prepared for when it really takes off. Many observers feel security teams got caught flat-footed with cloud even though they had plenty of lead time. Given the potency and complexity of AI, they can't afford to repeat that mistake.
Take it with a grain of salt, but take it seriously
GenAI may not yet have lived up to the hype of its transformative potential, but it nonetheless has meaningful applications in cybersecurity. It won't solve the skills shortage in the near term, but it can lift some of the burden off security teams. And the better organizations manage and maintain their IT data, the more AI will be able to detect and even prevent over time. By taking lessons from recent experiences with shadow IT and cloud adoption, security teams can prepare effectively for the day when AI does start to realize its wilder dreams, and keep their enterprises safe.
Explore more AI perspectives
Check out these additional resources:
- [Video] [Podcast] Responding to the AI security hype: Insights to help CISOs take action
- [Video] AI regulation challenges, trends, and tips for cybersecurity leaders
- [Video] Xday 2024: The unofficial AI survival guide
- [Infographic] Survival Guide for AI-Generated Fraud