Trust fractured across multiple layers of the digital stack in 2025, exposing the limits of long‑standing trust frameworks and forcing security teams to reimagine how assurance is established and withdrawn. According to the original report, credential sprawl, session hijacking and AI‑driven impersonation eroded traditional identity signals, turning routine access into a persistent risk that defenders had to evaluate in real time. [1]
What emerged from the year’s turmoil was a practical shift: organisations began treating incidents as a baseline expectation rather than anomalies to be avoided. Cando Wango told the roundtable that “shadow AI and autonomous agents are forcing organisations to assume incidents rather than hope to prevent them,” framing resilience as a visible business differentiator and driving demand for auditable proof of readiness rather than self‑attested posture. That change aligns with academic work proposing trust‑native architectures that bake verification into agent infrastructure rather than layering oversight on afterwards. [1][4]
Identity proved the recurring hinge point. Aaron Painter warned that “Zero Trust still depends on recognizing who is asking for access,” reflecting a consensus that stronger identity assertion and recovery must precede any trustworthy access model. Industry voices at the roundtable and subsequent research both stress cryptographic and hardware‑backed mechanisms, continuous validation of machine identities, and lifecycle controls for keys and certificates to prevent attackers from exploiting automated trust decisions. [1][4]
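To make the machine-identity point concrete, the sketch below shows what continuous validation and lifecycle-aware access decisions might look like in code. The registry fields, thresholds and workload names are illustrative assumptions, not details drawn from the roundtable or the cited research.

```python
"""Minimal sketch of continuous machine-identity validation.

Illustrative only: the schema, thresholds and names below are assumptions.
"""
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class MachineIdentity:
    workload: str                 # e.g. a service or agent name (hypothetical)
    cert_not_after: datetime      # certificate expiry
    last_attested: datetime       # last successful posture/attestation check
    revoked: bool = False


def access_decision(ident: MachineIdentity,
                    now: datetime,
                    max_attestation_age: timedelta = timedelta(minutes=15),
                    rotate_before_expiry: timedelta = timedelta(days=7)) -> str:
    """Return 'deny', 'rotate' or 'allow' for a single machine identity."""
    if ident.revoked or now >= ident.cert_not_after:
        return "deny"                      # expired or explicitly revoked
    if now - ident.last_attested > max_attestation_age:
        return "deny"                      # stale attestation: re-verify first
    if ident.cert_not_after - now < rotate_before_expiry:
        return "rotate"                    # still valid, but schedule key rotation
    return "allow"


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    svc = MachineIdentity(
        workload="billing-agent",
        cert_not_after=now + timedelta(days=3),
        last_attested=now - timedelta(minutes=5),
    )
    print(svc.workload, "->", access_decision(svc, now))   # -> rotate
```

The design point is that the decision is re-evaluated on every request rather than granted once, which is the behaviour the roundtable's "continuous validation" framing implies.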
AI amplified both the problem and the possible solutions. Several participants described AI as “the sword and the shield,” noting rapid weaponisation of generative tools alongside their defensive utility. Academic proposals such as BlockA2A and TrustTrack map directly to these concerns: decentralised identifiers and blockchain‑anchored ledgers for immutable audit trails, smart contracts for dynamic access control, and embedded policy commitments and tamper‑resistant behavioural logs to make agent behaviour verifiable at runtime. Together they point toward systems that can attest who an agent is and what it did without relying solely on vendor declarations. [1][2][4]
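As a rough illustration of the tamper-resistant behavioural logs these proposals describe, the sketch below hash-chains agent actions so that any later edit or deletion is detectable. It deliberately omits the decentralised identifiers, blockchain anchoring and smart contracts that BlockA2A and TrustTrack actually specify; the agent names and actions are hypothetical.

```python
"""Hash-chained agent behaviour log: a simplified stand-in for the
tamper-resistant audit trails described in the cited papers (which rely on
DIDs and blockchain anchoring; this sketch uses only a local hash chain).
"""
import hashlib
import json
from typing import Dict, List


def _entry_hash(prev_hash: str, record: Dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


class BehaviourLog:
    GENESIS = "0" * 64

    def __init__(self) -> None:
        self.entries: List[Dict] = []

    def append(self, agent_id: str, action: str, detail: Dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        record = {"agent": agent_id, "action": action, "detail": detail}
        self.entries.append({"record": record, "hash": _entry_hash(prev, record)})

    def verify(self) -> bool:
        """Recompute the chain; any edited or dropped entry breaks it."""
        prev = self.GENESIS
        for entry in self.entries:
            if entry["hash"] != _entry_hash(prev, entry["record"]):
                return False
            prev = entry["hash"]
        return True


if __name__ == "__main__":
    log = BehaviourLog()
    log.append("agent-7", "tool_call", {"tool": "search", "query": "invoice 42"})
    log.append("agent-7", "file_write", {"path": "/tmp/report.csv"})
    print("chain valid:", log.verify())                 # True
    log.entries[0]["record"]["detail"]["query"] = "x"   # tamper with history
    print("after tampering:", log.verify())             # False
```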
Software and AI supply chains also moved from peripheral risk to primary attack surface. Roundtable participants highlighted SBOMs becoming routine in CI pipelines and the need to extend that concept into AI workflows; recent scholarship introduces TAIBOM, a tailored “SBOM for AI” that propagates integrity statements and provenance across heterogeneous pipelines. Practical adoption hinges on making these provenance signals part of continuous delivery and anomaly detection so that components that appear out of band can trigger verifiable alarms. [1][3]
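A minimal sketch of such a provenance gate follows, assuming a simple name-to-hash manifest checked during a CI step. TAIBOM and established SBOM formats such as SPDX or CycloneDX define far richer schemas, so the manifest layout, paths and hashes here are purely illustrative.

```python
"""Sketch of an SBOM-style integrity gate for a CI step.

The manifest format is invented for illustration; real SBOM/TAIBOM documents
carry much more provenance than a flat name-to-hash map.
"""
import hashlib
from pathlib import Path
from typing import Dict, List


def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def check_build(manifest: Dict[str, str], build_dir: Path) -> List[str]:
    """Compare built artefacts against declared hashes.

    Returns a list of findings; an empty list means the build matches its
    declared provenance. Unknown files are treated as out-of-band components.
    """
    findings: List[str] = []
    seen = set()
    for artefact in build_dir.rglob("*"):
        if not artefact.is_file():
            continue
        name = str(artefact.relative_to(build_dir))
        seen.add(name)
        declared = manifest.get(name)
        if declared is None:
            findings.append(f"UNDECLARED component: {name}")
        elif declared != sha256_of(artefact):
            findings.append(f"HASH MISMATCH: {name}")
    for name in set(manifest) - seen:
        findings.append(f"MISSING declared component: {name}")
    return findings


if __name__ == "__main__":
    # Hypothetical manifest entries; a real pipeline would load these from
    # the SBOM produced at build time.
    manifest = {"model/weights.bin": "<declared sha256>",
                "pipeline/train.py": "<declared sha256>"}
    for finding in check_build(manifest, Path("build")):
        print(finding)
```

Wiring a check like this into continuous delivery is what lets an out-of-band component trigger an alarm automatically rather than surfacing only in a post-incident review.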
Trust metrics are shifting from paper to cryptography and runtime enforcement. The roundtable emphasised that documentation alone (SBOMs, model cards, policy statements) does not equal control. New research on verifiable SLA claims shows how trusted hardware monitors and zero‑knowledge proofs can turn machine‑readable promises into cryptographic attestations, enabling parties to prove compliance or violations without exposing sensitive telemetry. The lesson for defenders is clear: build enforcement and proof into the execution path, not into post‑hoc reports. [1][5]
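The sketch below illustrates only the basic shape of that idea: a monitor commits to a digest of its telemetry and signs a machine-readable compliance claim, so a verifier can check the claim without receiving the raw data. It substitutes a shared-key HMAC for the trusted-hardware signatures and zero-knowledge proofs in the cited work, and every name and value is a placeholder.

```python
"""Simplified verifiable-SLA-claim sketch.

The cited research relies on trusted-hardware monitors and zero-knowledge
proofs; this stand-in only shows the pattern of a signed claim plus a
commitment to the underlying telemetry. HMAC is used purely for brevity.
"""
import hashlib
import hmac
import json

MONITOR_KEY = b"demo-shared-key"   # stand-in for a key held by a trusted monitor


def attest(telemetry: list, claim: dict) -> dict:
    """Produce a signed claim plus a commitment to the telemetry behind it."""
    commitment = hashlib.sha256(
        json.dumps(telemetry, sort_keys=True).encode()).hexdigest()
    body = {"claim": claim, "telemetry_commitment": commitment}
    tag = hmac.new(MONITOR_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}


def verify(attestation: dict) -> bool:
    """Check the monitor's signature over the claim and commitment."""
    expected = hmac.new(MONITOR_KEY,
                        json.dumps(attestation["body"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"])


if __name__ == "__main__":
    latencies_ms = [12, 9, 14, 11, 10]   # stays with the monitor, never shared
    claim = {"sla": "p95_latency_ms <= 20",
             "window": "2025-12-01T00:00Z/1h",
             "compliant": True}
    att = attest(latencies_ms, claim)
    print("verifier accepts:", verify(att))   # True, without seeing latencies_ms
```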
Governance and accountability remain the social half of the problem. Several contributors argued that oversight groups without clear ownership fail in crisis, and that legal and regulatory regimes lag technical capability. A socio‑technical perspective on public trust in generative AI reinforces this: trust is produced by networks of actors, not by isolated artefacts, and operationalising responsibility across builders, deployers and agents will be a legal and organisational challenge in 2026. [1][6]
Practically, progress will come from compositional change: embedding verification into agent infrastructure, extending SBOM principles to AI components, and deploying cryptographic monitors that create auditable, privacy‑preserving proofs of behaviour. According to the original report, organisations that defined control responsibilities early and instrumented change paths held up best; the emerging academic frameworks offer concrete mechanisms to operationalise those lessons at scale. [1][2][3][4][5]
The transition will not be instantaneous. Roundtable contributors urged caution against fear‑driven testing that misses real‑world adversary techniques and stressed the need for human judgement layered over automation. As the sector moves into 2026, the combination of governance, verifiable runtime controls and transparent supply‑chain provenance will determine whether trust becomes a resilient guarantee or an increasingly fragile assumption. [1][6]
## Reference Map:
- [1] (LastWatchdog, Byron V. Acohido) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 8, Paragraph 9
- [2] (arXiv, BlockA2A) - Paragraph 4, Paragraph 9
- [3] (arXiv, TAIBOM) - Paragraph 5, Paragraph 9
- [4] (arXiv, TrustTrack) - Paragraph 2, Paragraph 4, Paragraph 9
- [5] (arXiv, verifiable SLA proofs) - Paragraph 6, Paragraph 9
- [6] (Springer, socio‑technical trust in generative AI) - Paragraph 7, Paragraph 9
Source: Noah Wire Services