For its December technology focus, The Actuary Asia connected with Kittipon Sarnvanichpitak, FSA, Principal Data Scientist at AIA in Bangkok, for a wide-ranging conversation on artificial intelligence and its bearing on actuarial practice. According to the original report, Kittipon argued that while generative AI and large language models (LLMs) are changing workflows, explainability and human oversight remain central to professional actuarial judgment. [1]

Kittipon described explainable machine learning as "a set of processes and methods that allow users to understand and trust results from ML algorithms," noting the fundamental trade-off between predictive performance and interpretability. He highlighted variable importance measures, Shapley additive explanations (SHAP) and partial dependence plots as practical techniques for illuminating how complex models reach their outputs, and emphasised their value for both global explanations of overall model behaviour and local explanations of individual predictions. [1]
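
To make those techniques concrete, the sketch below shows one way they might be applied in Python with scikit-learn and the shap package. The synthetic dataset and the gradient-boosting model are illustrative assumptions, not anything described in the interview.

```python
# Illustrative only: synthetic data stands in for a pricing or lapse dataset.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Global view: which inputs the fitted ensemble relies on most.
print("Variable importances:", model.feature_importances_)

# Local view: SHAP attributes each individual prediction to its input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
print("SHAP values for the first record:", shap_values[0])

# Global view of one feature's marginal effect on predictions.
pd_result = partial_dependence(model, X, features=[0])
print("Partial dependence on feature 0:", pd_result["average"][0][:5])
```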

Industry guidance echoes that emphasis. Practical frameworks for explainable AI stress model simplification, visualisation and the use of interpretable surrogate models to make complex systems comprehensible to non‑technical stakeholders, while recommending design choices that bake interpretability into models from the outset. These approaches help firms demonstrate transparency to business leaders and regulators and reduce the risk of opaque decisioning. [2][3][4]
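
As a hedged sketch of the surrogate-model idea in that guidance, the example below approximates a black-box classifier with a shallow decision tree whose rules can be read directly. The dataset, models and depth limit are assumptions for illustration, not taken from the cited sources.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's predictions, not the true labels,
# so the tree explains the model's behaviour rather than the data itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the simple tree tracks the complex model.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")

# The tree's if-then rules are readable by non-technical stakeholders.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```

The fidelity figure matters in practice: it tells reviewers how far the surrogate's simple rules can be trusted as a description of the underlying model's behaviour.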

Kittipon positioned generative AI, in the form of LLMs that can draft memos, summarise documents and extract meaning from unstructured text, as an augmentation of actuarial work rather than a replacement for it. "These outputs still require human review, but they can significantly reduce time spent on documentation and communication tasks," the interview stated, adding that LLMs can accelerate review of policy wordings, filings and regulatory texts. The claims and anecdotal experience referenced in the interview suggest large productivity gains, particularly in documentation-heavy processes. [1]

The interview also detailed where AI has begun to reshape traditional actuarial workflows: data preparation pipelines, faster integration with cloud and modern programming tools, and increased involvement of actuaries in AI projects as reviewers and governance leads. Kittipon said actuaries increasingly participate in validating underwriting and customer‑facing models, quantifying the implications of automated decisions on pricing, reserving and portfolio risk. [1]

Those shifts bring ethical and regulatory challenges. Kittipon warned of "bias amplification" from historical data, regulatory friction where non‑traditional predictors are used, and the persistent problem of accountability when even powerful models "can still be wrong." Best-practice guidance urges organisations to pair interpretability tools with governance measures, model inventories, documentation standards, monitoring and human‑in‑the‑loop controls in order to limit unintended harms and meet compliance expectations. [1][5][6]

On professional preparedness, Kittipon advised actuaries to "stay curious," retain core actuarial fundamentals and engage in interdisciplinary projects so they can critique and guide applied AI work. Firms and professional bodies have likewise been advised to train teams on interpretability techniques and to embed governance frameworks that make model decisions auditable and explainable to internal and external stakeholders. [1][5][2]

Looking ahead, Kittipon predicted that over the next five to ten years AI will be "a supporting infrastructure for actuarial work, rather than a replacement," enabling faster assumption monitoring, automated routine calculations and broader scenario exploration while preserving conservatism and long‑term solvency as core actuarial responsibilities. Thought leadership on explainable AI argues the same: interpretability tools will grow more sophisticated, but responsible deployment will require continuous oversight, clear communication and ethical guardrails. [1][7][4]

For actuaries, the practical takeaway in the interview is clear: adopt AI where it improves efficiency, demand explainability where decisions affect customers and capital, and develop the governance and interdisciplinary skills necessary to ensure those systems remain trustworthy and fit for purpose. [1][5][6]

## Reference Map:

  • [1] (The Actuary Asia) - Paragraph 1, Paragraph 2, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 7, Paragraph 8, Paragraph 9
  • [2] (IBM blog: 9 ways to improve explainable AI) - Paragraph 3, Paragraph 7
  • [3] (IBM blog: understanding explainable AI) - Paragraph 3
  • [4] (IBM blog: advancements in explainable AI) - Paragraph 3, Paragraph 8
  • [5] (IBM blog: implementing explainable AI in business) - Paragraph 6, Paragraph 7, Paragraph 9
  • [6] (IBM blog: ethical considerations in explainable AI) - Paragraph 6, Paragraph 9
  • [7] (IBM blog: future of explainable AI) - Paragraph 8

Source: Noah Wire Services