Avnet’s newly released 2026 Insights survey finds a clear inflection: engineering teams are moving from experimentation to embedded AI, but confidence and operational readiness lag behind. According to the original report, 56% of 1,200 engineers surveyed worldwide said they are shipping products that incorporate AI, up from 42% a year earlier, signalling that AI is becoming a default component in product designs even as significant implementation hurdles persist. [1]

Data quality emerged as the highest‑ranked design challenge in the Avnet findings, with 46% of respondents flagging it as a top issue. The report frames this as a classic “garbage in, garbage out” constraint: engineers are working with much larger datasets than in earlier waves of product development, and poor input quality directly undermines model performance and business value. Industry surveys and vendor research amplify this concern, reporting widespread and growing data‑quality pain as organisations scale AI beyond prototypes. [1][2][5]
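In practice, the “garbage in, garbage out” constraint is usually addressed with validation gates that reject bad batches before they reach training. The sketch below is an illustrative assumption about what such a gate might look like, not anything described in the survey; the field name, range and threshold are hypothetical.

```python
# Illustrative data-quality gate: reject a batch of sensor records whose
# combined null rate and out-of-range rate exceeds a threshold.
# Field names, ranges and the 5% threshold are hypothetical examples.

def quality_report(records, field, lo, hi, max_bad_frac=0.05):
    """Return (ok, stats) for one numeric field across a batch of dicts."""
    total = len(records)
    missing = sum(1 for r in records if r.get(field) is None)
    present = [r[field] for r in records if r.get(field) is not None]
    out_of_range = sum(1 for v in present if not (lo <= v <= hi))
    bad_frac = (missing + out_of_range) / total if total else 1.0
    return bad_frac <= max_bad_frac, {
        "total": total,
        "missing": missing,
        "out_of_range": out_of_range,
        "bad_fraction": round(bad_frac, 3),
    }

# A toy batch: one missing reading and one physically implausible one.
batch = [{"temp_c": 21.5}, {"temp_c": None}, {"temp_c": 19.8}, {"temp_c": 540.0}]
ok, stats = quality_report(batch, "temp_c", lo=-40, hi=125)
```

Here the gate rejects the batch (two of four records are bad), which is exactly the kind of early, cheap check that keeps poor inputs from silently degrading a model downstream.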

Operational burdens are shifting from cost and initial integration to continuous maintenance. Avnet reports that 54% of engineers view continuous learning and maintenance as the leading operational challenge, reflecting the practical realities of drift detection, retraining pipelines and governance. Analysts warn that models left unmaintained degrade quickly and that many projects stall after proof of concept for these reasons. The implication is stark: launching a capability has become the easy part; sustaining it is where projects live or die. [1]
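The drift detection mentioned above can be as simple as comparing a live feature window against the training baseline and flagging retraining when the shift is large. The z-score heuristic and the threshold below are illustrative assumptions, not Avnet’s methodology.

```python
# Minimal drift check: how far has the live feature mean moved from the
# training baseline, measured in baseline standard deviations?
# The 3-sigma retraining threshold is an illustrative assumption.

import statistics

def drift_score(baseline, live):
    """Mean shift of the live window, in baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma

def needs_retraining(baseline, live, threshold=3.0):
    """Trigger the retraining pipeline when drift exceeds the threshold."""
    return drift_score(baseline, live) >= threshold

# Toy data: a stable training baseline and a clearly shifted live window.
baseline = [10.0, 10.2, 9.9, 10.1, 9.8, 10.0, 10.3, 9.7]
drifted = [12.5, 12.8, 12.4, 12.9, 12.6, 12.7]
```

Real pipelines use richer statistics per feature, but the operational shape is the same: a scheduled monitor, a threshold, and an automated path back into retraining, which is where the maintenance burden the survey describes actually lives.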

The survey clarifies where AI is actually being embedded today: measurable, contained use cases such as process automation (42%), predictive maintenance (28%) and fault or anomaly detection (28%) dominate deployments. These are applications where outcomes are quantifiable and failure modes can be constrained; manufacturing, industrial sensors and quality control remain prominent adopters. Market research and sector analyses show a similar concentration of value in predictive maintenance and inspection workloads. [1]

Architecturally, hybrid approaches are common. Avnet found 57% of respondents prioritise Edge AI and cloud ML equally, reflecting trade‑offs in latency, bandwidth, privacy and resilience that drive splitting inference across device and cloud. Multimodal integration, combining vision, text, speech and time‑series models, is increasingly presented as an engineering integration problem rather than a single‑model breakthrough, and market forecasts envisage rapid growth in production‑grade multimodal systems. [1]
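One common way to split inference across device and cloud is confidence-based escalation: a compact on-device model handles the fast path, and a larger cloud model is invoked only when the edge result is uncertain. The sketch below is a hypothetical illustration of that pattern; the model stubs and the confidence cutoff are placeholder assumptions, not products from the survey.

```python
# Hypothetical hybrid inference router: try the on-device model first,
# escalate to the cloud model only when edge confidence is low.

def edge_model(sample):
    """Stand-in for a compact on-device classifier: (label, confidence)."""
    return ("anomaly", 0.62) if sample["vibration"] > 0.8 else ("normal", 0.97)

def cloud_model(sample):
    """Stand-in for a larger cloud-hosted model, called only on escalation."""
    return ("anomaly", 0.91)

def hybrid_infer(sample, min_confidence=0.9):
    """Route a sample: keep it on-device when confident, else escalate."""
    label, conf = edge_model(sample)
    if conf >= min_confidence:
        return label, conf, "edge"  # fast path: low latency, data stays local
    label, conf = cloud_model(sample)
    return label, conf, "cloud"  # slow path: higher accuracy, costs bandwidth
```

The design choice mirrors the trade-offs the survey highlights: routine samples never leave the device (latency, privacy, bandwidth), while ambiguous ones get the heavier model.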

Engineers’ tool preferences point toward professionally tuned, domain‑specific models rather than public general‑purpose LLMs. Avnet’s respondents said only 16% would prefer a publicly available LLM for technical questions, while 47% would rather use an LLM trained by engineers outside their organisation. “Engineers would prefer to be using an LLM trained by their peers outside of their organization to answer technical questions, as opposed to a publicly trained LLM,” Alex Iuorio, Senior Vice President of Global Supplier Development at Avnet, said in the company Q&A accompanying the survey. That preference underscores demand for provenance, stronger evaluation and documentation tailored to engineering constraints. [1]

Mainstream LLM usage is widespread among hardware and systems engineers, but market positions are shifting. Avnet reported use rates of ChatGPT at 69%, Gemini at 57% and Copilot at 50%. “This is not something our survey looked at,” Iuorio added when asked about Gemini’s traction, before reiterating the preference for peer‑trained over publicly trained LLMs. Broader reporting and regulatory attention around distribution and bundling suggest vendor reach, as well as model performance on engineering benchmarks, will continue to shape adoption. [1]

Trust and verification remain decisive operational frictions. Stack Overflow’s 2025 Developer Survey and related analyses show a similar paradox: high adoption paired with low trust. The developer survey found roughly eight in ten developers use AI tools in their workflows, yet only about a third trust the accuracy of AI outputs and many report time wasted debugging AI‑generated code. That pattern dovetails with Avnet’s finding that verification and monitoring now occupy more engineering effort than initial generation, advantaging teams that invest in automated evaluation pipelines, staged rollouts and rigorous test coverage. [2][3][5]
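The automated evaluation pipelines that paragraph describes often take the form of a regression-style release gate: score the model against a fixed reference set and block the rollout if it falls below the prior baseline. The sketch below is a simplified assumption about that pattern; the exact-match metric and the toy model are hypothetical.

```python
# Illustrative evaluation gate for a staged rollout: a candidate model must
# meet the previous baseline accuracy on a fixed reference set before it
# ships. The exact-match metric is a simplifying assumption.

def evaluate(model_fn, eval_set):
    """Fraction of reference cases the model answers exactly right."""
    hits = sum(1 for case in eval_set
               if model_fn(case["input"]) == case["expected"])
    return hits / len(eval_set)

def release_gate(model_fn, eval_set, baseline_accuracy):
    """Only promote the candidate if it meets the baseline accuracy."""
    return evaluate(model_fn, eval_set) >= baseline_accuracy

# Fixed reference set plus a stand-in for the model under test.
eval_set = [
    {"input": "2+2", "expected": "4"},
    {"input": "3*3", "expected": "9"},
    {"input": "10/2", "expected": "5"},
]

def toy_model(question):
    """Hypothetical stand-in for a real model's answers."""
    return {"2+2": "4", "3*3": "9", "10/2": "5"}.get(question)
```

Production versions add fuzzy matching, per-category breakdowns and human review queues, but the principle matches the survey’s finding: teams that invest in this verification layer spend less effort debugging outputs later.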

Taken together, Avnet’s results and corroborating developer surveys suggest the competitive advantage in commercial AI may shift away from headline models toward the less glamorous but critical layers: data pipelines, evaluation frameworks, monitoring, governance and maintenance. “Almost half of the engineers surveyed being aware of the data quality challenge is a positive: knowing there’s a problem is an important step in solving it,” Alex Iuorio said, while cautioning that the technology remains “new and ever‑changing.” For engineering organisations, the task ahead is practical and unglamorous: make AI easier to verify, easier to maintain, and better aligned with existing workflows so that adoption translates into reliable, long‑term value. [1]

## Reference Map

  • [1] (R&D World / Avnet 2026 Insights) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 7, Paragraph 9
  • [2] (Stack Overflow 2025 Developer Survey) - Paragraph 8, Paragraph 9
  • [3] (Stack Overflow analysis for leaders) - Paragraph 8
  • [5] (Admin Magazine reporting on Stack Overflow survey) - Paragraph 2, Paragraph 8

Source: Noah Wire Services