Fivetran, the global leader in data movement, today announced the results of a survey showing that 81 percent of organisations trust their AI/ML outputs despite admitting to fundamental data inefficiencies. On average, organisations lose six percent of their global annual revenue, or $406 million (based on respondents from organisations with an average global annual revenue of $5.6 billion USD), to underperforming AI models built on inaccurate or low-quality data, which result in misinformed business decisions.

Conducted by independent market research specialist Vanson Bourne, the online survey polled 550 respondents across the US, UK, Ireland, France and Germany from organisations with 500 or more employees. It found that nearly nine in 10 organisations are using AI/ML methodologies to build models for autonomous decision-making, and 97 percent plan to invest in generative AI over the next one to two years. At the same time, organisations report challenges with data inaccuracies and hallucinations, as well as concerns around data governance and security. Organisations leveraging large language models (LLMs) report data inaccuracies and hallucinations 42 percent of the time.

“The rapid uptake of generative AI reflects widespread optimism and confidence within organisations, but under the surface, basic data issues are still prevalent, which are holding organisations back from realising their full potential,” said Taylor Brown, co-founder and COO at Fivetran. “Organisations need to strengthen their data integration and governance foundations to create more reliable AI outputs and mitigate financial risk.”

Different “AI realities” exist across various job roles


Approximately one in four (24 percent) organisations reported that they have reached an advanced stage of AI adoption, where they use AI to its full advantage with little to no human intervention. However, there is significant disagreement between respondents who work closely with the data and those more removed from its technical detail.

Technical executives, who build and operate AI models, are less convinced of their organisations' AI maturity, with only 22 percent describing it as "advanced," compared to 30 percent of non-technical workers. When it comes to generative AI, non-technical workers' higher confidence is matched by greater trust: 63 percent fully trust it, compared with 42 percent of technical executives.

There is further dissonance between data experts at different levels of seniority within an organisation. While those in more junior positions see outdated IT infrastructure as the top barrier to building AI models (49 percent), their more senior colleagues say the primary problem is that employees with the right skills are focused on other projects (51 percent). In practice, data workers are forced to direct their time towards manual data processes such as cleaning data and fixing broken data pipelines. In fact, organisations admit that their data scientists spend the majority (67 percent) of their time preparing data rather than building AI models.

Bad data practices still prevalent

The root cause of both wasted data talent and underperforming AI programmes is the same: inaccessible, unreliable and incorrect data. The magnitude of the issue is shown by the fact that most organisations struggle to access all the data needed to run AI programmes (69 percent) and to cleanse the data into a usable format (68 percent).

New generative AI use cases have introduced further complications, with 42 percent of respondents experiencing data hallucinations. These can lead to ill-informed decisions, erode trust in LLMs and staff willingness to use them, and consume staff time in locating and correcting the data. With 60 percent of senior management, who are responsible for strategic decisions, using generative AI, any issues with the quality and trustworthiness of data are further amplified.

Data governance a key focus area for AI use

Fears around generative AI use also remain, with "maintaining data governance" and "financial risk due to the sensitivity of data" tied as the top concerns among organisations (37 percent each). Solid data governance foundations will be particularly important for organisations that plan to either build their own generative AI models or use a combination of existing external and internally developed models. However, as the majority (67 percent) of respondents plan to deploy new technology to strengthen basic data movement, governance and security functions, there is reason for optimism.
