There is a number making the rounds in commercial real estate that should give every proptech enthusiast pause. According to a 2025 survey by JLL and Keyway-Appraisal covering 150 U.S. real estate professionals, 88 percent of CRE investors and landlords are now piloting AI in some form. That figure sounds like a revolution. The next number tells you it is not: only 5 percent of those firms report achieving all of their AI programme goals.
That is not a rounding error. It is a 95 percent failure rate among companies that are actively investing time and money into making AI work. And the gap between adoption and achievement is not closing. If anything, it is widening, because firms keep moving the goalposts on what they expect AI to deliver without fixing the infrastructure problems that prevent it from delivering anything reliably.
The Bait and Switch Firms Play on Themselves
The pattern is remarkably consistent. A firm starts with a sensible, achievable AI use case: automating lease abstractions, building a chatbot for tenant enquiries, or streamlining reporting. These operational efficiency projects show early promise. Costs come down. Speed goes up. Someone presents the results to leadership.
Then the goalposts move. If AI can automate a lease abstract, surely it can help with underwriting. If it can summarise a rent roll, maybe it can flag investment opportunities. The conversation shifts from "help us do what we already do, faster" to "help us generate revenue and expand margins." As JLL's Chief Technology Officer has noted, the second category is a fundamentally harder problem, one that requires far more than plugging in a new tool.
This is not unique to real estate. It is a pattern across industries adopting AI. But real estate has a compounding problem that makes the jump from operational efficiency to strategic decision-making harder than in most sectors: the data is a mess.
The Forty-Platform Problem
At a recent Urban Land Institute session on AI readiness, one executive reported that a single corporate member collects data across forty different software platforms that do not communicate with each other. Lease data in one system, financial reporting in another, building management in a third, tenant communications in a fourth. Forty silos, each with its own schema, its own access controls, and its own version of the truth.
This is not an outlier. A ULI survey found that over 75 percent of real estate firms have significant data readiness gaps. Only 8 percent report data infrastructure that is actually ready for AI at scale. The rest are trying to build AI applications on top of fragmented, inconsistent, and often incomplete data, which is a bit like trying to build a house on a foundation made of mismatched Lego bricks.
One firm described devoting 17 percent of its 120-person workforce, roughly twenty data engineers, for four years just to structure its data into a usable format. Four years. Twenty people. And this was a firm with the resources and foresight to recognise the problem early. Most firms have not started.
The uncomfortable reality is that before you can do AI, you have to do data integration. And data integration in real estate is unglamorous, expensive, multi-year work that no one wants to fund because it does not produce a demo you can show at a conference.
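To make that work concrete, here is a minimal sketch of the canonical-schema step that data integration boils down to. The platform names, field names, and mappings below are invented for illustration, not drawn from any particular firm's systems; a real project involves hundreds of fields and messy edge cases per platform.

```python
# Hypothetical sketch: mapping records from siloed platforms onto one
# canonical schema, and recording which facts each source cannot supply.

CANONICAL_FIELDS = {"property_id", "tenant", "annual_rent", "lease_end"}

# Each source platform stores the same facts under different names.
FIELD_MAPS = {
    "lease_admin": {"prop_ref": "property_id", "tenant_name": "tenant",
                    "rent_pa": "annual_rent", "expiry": "lease_end"},
    "finance": {"asset_code": "property_id", "occupier": "tenant",
                "income_annual": "annual_rent", "term_end": "lease_end"},
}

def normalize(record: dict, source: str) -> dict:
    """Map one source record onto the canonical schema, noting gaps."""
    mapping = FIELD_MAPS[source]
    out = {canon: record[raw] for raw, canon in mapping.items() if raw in record}
    out["_missing"] = sorted(CANONICAL_FIELDS - out.keys())
    out["_source"] = source
    return out

raw = {"prop_ref": "BLDG-7", "tenant_name": "Acme Ltd", "rent_pa": 250_000}
print(normalize(raw, "lease_admin"))  # "_missing" will list "lease_end"
```

Even this toy version shows why the work takes years: every platform needs its own mapping, every mapping needs maintenance, and the "_missing" list is where the real reconciliation arguments begin.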
The Trust Deficit
Even where the data is adequate, there is a human problem. The same Keyway-Appraisal survey found that 44 percent of investment committees actively distrust AI-generated analysis. Only 27 percent express any trust in AI for financial underwriting. This is not technophobia from people who do not understand the tools. These are professionals who understand that investment decisions carry real consequences, and who have seen enough AI hallucinations to be cautious.
Their caution has some empirical backing. In a 2024 New York Surrogate's Court case, Matter of Weber, a judge ran an identical valuation prompt through Microsoft Copilot three times and received three different answers. A law firm, Pullman and Comley, ran their own experiment: three attorneys entered identical prompts simultaneously across multiple generative AI models. The results were, in their words, "wildly divergent." One model fabricated phantom sales data to support its valuations. Another changed its answer after a user suggested it might be wrong, demonstrating the sycophancy bias that makes large language models unreliable for adversarial-context analysis.
When a tool gives you a different answer each time you ask the same question, distrust is not an irrational response. It is a reasonable one.
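That variability is straightforward to measure. The sketch below is a hypothetical consistency check in the spirit of the experiments described above: ask the same question several times and see how far the answers spread. The stub standing in for the model call, and the 5 percent tolerance, are assumptions for illustration; a live version would call an actual model API.

```python
import statistics
from itertools import cycle

def consistency_check(ask, prompt, runs=3, tolerance=0.05):
    """Ask the same question `runs` times; flag the result as
    inconsistent if the answers spread more than `tolerance`
    relative to their mean."""
    answers = [ask(prompt) for _ in range(runs)]
    mean = statistics.mean(answers)
    spread = (max(answers) - min(answers)) / mean if mean else 0.0
    return {"answers": answers, "spread": spread, "consistent": spread <= tolerance}

# Stub standing in for a real model call; the valuations are invented
# to mimic the kind of divergence the court and law-firm tests saw.
_fake_model = cycle([4_200_000, 3_650_000, 5_100_000])
result = consistency_check(lambda p: next(_fake_model),
                           "What is the market value of the subject property?")
print(result["consistent"])  # False: the spread is roughly a third of the mean
```

A harness this simple, run before any AI-generated number reaches an investment committee, would surface exactly the instability those experiments documented.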
What the Five Percent Get Right
The firms that do achieve their AI goals share a few characteristics that have nothing to do with which model they use or which vendor they choose.
First, they treat data infrastructure as a prerequisite, not a parallel workstream. They invest in cleaning, structuring, and centralising their data before they start building AI applications on top of it. This is slow and expensive, but it means their AI tools have something reliable to work with.
Second, they keep their goals narrow and operational for longer than feels comfortable. Rather than jumping from "automated lease abstractions" to "AI-driven investment strategy," they spend months refining the operational use case, building internal confidence, and accumulating structured data that can eventually support more ambitious applications.
Third, they maintain human oversight as a design choice, not a compromise. The most effective AI implementations in real estate are not autonomous decision-makers. They are screening tools that take a hundred deals and surface the ten to twenty worth deeper human review, or document processors that extract data and flag anomalies for human verification. The human stays in the loop because the human is what makes the output trustworthy.
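That screening pattern can be sketched in a few lines. The deal fields, scoring rule, and occupancy threshold below are invented for illustration; the point is the shape of the workflow: the model ranks, a slice goes to a human review queue, and anomalies within that slice are flagged rather than auto-rejected.

```python
def screen_deals(deals, score_fn, top_n=20, anomaly_fn=None):
    """Rank deals by a model score and surface the top slice for
    human review. Nothing is approved or rejected automatically."""
    ranked = sorted(deals, key=score_fn, reverse=True)
    shortlist = ranked[:top_n]
    flagged = [d for d in shortlist if anomaly_fn and anomaly_fn(d)]
    return {"review_queue": shortlist, "flagged": flagged}

# Toy example: 100 hypothetical deals scored by cap rate, with
# sub-95%-occupancy deals in the shortlist flagged for closer checking.
deals = [{"id": i, "cap_rate": 0.04 + (i % 10) * 0.005,
          "occupancy": 0.6 + (i % 5) * 0.1} for i in range(100)]
out = screen_deals(deals,
                   score_fn=lambda d: d["cap_rate"],
                   top_n=15,
                   anomaly_fn=lambda d: d["occupancy"] < 0.95)
print(len(out["review_queue"]), len(out["flagged"]))  # 15 5
```

The design choice worth noticing is that the anomaly flag narrows human attention rather than replacing it: the fifteen deals all reach a person, and the five flagged ones arrive with a reason to look harder.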
The gap between the 5 percent and the 95 percent is not about technology adoption. It is about organisational patience, data discipline, and realistic expectations. The firms that succeed are the ones that understand AI as an infrastructure project, not a feature you bolt on.
Where This Leaves the Industry
The commercial real estate industry is not going to abandon AI. The economics are too compelling: Alpaca VC estimates the addressable market for underwriting automation alone at one to two billion dollars, with over eleven billion dollars in economic impact from value currently lost to poor execution. But the path from here to there runs through years of unsexy data work that most firms have barely started.
Meanwhile, the gap between firms with clean data infrastructure and firms without it is becoming a competitive advantage in itself. The firm that spent four years and twenty engineers structuring its data is not just better positioned for AI. It is better positioned for every analytical task, every reporting requirement, and every strategic decision that depends on having a single, reliable version of the truth. AI is the catalyst, but the underlying advantage is informational.
The 95 percent failure rate is not a verdict on AI's potential in real estate. It is a verdict on the industry's readiness for it. The technology works. The data, in most firms, does not. And until that changes, the gap between AI's promise and its delivery will remain wide enough to drive a forty-platform data architecture through.