Two of the products under General Intelligence Systems live in commercial real estate. Gravity Realty Group is an AI-driven brokerage. Titan01 is an intelligence layer that extracts opportunity signals from CRE data noise. So we're not writing from the outside. We're writing from a year of watching what actually moves and what doesn't.
The short version: the consumer-facing AI features in CRE (the 3D walkthroughs, the chatbot leasing assistants) are mostly window dressing. The back-office and analyst-facing AI is where the real money is. And the gap between small teams using new tools well and big brokerages still operating like it's 2015 is widening fast.
What's not working (yet)
Let's get the hype out of the way first.
AI-generated 3D walkthroughs
Cool tech demo. Useful for marketing the listing once it's already polished. Doesn't drive deals. Buyers of $50M industrial properties don't decide based on a Gaussian splat. They decide based on T-12 financials, cap rate, and whether the roof needs replacement.
AI leasing chatbots
For multifamily, fine. For commercial, the deals are too high-value and the questions too specific. A prospective tenant for a 30,000 square foot office isn't going to be qualified by a chatbot. They're going to talk to a broker. The chatbot intercept is friction.
Auto-generated comps
There are several startups offering "AI comps" that scrape public data and produce comparable sales reports. The reports are technically accurate. They miss everything that matters. The deal structure, the seller motivation, the broker's known asking range. CRE comps are a relationship business. Public data is the worst slice of the truth.
Predictive pricing models
The pitch is that AI can value any property in any market. The reality is that the training data is thin and the market context is everything. A model trained on national data underestimates micro-market premia. A model trained on micro-market data overfits. Real brokers still beat the models on most assets, and even where the models win, the smart brokers treat them as a sanity check rather than a source of truth.
What's actually working
Now the interesting half.
Document extraction from offering memos and PSAs
This is the biggest single win in the last 18 months. CRE generates an enormous volume of unstructured PDFs. Offering memorandums, purchase and sale agreements, rent rolls, T-12 financials, environmental reports. Every one of them used to require an analyst to manually pull numbers into a spreadsheet.
Modern multimodal models can read these documents and extract the relevant data with high accuracy. Tenant rosters, lease expirations, NOI by year, cap rate, in-place rents. Analyst work that took half a day is now ten minutes plus a review pass. The leverage on a small team is enormous.
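The "review pass" is where the leverage actually lives: the model does the reading, but the extracted numbers get validated before they touch a spreadsheet. A minimal sketch of that validation step — the field names, the schema, and the sanity bounds here are illustrative, not a standard:

```python
import json
from dataclasses import dataclass

# Fields we might ask a model to extract from an offering memo.
# Names and units are illustrative, not a standard schema.
@dataclass
class OMExtract:
    property_name: str
    noi_annual: float        # net operating income, USD
    cap_rate_pct: float      # e.g. 6.25 means 6.25%
    lease_expirations: list  # e.g. ["Acme Corp: 2027-06"]

def parse_extraction(raw: str) -> OMExtract:
    """Validate the model's JSON output before it enters the workbook."""
    data = json.loads(raw)
    cap = float(data["cap_rate_pct"])
    # Plausibility bound catches decimal-vs-percent unit mistakes
    # (a model emitting 0.065 when the memo says 6.5%).
    if not 2.0 <= cap <= 15.0:
        raise ValueError(f"implausible cap rate: {cap}")
    return OMExtract(
        property_name=str(data["property_name"]),
        noi_annual=float(data["noi_annual"]),
        cap_rate_pct=cap,
        lease_expirations=list(data.get("lease_expirations", [])),
    )
```

The point isn't the parsing; it's that a ten-minute review pass scales only if obvious model errors are caught mechanically.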
Opportunity signal from noise
This is what Titan01 does, and we've been continually surprised at how much signal is buried in public data once you have models that can read it at scale. Permit filings. Loan maturity dates. Foreclosure notices. LLC formations. Tenant move-outs. None of these signals individually mean much. Stacked, scored, and correlated, they identify properties that are about to come to market before they're listed.
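The stacking step is conceptually simple, even if the data plumbing behind it isn't. A minimal sketch — the signal names and weights below are illustrative placeholders, not Titan01's actual model (a real system would fit weights from observed outcomes):

```python
# Illustrative weights for weak public-data signals.
SIGNAL_WEIGHTS = {
    "loan_maturing_12mo": 0.30,
    "foreclosure_notice": 0.35,
    "tenant_moveout":     0.20,
    "permit_filed":       0.10,
    "new_llc_formed":     0.05,
}

def opportunity_score(signals: set) -> float:
    """Stack independently weak signals into one score in [0, 1]."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if name in signals)

def rank_properties(props: dict) -> list:
    """props maps property_id -> set of observed signals; highest score first."""
    return sorted(props, key=lambda p: opportunity_score(props[p]), reverse=True)
```

The output isn't a prediction; it's a prioritized call list, which is exactly the artifact a broker can act on.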
This is a structurally hard problem for big brokerages because their information advantage was relationship-based. They knew about deals because they were on the call list. A small team with good data infrastructure can now be on the same call list, plus the data list, plus the signal list. The information moat is shrinking.
Pre-call research that's actually thorough
Old broker workflow before a meeting: glance at the property record, maybe check LoopNet, walk in. New workflow: a fifteen-minute model-generated brief on the property, the ownership, the loan, the tenancy history, the local submarket trends, and likely seller motivations. You walk in informed in a way that wasn't possible without an analyst behind you.
This sounds incremental. In practice it's transformative because brokers who do this consistently win more listings.
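The brief-generation step is mostly prompt assembly over whatever sources you can pull. A sketch of that assembly — the source names and the prompt wording are illustrative, and the model call itself is out of scope here:

```python
def assemble_brief_prompt(address: str, sources: dict) -> str:
    """Concatenate per-source research notes into one grounded prompt.

    `sources` maps a source name ("ownership", "loan", "tenancy",
    "submarket", ...) to raw text pulled for this property.
    All names are illustrative.
    """
    sections = "\n\n".join(
        f"## {name}\n{text}" for name, text in sources.items() if text
    )
    return (
        f"Write a one-page pre-meeting brief for {address}.\n"
        "Use ONLY the facts below; flag anything uncertain.\n\n"
        f"{sections}\n\n"
        "End with three likely seller motivations, each tied to a fact above."
    )
```

The "use only the facts below" constraint is the load-bearing line: a brief a broker walks into a meeting with has to be grounded, not plausible.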
Outreach personalization at scale
Cold outreach in CRE used to be done with mail merge templates. Open rates were terrible. With current models, you can generate genuinely personalized outreach grounded in the property's actual context (the loan that matures in 14 months, the recent vacancy, the comparable sale that priced higher than expected) and the response rates jump meaningfully.
The caveat is that the outreach has to feel human. AI-flavored cold emails get filtered. We've written separately about how to keep AI-assisted writing from reading as AI, and the same patterns apply here.
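One way to keep the outreach grounded is to separate fact-gathering from drafting: the facts become explicit hooks, and the model only writes around them. A sketch, with illustrative field names (the fact keys are assumptions, not a real schema):

```python
def outreach_hooks(facts: dict) -> list:
    """Turn structured property facts into concrete hooks for an email.

    Every hook is a verifiable fact about the property, not
    model-generated filler. Keys are illustrative.
    """
    hooks = []
    if months := facts.get("loan_maturity_months"):
        hooks.append(f"your loan comes due in about {months} months")
    if vac := facts.get("recent_vacancy_sf"):
        hooks.append(f"the recent {vac:,} sf vacancy")
    if comp := facts.get("comp_sale"):
        hooks.append(
            f"{comp['address']} just traded at a {comp['cap_rate']}% cap"
        )
    return hooks
```

If the hooks list comes back empty, that's a signal to skip the send: a personalized email with nothing personal in it is exactly the kind that gets filtered.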
Internal knowledge retrieval
Big brokerages have decades of files. Closed deal memos. Old market reports. Past appraisals. None of it is searchable in any useful way. Modern retrieval-augmented systems can make the whole archive queryable in plain English. "Show me every industrial deal we closed in the Phoenix MSA over 50,000 square feet in the last five years where the seller was a 1031 buyer." That used to be a research project. Now it's a question.
This is the kind of win that doesn't show up in pitch decks because it doesn't have a flashy UI. But it changes how analysts spend their time, which changes the cadence of the whole firm.
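The retrieval shape behind that plain-English query is straightforward: score every archived memo against the question, rank, return the top few, and hand those to a model for the answer. A pure-Python sketch using term overlap as a stand-in for embedding similarity (a production system would use embeddings, but the score-rank-return structure is the same):

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercased word/number multiset."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def relevance(query: str, doc: str) -> float:
    """Fraction of query terms covered by the document."""
    q, d = tokenize(query), tokenize(doc)
    overlap = sum(min(q[t], d[t]) for t in q)
    return overlap / max(1, sum(q.values()))

def retrieve(query: str, archive: dict, k: int = 3) -> list:
    """archive maps memo_id -> memo text; returns top-k ids by relevance."""
    ranked = sorted(archive, key=lambda m: relevance(query, archive[m]),
                    reverse=True)
    return ranked[:k]
```

Swapping the scorer for embeddings changes recall quality, not the architecture — which is why this kind of system can be stood up against decades of files without restructuring them first.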
Where small teams have the edge
Here's the part that's structurally interesting. The big brokerages have data. They have relationships. They have brand. What they don't have is the agility to ship AI-native workflows fast.
Compliance reviews take six months. Vendor approvals take a year. By the time a new AI tool is approved at a Fortune 500 brokerage, the small competitive shop has been using it for eighteen months and has wired it into every step of their process.
This is why a five-person team running on modern infrastructure can now compete with, and in some markets beat, much larger firms. The work-per-broker ratio has changed. One analyst with the right tools can do what three analysts did in 2022.
That's the bet behind Gravity. Build a brokerage where AI is in the foundation, not bolted on. Where the analyst tooling is custom because we control it. Where the brokers spend their time on the parts of the deal that genuinely need humans, and the rest of the work happens in the background.
What to expect next
If you're operating in CRE and trying to figure out where to invest your tooling budget, our take in 2026:
- Underweight: consumer-facing AI features. Walkthrough generators. Marketing copy tools. Chatbots for listings.
- Overweight: document extraction, signal aggregation, personalized outreach, internal knowledge retrieval.
- Watch: agentic systems that can run a multi-step research process end-to-end. They're not quite there yet but they're close, and when they land they'll change the analyst role meaningfully.
The longer-term thing to watch is how the data layer evolves. Right now CRE data is fragmented across CoStar, Crexi, Reonomy, LoopNet, county records, and proprietary brokerage CRMs. The firms that can stitch this together cleanly will have an enormous structural advantage. We think this is one of the most undervalued problems in commercial real estate technology.
If you're working on something in this space, or if you want to see what an AI-native brokerage actually looks like in practice, check out Gravity or Titan01. And if you're thinking about adjacent problems, see our piece on what AI-native software actually means.