This article is the second in a series about artificial intelligence’s applications in the multifamily sector and the risks associated with its use. Click here for the first article.
Artificial intelligence tools have proliferated quickly over the past several years, with products now available to carry out leasing, marketing and data analysis for multifamily companies.
Some of the latest AI products run on generative AI — models, such as OpenAI’s ChatGPT, capable of producing new content based on patterns in their training data. They are distinct from predictive AI, which has been used in multifamily applications for many years, most visibly in the form of chatbots on leasing websites.
“Chatbots don't create content from scratch,” Donald Davidoff, president of Littleton, Colorado-based real estate revenue consultancy D2 Demand Solutions, told Multifamily Dive. “They interact with users to provide content and answers from its training set.”
However, generative AI is still in its infancy, and far from infallible. Mistakes and falsehoods presented by large language model-type AI — known as “hallucinations” — are a common occurrence, and have even made the news in some high-profile cases. The most recent was on Feb. 20, when a bug in ChatGPT caused it to produce gibberish responses to prompts over the course of one night, according to The Verge.
Properly deploying a generative AI tool means understanding what it can and can’t do. Here are some tips for multifamily pros to consider before taking the leap:
Seeing through the hype
AI and its potential capabilities have generated interest across all industries, and multifamily is no exception.
However, at its core, generative AI produces its results based on patterns in its data set and is currently poor at performing critical analysis, according to Davidoff. This leaves it vulnerable to incorrect conclusions and to biases present in its source data, and makes it prone to producing oversimplified, low-quality or generic text, according to a research paper from the University of Southern California.
U.S. companies are struggling to provide their workers with information on how best to use AI-driven products. As many as 71% of workplaces have little to no guidance on how to use AI, or when and where it is most effective, according to a recent Gallagher report.
Dom Beveridge, principal of Houston-based consulting firm 20for20, said that he saw a rush of new AI-based products, companies and technologies at multifamily conferences following the launch of ChatGPT in late 2022. However, by late 2023, this was no longer the case. “There was really nothing on offer that people hadn’t been talking about for most of the year already,” Beveridge told Multifamily Dive.
For now, anyone looking to purchase or implement a generative AI tool should be aware of the true extent of what it can and cannot do — and in particular what it cannot replace.
“It seems like everybody, or at least many people who are in AI positions, and vendors, they try to answer questions like AI can be used for everything, AI will replace everything,” Davidoff said. “And I use the analogy — you can use a chisel to unscrew a screw. But you can also cut yourself. If you just use a flathead screwdriver, you won’t cut yourself.”
This, of course, does not preclude the possibility of technological evolution. “I’m not going to get rid of my PR firm [right now] and say OK, [AI] can handle it,” Jim Love, vice president of marketing and brand at Chicago-based real estate firm Draper and Kramer, told Multifamily Dive. “But where is [AI] going? I’m excited to see that, because it’s this next-generation tool.”
Understanding AI’s limits
While ChatGPT is the most widely known large language model-style AI, its open-ended nature makes it a poor fit for most business-oriented tasks. “It could do all sorts of inappropriate things,” Davidoff noted.
In these cases, purpose-built AI tools — designed to accomplish a particular task and trained on a much narrower dataset than broad large language models — are better options.
“They aren’t the open-ended generative AI like ChatGPT,” Davidoff said, referring to tools more specialized for leasing or marketing. “They're working off of a fixed corpus of information, and they're pretty quick to realize, ‘I can't answer that.’... So you don't see those hallucinating, or at least I've not heard of that happening at all.”
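The behavior Davidoff describes — a tool working off a fixed corpus that declines to answer out-of-scope questions rather than hallucinating — can be illustrated with a minimal sketch. The corpus entries, similarity measure and refusal threshold below are purely hypothetical, not drawn from any actual leasing product:

```python
# Minimal sketch of a fixed-corpus assistant: it answers only when a
# question closely matches an entry in its knowledge base, and refuses
# otherwise, instead of guessing. All entries and thresholds are
# illustrative, not from any real vendor.

def tokenize(text: str) -> set[str]:
    """Lowercase words, stripped of surrounding punctuation."""
    return {w.strip(".,?!'\"").lower() for w in text.split()}

# Hypothetical fixed corpus of known questions and vetted answers.
CORPUS = {
    "What is the pet policy?": "Two pets max; $300 deposit per pet.",
    "When is rent due?": "Rent is due on the 1st of each month.",
    "Is parking available?": "Covered parking is $75 per month per space.",
}

def answer(question: str, threshold: float = 0.5) -> str:
    """Return the best-matching answer, or refuse if no entry is close enough."""
    q = tokenize(question)
    best_score, best_reply = 0.0, None
    for known_q, reply in CORPUS.items():
        k = tokenize(known_q)
        overlap = len(q & k) / len(q | k)  # Jaccard similarity of word sets
        if overlap > best_score:
            best_score, best_reply = overlap, reply
    if best_score < threshold:
        return "I can't answer that."  # refuse rather than fabricate
    return best_reply
```

The key design choice is the refusal branch: an open-ended model will always produce *some* answer, while a corpus-bound tool can recognize when a question falls outside what it knows.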
Generative AI tools are best suited to processing very large amounts of data — terabytes and petabytes. Multifamily data sets operate on a much smaller scale. “We live in a world of small data in multifamily leasing on the operation side,” Davidoff said. “We have megabytes and maybe gigabytes.”
The ‘black box’ problem
Another issue with AI is that the tech is what Davidoff calls a “black box” — it can be difficult to tell why or how it made the decisions that it did, or to reproduce its results. “In places where you have very large data, and you're willing to accept black box answers, AI is spectacular,” he said. “But when you have moderate data or small datasets, and transparency and understanding is key, that's a real challenge.”
Given how new the technology is to the industry, the misuse of AI is likely to be common in the near future, according to Chris Snyder, vice president of engineering at San Diego-based property management automation company Zego, which uses AI in its digital payment and expense management products.
“There will be side effects and discoveries about the misuse of AI,” Snyder told Multifamily Dive. “And some high-profile company will do something that they shouldn't have, and it'll make the news and people will reach broader conclusions from that.”
However, while this may breed uncertainty about the technology, Snyder believes the industry will overcome its hesitancy, given the trends of the past. “‘Mobile’ was a buzzword, so was ‘cloud,’” said Snyder. “AI is going to go from a buzzword to a commodity.”