The construction industry is embracing Artificial Intelligence (AI) tools, and the effects are materializing. In a September 2025 analysis of survey data from over 2,200 construction professionals globally, the Royal Institution of Chartered Surveyors reported on adoption and risk tolerance for AI and recommended action to accelerate progress and ensure responsible integration of these technologies. Based on the data, 56 percent of respondents planned to increase AI spending in the next year. Many felt adoption of AI would significantly improve scheduling, resource allocation, contract review, and cost and risk management. Surprisingly, 74 percent of respondents described themselves as either “not prepared” or “minimally prepared” to implement any AI solutions. However, with the explosion of AI tools and their positive results, the question is no longer whether construction firms will embrace AI, but how quickly the 45 percent of firms with “zero implementation” will be forced to either catch up or lose competitive ground.

AI agents are already reviewing and editing contracts, processing RFIs, optimizing equipment fleets, and analyzing change orders with measurable efficiency gains. Industry projections suggest that within five years, AI adoption will move from today’s 12-percent regular-use rate to majority implementation across medium and large construction firms, transforming everything from safety compliance to schedule optimization. With adoption of these systems, of course, come challenges. There are plenty of headlines about the impact of AI, but far less discussed are the quiet mistakes showing up in its output, leading to RFIs, change orders, and scope disputes. Almost no one is asking the hard question: Who’s legally responsible when “smart” tools make dumb, or legally actionable, mistakes?

REAL WORLD RISK

Problems caused by the use of AI tools are not theoretical. As in other professional fields such as accounting and law, AI “mistakes” are happening now in construction, and they fall into distinct categories: “unbuildable imagery” that looks stunning but ignores physics, “hallucinated specifications” that cite standards and tests that do not exist, and “copyright infringement” concerns that can expose firms to serious legal liability. Each represents a different flavor of the same problem: AI is extraordinarily good at producing “plausible” information, not necessarily “accurate” or “legally sound” information.

Generative image tools like Midjourney, DALL-E, Nano, and Stable Diffusion have become standard equipment in many design studios. These tools are phenomenal for fast ideation, mood boards, and early visualization. The problem is simple: they optimize for visual impact, not constructability.

The point is that when an engineer and contractor finally examine the details of an AI design, reality intrudes. Either the design gets substantially modified, disappointing an owner who was sold on the original image, or the team attempts heroic (and expensive) engineering to “make it work.” Both scenarios are the foundation for disputes. The first creates expectation gaps and scope arguments. The second creates defect claims and cost overruns.

THE PHANTOM CODE PROBLEM

Long, boilerplate-heavy project specifications are natural targets for AI assistance. Design firms are increasingly using large language models (LLMs), like ChatGPT, Claude, and others, to update specification sections, harmonize requirements across projects, or “refresh to current ASTM standards.” Used carefully, this can save time. Used carelessly, it creates “phantom codes,” plausible-sounding standard designations that do not actually exist.

Engineering librarians at major universities are already warning their students about this exact problem. Washington University in St. Louis explicitly cautions in its mechanical engineering research guides that tools like ChatGPT “might hallucinate the perfect standard,” inventing a convincing designation that cannot be found in any standards catalog. These models are useful for brainstorming but unreliable for precise, reference-level information like standards and regulations. They will confidently generate fabricated citations rather than admit uncertainty. In fact, ASTM International is so concerned about this possibility that it has issued an AI policy prohibiting users from entering its standards into AI tools, at the risk of losing their license to access its libraries.

As an example, a spec writer asking an LLM to update a concrete section to current ASTM and ACI requirements might get back clean, professional text on freeze-thaw durability that includes a real ASTM test method, a blended requirement mixing U.S. and European standards, and a completely invented, non-existent test method. The phantom code sounds plausible, follows ASTM’s numbering convention, and gets missed in QA review. After the spec is issued with the bid documents, the testing lab cannot locate the method. An RFI reveals the problem. Now the project team is facing a contract clarification or change order, scheduling delays while the parties negotiate equivalent testing, potential claims for wasted costs and time, and a legitimate professional liability question: Who owned the duty to verify cited standards?

THE COPYRIGHT MINEFIELD

AI models are trained on vast datasets scraped from the internet, including copyrighted images, text, and technical documents. When these tools generate output, they may reproduce substantial portions of copyrighted work, often in ways that cannot readily be detected. Several major copyright lawsuits against AI companies are in litigation, with plaintiffs arguing that training AI on copyrighted works without permission constitutes infringement.

For construction professionals, this creates exposure in several ways. When using AI to generate concept renderings, the designer has no reliable way to know whether the output incorporates copyrighted works. If marketing materials or presentations include AI-generated images similar to copyrighted works, your firm could face infringement claims. If specs contain text that is substantially similar to copyrighted master specifications or proprietary technical guides, there may be liability even if the text was not knowingly copied. Finally, AI-generated project reports, safety analyses, or technical documentation may incorporate copyrighted text from industry publications, training materials, or other protected sources, or the phantom code references mentioned above.

USING SMART TOOLS SMARTLY 

The Associated General Contractors (AGC) has noted that contractors using AI tools should establish clear policies around verification and review of AI-generated content. While the AGC hasn’t issued comprehensive AI guidance yet, its standard position on professional responsibility applies: Contractors are responsible for the accuracy and legality of documents they submit, regardless of how those documents were created.

The solution is not to reject these tools; it is to use them appropriately. Several guardrails are recommended:

• Label AI outputs clearly. Concept images should be marked “conceptual only, subject to engineering and code compliance.” Do not let AI renderings migrate into contract documents without full professional review.

• Verify every citation. If an AI tool includes a standard number, test method, code reference, or technical requirement, verify it against official sources. Make this a mandatory QA step, like checking calculations (a simple automated screen, like the sketch following this list, can flag designations for manual checking).

• Implement copyright review. Before using AI-generated images or text in external materials, have someone review for potential similarity to known protected works. Consider using reverse image search for AI-generated graphics. Document your review process.

• Harden your spec process. Assign a human specification authority, in-house or retained, who validates every AI-assisted edit against current standards. Treat AI output like a first draft from an intern, not finished work product.

• Update your policies and contracts. Establish internal protocols for AI use. Consider addressing AI tools in professional services agreements, clarifying that all AI output is subject to professional review and approval. On the contracting side, discuss whether AI-generated schedules are informational or binding.
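To make the “verify every citation” step concrete, here is a minimal sketch of how a firm might screen a spec section for ASTM-style designations and flag anything not already on its own verified list. It is illustrative only: the regular expression, the sample designations, and the flag_citations helper are hypothetical, and the verified list would have to be built from the firm’s own QA records and the official standards catalog, not from an AI tool.

```python
import re

# Hypothetical screen: find ASTM-style designations in spec text and flag any
# that have not yet been manually verified against official sources.
# The pattern and the "verified" set below are illustrative assumptions.
ASTM_PATTERN = re.compile(r"\bASTM\s+[A-Z]\s?\d{2,4}(?:M)?(?:-\d{2}[a-z]?)?\b")

# Designations already confirmed by a human reviewer against the official
# ASTM catalog (populate from the firm's own QA records, not an AI tool).
VERIFIED = {"ASTM C666", "ASTM C39"}

def flag_citations(spec_text: str) -> list:
    """Return cited designations that still need manual verification."""
    cited = set(ASTM_PATTERN.findall(spec_text))
    # Strip any year suffix (e.g., "ASTM C666-15" -> "ASTM C666") before comparing.
    return sorted(c for c in cited if c.split("-")[0].strip() not in VERIFIED)

if __name__ == "__main__":
    sample = (
        "Concrete shall meet ASTM C666 for freeze-thaw durability and "
        "ASTM C9999 for accelerated curing."  # the second citation is fictitious
    )
    for designation in flag_citations(sample):
        print(f"VERIFY MANUALLY: {designation}")
```

A screen like this only surfaces designations for human checking; it does not replace the specification authority’s review against the publisher’s official records.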

WHO’S IN RESPONSIBLE CHARGE?

The construction industry has always adopted new technologies: CAD, BIM, drones, laser scanning. Each brought risks that professionals learned to manage through training, protocols, and professional judgment. AI is no different in principle, but it’s different in kind. Previous technologies augmented human capability; they made us faster, more precise, more coordinated. AI can substitute for human judgment if we let it, generating plausible content that bypasses our critical thinking.

The quiet failures all share a common cause: treating AI output as finished work product rather than raw material requiring professional judgment. Used thoughtfully, AI can help construction professionals work faster, explore more options, and catch errors earlier. But the moment we stop asking “Is this right?” and start assuming the AI knows better than we do, we have created a liability that no algorithm can optimize away. The smart firms are asking how to use AI while maintaining professional standards, legal compliance, and jobsite reality. That is the conversation worth having.


ABOUT THE AUTHOR

William Thomas is a principal at Gausnell, O’Keefe & Thomas, LLC in St. Louis, where he focuses his practice on construction claims and loss prevention. He is a member of the International Association of Defense Counsel (IADC), currently serving as chairperson of the IADC’s Construction Law Committee; an AAA Panel Arbitrator; a Fellow with the Construction Lawyers Society of America; and a member of the ABA Forum on Construction, AIA, and ASCE. He can be reached at wthomas@gotlawstl.com.