Jeff Albee

Jeff Albee, Vice President, Stantec

With the entire world racing to adopt Artificial Intelligence (AI), the AEC industries are under immense pressure to keep up and integrate AI into how they work and what they offer to clients. Mistakes in AEC can have devastating consequences, so before firms rush to adopt AI, they need to make sure they’re doing it in a way that’s both compliant and error-free. 

It’s hard to overstate how potentially transformative AI is for architecture, engineering, and construction. It has the ability to streamline project workflows, cutting both labor costs and labor hours from the work architects, scientists, and engineers do. As in almost every other industry, excitement over AI’s transformative promise is translating into a race between competitors eager to move faster while cutting costs in the long run.

AI jargon is making its way into firms’ marketing pages, as CTOs and CIOs are pressured to let their customers know they’re on the cutting edge of technological advancement. Whether it’s a simple Copilot-aided boost in productivity or an in-house solution built to address a specific engineering issue, AEC firms are rushing to ensure they can tell investors and customers their firm is leveraging AI.

The reality is that the rush to adopt AI can lead to an over-reliance on systems that aren’t fully understood or properly vetted. This is remarkably risky in the AEC world, where legal and safety compliance is mandatory and quality standards are non-negotiable. 

The consequences of failing to properly assess and implement AI could be catastrophic, potentially leading to engineering failures or other serious issues that could endanger lives.

The issue for firms (and the clients who employ them) is that the understanding of how to bring AI systems under the compliance umbrella in our industry is relatively immature, and the processes that power AI and Machine Learning (ML) remain a black box that is often left unexplained to the consumers of the outcomes these services produce.

As firms use more generative AI tools in preparing client deliverables, questions arise about the extent to which AI-generated content should be disclosed, reviewed, and measured. For example, if an AI model generates part of a design schematic, who is responsible for ensuring that those elements of the design meet regulatory and safety standards? Should there be an “ingredients” label disclosing that AI was employed in the creation of the work? A warning label? And if so, how should a client distinguish between a broadly available AI system (like Copilot from Microsoft) that’s tried and trusted and a proprietary model that is perhaps less well known?

This lack of clarity could result in over-promising and, worse, scientific errors or design flaws. That obviously creates massive potential liability for AEC firms.

To navigate the complexities of AI adoption, AEC firms must look to established standards and frameworks that provide guidance on quality and compliance. Fortunately, several organizations, both in the United States and internationally, are developing guidelines to help companies manage their AI use responsibly.

The White House, for example, has issued the Blueprint for an AI Bill of Rights, which outlines five principles to protect individuals from the potential harms of AI. These principles include ensuring that AI systems are safe and effective, that individuals have the right to know when AI is being used, and that AI systems do not exacerbate discrimination. Similarly, the European Union’s AI Act aims to regulate the use of AI by categorizing applications based on their level of risk, with stricter requirements for high-risk applications like those in the AEC sector.

Additionally, the National Institute of Standards and Technology (NIST) in the U.S. offers one of the most comprehensive frameworks for understanding and managing AI use. The NIST AI Risk Management Framework (AI RMF) emphasizes four key functions: map, measure, manage, and govern.

Map: Identify and analyze the AI system's use cases and associated risks. This includes understanding the AI system's intended function and the context in which it will be used.

Measure: Evaluate the AI system's performance and reliability against established benchmarks. This step involves assessing the accuracy, robustness, and fairness of the AI system.

Manage: Implement strategies to mitigate identified risks and enhance the AI system’s benefits. This includes developing contingency plans and continuously monitoring the system's performance.

Govern: Establish policies, procedures, and practices to ensure that AI systems are used ethically and in compliance with regulatory requirements. This also involves fostering a culture of accountability and transparency within the organization.
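
None of this has to remain abstract. As a purely illustrative sketch, and not an official NIST artifact, a firm could start by recording each AI use case in a simple risk register whose fields mirror the four functions. The short Python example below assumes hypothetical field names such as accountable_engineer and ai_disclosure_in_deliverable; any real register would be shaped by the firm’s own quality and compliance processes.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """One entry in a firm-level AI risk register, loosely organized
    around the four NIST AI RMF functions (map, measure, manage, govern).
    Field names are illustrative, not part of any official NIST schema."""

    # Map: what the system does and the context in which it is used
    name: str
    intended_use: str
    project_context: str
    identified_risks: list[str] = field(default_factory=list)

    # Measure: observed benchmarks and the minimum values the firm will accept
    benchmarks: dict[str, float] = field(default_factory=dict)   # e.g. {"accuracy": 0.97}
    thresholds: dict[str, float] = field(default_factory=dict)   # e.g. {"accuracy": 0.95}

    # Manage: mitigations and ongoing monitoring
    mitigations: list[str] = field(default_factory=list)
    monitoring_plan: str = ""

    # Govern: accountability and disclosure to the client
    accountable_engineer: str = ""
    ai_disclosure_in_deliverable: bool = False

    def measurements_pass(self) -> bool:
        """Return False if any measured metric falls below its threshold."""
        return all(
            self.benchmarks.get(metric, 0.0) >= minimum
            for metric, minimum in self.thresholds.items()
        )
```

Even a register this simple forces the questions the framework is asking: what is the system for, how well does it perform, what happens when it drifts, and who signs off before the work goes out the door.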

These functions provide a roadmap for AEC firms to assess the quality and compliance of their AI systems, ensuring that they are not only effective but also safe and reliable. As regulatory bodies increasingly turn their attention to AI, compliance with these standards will likely become mandatory. Firms that proactively align their AI practices with these frameworks will be better positioned to adapt to future regulatory changes and maintain their competitive edge.

We must not wait for the first catastrophic failure to figure this out. Using these frameworks now will allow companies to widen their understanding of the broader risks that AI poses.

From that understanding, companies can then take the steps to ask more strategic questions. As they do, they can begin to unlock the use of AI in AEC and move broadly toward models of all kinds that are anchored more in data than in theory.

AI can push the industry in new directions, but only if the industry pushes AI responsibly first.

Jeff Albee is the vice president and director of digital solutions at Stantec.