By Vik Chaudhry

Keeping it Real: Data’s Critical Role in Optimizing Power Utility Infrastructure Inspections



Data has a critical role in optimizing utility infrastructure inspections. File photo

This column discusses the importance of data in utility infrastructure inspections, the paradigm shift underway, and how data intersects with inspection workflows.

Maintaining a resilient power grid is a time-intensive and expensive endeavor. As Deloitte points out, “capital expenditures have trended upward in recent years as utilities boost spending to upgrade aging infrastructure; harden systems against increasingly severe climate events; modernize and digitize processes; defend against cyberattacks; and address the growing mandate for cleaner energy sources.” Forward-thinking power utility companies are investing in technology to streamline operations and manage physical assets as part of this expenditure uptick. One area that’s seeing significant innovation is infrastructure inspections.


Utility companies are increasingly using drones fitted with cameras to inspect transmission and distribution (T&D) infrastructure – adding to an already growing set of visual data captured via helicopter and fixed-wing aircraft. To make all of this visual data actionable, power utilities are also investing in software that can quickly analyze, identify and predict infrastructure issues ranging from downed power lines to encroaching vegetation to woodpecker holes. Faults in T&D lines play a major role in the reliability of power systems, and early detection is critical for minimizing problems.


A Paradigm Shift Underway

Until recently, the process of detecting infrastructure faults from massive amounts of visual data has been manual, time-consuming, and expensive. Artificial intelligence (AI) and other digital innovations have helped the industry streamline inspection processes and get the most out of its historical and real-time data. However, not all AI is the same. Accuracy (and therefore usefulness) is critical in sectors like healthcare (tumor detection) and utilities (major blackout and wildfire prevention). Just as with humans, the quality of AI's work largely depends on its training. While we're not all data scientists, it's important to understand the "why" in an outcome-driven world.


Data, Data Everywhere

Utility companies have exponentially more data than ever – from customer usage via smart meters to transformer data and other sources. In an Analytics in Power & Utilities report, Andres Carvallo, Chief Information Officer of Austin Energy, said: "[Smart meter data] requires 200 TB of storage space [annually] when disaster recovery redundancy is factored in …" Why is this good news for innovative infrastructure inspection solutions? Data is the lifeblood of machine learning and AI – the more it gets, the better it learns and performs. However, not all data (and therefore AI) is created equal. So, what's the difference between real and synthetic data, and why does it matter?


Real vs. Synthetic Data

Synthetic data has generated plenty of buzz around the topic of deepfakes. In the utility sector, real data refers to whatever can be captured in the field, usually images or video taken by helicopters, fixed-wing aircraft, or drones. This information is crucial for ongoing maintenance and repairs, but also for training purposes. The advantage of real data is that it maps 1:1 with what the industry refers to as "ground truth" – an accurate representation of the physical-world scenarios a technician is likely to encounter, including things like blurriness or background noise. These are real-world conditions that highlight the unpredictable variables of fault detection.

Conversely, synthetic data in the case of utility infrastructure is created by taking one image and replicating it while synthetically altering various elements to account for an exponential number of possible scenarios. While this is an expedient method of generating AI training data, it unfortunately falls short in effectiveness.


Synthetic Data is Great on Paper, but Not in Practice

The problem with synthetic data thus far is that the resulting models can't yet optimize for real-world scenarios. Training on it is more akin to memorizing a single case study of a specific problem. Synthetic data generation might take one image and then manipulate it in multiple ways, such as rotating it 90 degrees. In an actual fault detection process, however, the angle of capture can change how a problem is spotted or solved.
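To make the distinction concrete, here is a minimal sketch of this kind of image-level synthetic generation in Python, assuming the Pillow library and a hypothetical field photo named insulator.jpg. Every "new" sample below is derived from the same single capture, which is exactly the limitation described above.

```python
# A minimal sketch of image-based synthetic data generation, assuming an
# inspection photo at the hypothetical path "insulator.jpg". Each variant
# is a transformation of the same single capture, so the "new" samples
# share one underlying viewpoint rather than true real-world variety.
from PIL import Image, ImageEnhance

source = Image.open("insulator.jpg")  # hypothetical field capture

variants = []
for angle in (90, 180, 270):          # simple geometric manipulation
    variants.append(source.rotate(angle, expand=True))
for factor in (0.6, 1.4):             # crude stand-in for lighting changes
    variants.append(ImageEnhance.Brightness(source).enhance(factor))

for i, img in enumerate(variants):
    img.save(f"insulator_synthetic_{i}.png")
```

However many variants such a script produces, they all inherit the viewpoint, background, and sensor characteristics of one photograph – which is why rotated copies are a poor substitute for genuinely different captures.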


Models trained with real data from the start are proven to be more accurate. However, AI models (by definition) don't stop learning after the initial set of training data. Ongoing data inputs also matter when it comes to increasing accuracy and getting smarter over time. That isn't to say synthetic data holds zero potential value. Synthetic data generation could be effective if there were enough "ground truth" data available for the synthetic data models to account for the various factors that occur in real-world data capture. In that scenario, the models could more accurately generate varied, diverse synthetic data.


Lifelong Learning with “Human-in-the-Loop” Technology

Given the nascent status of AI in the utility sector, the technology often works best with an assist from a real person. Referred to as the “human-in-the-loop” approach, Stanford posits that “instead of thinking of automation as the removal of human involvement from a task, we should imagine it as the selective inclusion of human participation – resulting in a process that harnesses the efficiency of intelligent automation while remaining amenable to human feedback.”


For utility companies, this approach allows engineers and other technicians to be included in the decision making, helping train the AI and revising its learning in real time to match real-world conditions once they are on site to inspect and service a problem that computer vision identified or predicted. Especially with fault detection, the human-in-the-loop approach strengthens AI insights with the institutional knowledge and on-the-job experience of veteran workers. And with 50% of the utility sector's workforce expected to retire over the next decade, the opportunity for AI to capture that perspective is invaluable (and urgent).
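As an illustration only, here is a minimal human-in-the-loop sketch in Python. The confidence threshold, labels, and review function are hypothetical stand-ins rather than any specific vendor's API: low-confidence detections are routed to a technician, and the confirmed labels are queued for the next round of training.

```python
# A minimal human-in-the-loop sketch. The model, threshold, and review
# step here are hypothetical stand-ins, not a specific vendor's API:
# low-confidence detections are routed to a technician, and the verified
# labels are collected for the next round of training.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff for automatic acceptance

@dataclass
class Detection:
    image_id: str
    label: str        # e.g. "woodpecker_hole", "vegetation_encroachment"
    confidence: float

@dataclass
class ReviewLoop:
    training_examples: list = field(default_factory=list)

    def triage(self, det: Detection) -> str:
        if det.confidence >= CONFIDENCE_THRESHOLD:
            return det.label                  # trusted automatically
        verified = self.ask_technician(det)   # human fills the gap
        self.training_examples.append((det.image_id, verified))
        return verified

    def ask_technician(self, det: Detection) -> str:
        # Placeholder for a real review UI, where a field engineer
        # would confirm or correct the model's suggestion.
        return det.label

loop = ReviewLoop()
print(loop.triage(Detection("tower_0042.jpg", "woodpecker_hole", 0.62)))
print(f"{len(loop.training_examples)} example(s) queued for retraining")
```

The design choice worth noting is the threshold: automation handles the confident cases, while the uncertain ones become both a human decision today and a training example tomorrow.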


Building Future-Ready Workflows

AI's role in advancing the utility industry largely hinges on the accuracy and usability of the data. Forward-thinking utility leaders understand that their organizations are better served when humans and technology work together to continuously improve data quality. In aggregate, the impact of these technological advancements is faster fault detection, which in turn prevents major disasters. A future-ready workforce is one that embraces the power of AI to streamline workflows and break down data silos so that utility experts can act more quickly with the most accurate information in their corner.


Vikhyat Chaudhry is the co-founder, COO and CTO of Buzz Solutions. This column appeared on March 7, 2023, in Energy Central and can be found here.

