The Hidden Risks of AI Expansion

Why Leaders Must Act on AI’s Second-Order Consequences Now

While AI’s promise dominates headlines, savvy leaders must now confront its less visible but increasingly urgent consequences. From ChatGPT and Gemini to autonomous, hyper-optimized decision systems, AI is reshaping how governments, businesses, and individuals operate. But beneath this surface transformation lie four second-order risks, both deeply human and deeply technical, that are already impacting critical infrastructure, trust systems, and talent development. Ignoring them could compromise the long-term value of AI.

These challenges are not theoretical; they are already manifesting in governmental and corporate operations, residential neighbourhoods, educational institutions, and the data infrastructure behind some of the world’s largest companies. Here’s what individuals, executives, businesses, and policymakers need to watch for to stay ahead.

AI’s second-order risks are already here. Leaders must address four overlooked consequences now:

  1. Eroding human critical thinking as junior talent overrelies on AI.
  2. Misinformation loops that blur truth and decision-making.
  3. Power grid destabilization and increased maintenance caused by the electrical demands of AI infrastructure.
  4. Training data scarcity that could collapse model quality within a decade.

Business leaders must act now to address these threats or risk costly consequences.

Loss of Fundamental Skills and Over-Reliance on AI

AI is accelerating productivity at the cost of eroding foundational human thinking.

AI is reshaping how people learn and work—but not always for the better. A conversation with the Chief Administrative Officer of a Canadian municipality emphasizes a growing concern: as more people rely on AI-generated answers, the process of learning to think critically is being quietly displaced.

AI tools are often pitched as “augmenting” human capabilities; however, this framing obscures the fact that many students and early-career professionals now use AI as a shortcut for appearing competent, replacing learning altogether rather than enhancing it.

We are now seeing a generational split:

  • Senior professionals use AI to automate rote tasks and refine outputs based on their existing expertise.
  • Junior professionals and students, however, increasingly use AI to bypass the challenging intermediate intellectual, research, and logical steps that build domain expertise, including critical thinking, structured writing, counterargument development, and source validation.

For experienced professionals, AI can accelerate routine tasks and enhance analysis. But when AI becomes a cognitive crutch too early, it prevents the development of foundational competencies. A world in which professionals can generate memos, strategies, or arguments without understanding the principles behind them is one where critical thinking will erode. Fundamentally, this creates a talent risk: decreased critical thinking, innovation stagnation, and a future skills gap that leaves experienced pre-AI employees diametrically positioned against younger generations and potentially irreplaceable altogether.

As AI-generated content proliferates, the threshold for what qualifies as original, high-quality human thinking rises, yet the gap between exceptional and mediocre work widens at the same time. Simply put, as experienced professionals use AI to eliminate the menial work from their jobs and developing professionals increasingly use AI to skip the skill development of previous generations, the productivity ceiling will keep rising while the average capability of younger generations will fall without systemic support. This risks creating a flatter, less innovative talent pool, where truly exceptional human thinking becomes even rarer and harder to cultivate.

The long-term danger is not that AI takes over people’s jobs, but that there are no longer individuals capable of performing those roles even as the roles continue to exist: a world with a workforce that knows how to ask AI for an answer but no longer knows how to construct one independently. This is not an anti-AI position; quite the opposite. Leaders across the board, from teachers and educators to executives and policymakers, must champion a culture where AI accelerates, rather than replaces, deep thinking, and create specific opportunities to develop the skills AI cannot replace.

Misinformation and the Erosion of Reality

The internet is filling with confident fiction, eroding reality-based decision-making in the process.

Even more challenging is the growing inability to distinguish between truth and AI-enhanced fiction, and this problem will only become more apparent over time.

Many AI platforms now generate content that appears authoritative but is factually incorrect, contextually distorted, or entirely fabricated. What makes this more dangerous is the scale. When millions of users rely on AI systems to summarize articles, answer questions, or guide decisions, we’re placing enormous epistemic weight on systems trained not to understand, but to predict.

This risk is magnified by three converging trends:

  1. Synthetic content saturation: AI is now generating a significant portion of the internet’s new content—articles, code, product reviews, and even academic abstracts. When future AIs train on this output, they begin learning from themselves. This circular learning reinforces bias, magnifies errors, and amplifies sensational language.
  2. Probabilistic confidence without truth tracking: LLMs are optimized to produce statistically likely answers, which often happen to be correct, but they do not verify factuality themselves (a toy sketch of this follows this list).
  3. Economic incentives for volume over verification: Platforms optimize for engagement, not epistemic integrity, and AI can generate content at near-zero cost, allowing near-limitless proliferation of text, image, and even video content that fits any narrative with near-indistinguishable realism.
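
To make the second trend concrete, here is a minimal, purely illustrative Python sketch of how a language model chooses its next words: candidate continuations are sampled in proportion to estimated likelihood, and nothing in the loop checks whether the chosen continuation is true. The prompt, the tiny vocabulary, and the probabilities are invented for this example.

    import random

    # Toy illustration (not a real model): a language model assigns probabilities
    # to candidate continuations and samples among them. No step in this process
    # checks whether the selected continuation is factually true.
    continuations = {
        "in 1969": 0.55,   # plausible and true
        "in 1972": 0.30,   # plausible but false
        "on Mars": 0.15,   # implausible and false
    }

    def sample_next(distribution):
        """Pick a continuation in proportion to its assigned probability."""
        tokens, weights = zip(*distribution.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    prompt = "The first crewed Moon landing took place"
    print(prompt, sample_next(continuations))
    # Roughly 45% of the time this prints a fluent, confident falsehood:
    # likelihood, not truth, drives the output.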

The result is a form of ‘epistemic decay’: the degradation of our societal sense of what is real, sourced, or verified. With AI-generated misinformation already undermining public discourse, skewing elections, threatening mental health, and occasionally risking lives, AI will continue to lead us toward a future in which the internet is saturated with confident, persuasive, and untrue content. In the immediate future, it is the obligation of those in positions of authority to:

  1. Develop systems that verify data sources, especially those that might be AI-generated or AI-influenced, enforce chain-of-custody documentation for facts and sources, and flag unverified external information that is crucial, system-critical, or foundational (a minimal sketch of such a record follows this list).
  2. Educate those they teach, govern, or manage on how to identify and combat misinformation, both internally and externally. AI content will only continue to improve in its appearance of truth, even if its actual factuality remains broken. As has been shown many times since the introduction of AI, existing models are not capable of assessing correctness in any meaningful sense; they can only assess whether data appears correct. Whether it’s a lawyer citing fake ChatGPT-generated cases in court or doctors using ChatGPT to diagnose patients, eroded decision-making integrity carries significant and potentially severe reputational, legal, and strategic/operational risks.
  3. Advocate for responsible regulations and internal policies that create meaningful standards for content provenance and transparency. Business leaders, corporations, and individuals are collectively responsible for engaging with policymakers and technology providers to create responsible, meaningful industry standards that protect data accuracy internationally.
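
As a starting point, chain-of-custody documentation can be as simple as a structured record attached to every externally sourced claim. The sketch below is a hypothetical Python data structure; its field names and example values are assumptions for illustration, not an established standard.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    # Hypothetical chain-of-custody record for a single claim used in a report
    # or decision. Field names are illustrative, not an industry standard.
    @dataclass
    class ProvenanceRecord:
        claim: str                        # the statement being relied upon
        source_url: str                   # where the claim was obtained
        retrieved_at: datetime            # when it was captured
        verified_by: str                  # person or system that checked it
        likely_ai_generated: bool         # flag content suspected to be synthetic
        handling_history: List[str] = field(default_factory=list)  # each hand-off

    record = ProvenanceRecord(
        claim="Quarterly energy costs rose 12% year over year.",
        source_url="https://example.com/utility-report",   # placeholder URL
        retrieved_at=datetime.now(timezone.utc),
        verified_by="",                                      # not yet verified
        likely_ai_generated=True,
    )
    record.handling_history.append("Imported into draft board memo")

    # Unverified or AI-suspect claims can be surfaced before decisions are made.
    if record.likely_ai_generated or not record.verified_by:
        print("Flag for review:", record.claim)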

Without deliberate and transparent interventions in content moderation, data lineage, media literacy, and regulatory oversight, the gap between reality and perception will continue to widen, eroding both institutional trust and operational accuracy in everything from journalism to law to medicine.

Electrical Harmonics and Hidden Infrastructure Instability

AI infrastructure isn’t just energy-intensive—it’s damaging the grid itself.

AI’s physical footprint is a much less discussed, much less appreciated issue, despite its critical importance. Beneath every chatbot and computer vision system lies an incredibly dense and expansive network of servers and data centres, some of which are now overwhelming local electrical infrastructure. AI’s computational demand, and therefore its electrical demand, is already destabilizing North American power systems in ways most organizations, and even some grid providers, have yet to fully appreciate.

The data centres that train and operate AI models, with their high concentration of servers, GPUs, and Uninterruptible Power Supply (UPS) systems, are characterized by non-linear loads. Unlike traditional electrical loads that draw current smoothly, these modern electronics convert alternating current (AC) to direct current (DC) by drawing current in short, sharp bursts, distorting the grid’s smooth sine wave and creating harmonic interference. Put simply, imagine the electrical grid as a smooth river, with the riverbanks and boats as the businesses and individuals connected to the grid. AI data centres don’t just draw more water; they create turbulent waves (‘harmonics’) that can damage the riverbanks and the boats on the water.

At the core of the issue is the unique nature of AI data centres. Unlike conventional data centres or industrial loads, AI centres run thousands of GPUs and custom application-specific integrated circuits (ASICs) with extremely high, fluctuating power draws, and their bursty current consumption compounds the harmonic distortion described above.
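
For readers who want the arithmetic behind “harmonic distortion,” total harmonic distortion (THD) is the standard measure: it compares the energy in a waveform’s harmonics to the energy at the fundamental frequency. The short Python sketch below computes it; the amplitudes are invented purely to contrast a near-linear load with a bursty rectifier-style load.

    import math

    # Total Harmonic Distortion (THD) quantifies how far a current or voltage
    # waveform deviates from a clean sine wave:
    #   THD = sqrt(V2^2 + V3^2 + ... + Vn^2) / V1
    # where V1 is the amplitude at the fundamental frequency (50/60 Hz) and
    # V2..Vn are the amplitudes of its harmonics.
    def total_harmonic_distortion(fundamental, harmonics):
        return math.sqrt(sum(h * h for h in harmonics)) / fundamental

    # Invented amplitudes for illustration: a nearly linear load versus a load
    # whose rectifiers draw current in short bursts, producing strong harmonics.
    linear_load = total_harmonic_distortion(1.0, [0.02, 0.01, 0.005])
    bursty_load = total_harmonic_distortion(1.0, [0.18, 0.11, 0.07, 0.04])

    print(f"Near-linear load THD: {linear_load:.1%}")   # about 2%
    print(f"Bursty rectifier THD: {bursty_load:.1%}")   # about 23%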

The consequences are measurable and already happening:

  1. Amplified Voltages and Currents: Resonance amplifies certain harmonic frequencies, creating spikes in voltage or current beyond equipment tolerances. In the short term, this means that electrical equipment of all types will fail faster, perform less effectively over time, and need replacing sooner. In the longer term, manufacturers will need to build electrical infrastructure with higher tolerances to absorb these new harmonic distortions, creating perverse incentives to maintain or expand legacy non-renewable energy sources.
  2. Overheating and Premature Failure: Motors, transformers, UPS systems, and household electronics exposed to “dirty power” run hotter and degrade faster. This impacts all electrical loads: residential, commercial, industrial, and the power grid itself degrade faster and thus experience higher maintenance costs and shorter operating lifespans as a result of increased harmonic distortion.
  3. Grid Instability and Localized Brownouts: Areas with dense AI infrastructure are reporting higher frequencies of nuisance tripping, flickering lights, and unexplained equipment failures. This instability also manifests as an ever-increasing need for variable-load capacity. In the short term, that need can be met by delaying the decommissioning of coal, oil, and natural gas plants to cover base and variable loads, but doing so does not solve the underlying need for more variable-load capacity.
  4. Cascading System Effects: Harmonics can propagate far from the data centre, affecting rural and residential neighbourhoods with no direct connection to the AI industry, including vulnerable rural or isolated communities. Locations with significant AI data centre construction may be able to fund the mitigation of AI-driven grid instability, but rural, isolated, and less economically productive regions of the grid will experience the same harmonic distortions while struggling to respond to them.

A recent Bloomberg analysis showed that large AI data centres create disproportionate power quality issues in nearby areas, particularly around Virginia’s “Data Centre Alley.” Among these issues was a significantly higher rate of harmonic distortion in the power grid, with 1.7% of harmonic sensors across the US reporting above-threshold values.

Harmonic distortion is not a new phenomenon, and methods to mitigate it already exist, but they cannot wholly eliminate these distortions, nor do they scale or attract funding as readily as the loads that cause them. Mitigation technologies like Active Harmonic Filters (AHFs), line reactors, isolation transformers, and UPS systems with high-quality output stages can reduce harmonic distortion, but there is a greater economic problem: it is ultimately the taxpayer who funds these mitigation efforts, despite the fact that they are needed only because of the private pursuit of capital.

Grid operators face an ever-increasing challenge: the speed and volume of AI-related deployments are rapidly exceeding the control architecture and power quality safeguards designed for linear-era electricity consumption.

What’s required now is a shift in the mindset of regulators, executives within AI-first companies, and members of the communities in which data centres are being built:

  • Risk Assessment: When planning AI deployments or data centre partnerships, executives and regulators must ask about power quality and harmonic mitigation strategies. This is a new line item in risk assessments and should be a required consideration during the permitting process.
  • ESG/Sustainability: The increasing prevalence of and challenges posed by AI technology will now require consideration in the context of corporate environmental, social, and governance (ESG) commitments. Ignoring the impact of added electrical grid demand and the harmonic implications of this demand will inadvertently require the continued operation of less sustainable energy solutions.
  • Utility Engagement: Direct public and private involvement with local utilities and regulatory agencies to ensure future infrastructure planning accounts for these new demands, including incentivizing and mandating that AI centres adopt best-in-class filtering technologies proportionate to their energy consumption, and requiring investment in smart grid monitoring and real-time harmonic detection at the data centre level, not just the grid operator level.

Grid operators, regulators, and AI companies must treat harmonic distortion not as an obscure electrical anomaly or theoretical problem, but as a systemic infrastructure risk that, if ignored, will increase maintenance costs, shorten equipment lifespans, and degrade grid stability. As of today, the speed at which AI data centres are scaling far outpaces the rate at which power quality solutions are being implemented, creating a quiet infrastructure arms race with the ultimate loss being higher costs for local utilities, regulators, and residents alike.

This is not about how much power we use—it’s about the quality of the power left over.

The Impending AI Data Collapse

AI may soon run out of high-quality data to learn from, preventing future growth and damaging what already exists.

The final lesser-known challenge is that the reliable, human-created content needed to train new models is rapidly running out. Despite the apparent vastness of the internet, the amount of truly original, diverse, high-quality human-generated content is finite. Large Language Models (LLMs) and other AI models have already “scraped” and consumed all of the easily accessible, well-structured, reliable datasets. AI companies face a precarious situation as experts already warn that the high-quality textual training data on which they rely for model improvements could be exhausted as early as 2026-2032.

AI developers have trained current models on vast swaths of internet data, but a real and fast-approaching ‘data wall’ threatens to stall future progress. The AI ‘wall’ can be broken down into four main categories:

  1. Quantity: High-quality text, image, and code datasets are finite. Studies suggest we could exhaust usable, high-quality internet data between 2026 and 2032, as described in the paper “Will we run out of data?”, which argues that, following recent trends in AI model development, “…models will be trained on datasets roughly equal in size to the available [total] public human text data between 2026 and 2032, or slightly earlier if models are overtrained.” This presents an existential risk to continued AI development, as researchers must feed each new model exponentially more data than the last to train it effectively.
  2. Quality: As AI-generated content proliferates, future models risk training on copies of AI outputs, creating feedback loops of distortion, bias, and “model collapse” (a simplified simulation of this loop follows this list). As more AI-generated content exists on the internet, future AI models that scrape the web for training data will inevitably ingest this synthetic content. Training materials containing AI-generated content create a self-damaging feedback loop, where each iteration introduces subtle inaccuracies and biases and loses the diversity and nuance of truly human-created content. This recursive training can lead to models producing repetitive, distorted, or even nonsensical outputs over time, experiencing “catastrophic forgetting” of original factual knowledge, and hallucinating more often, decreasing the utility and reliability of anything generated by newer models. This is part of the cause behind the ‘ChatGPT’ writing style, where simply using an em-dash has become so commonplace in AI writing that its appearance induces suspicion and sometimes even disdain for the writer—AI or otherwise.
  3. Expertise: Supervised models still rely on human-labelled data, but complex domains like medicine or law require expert annotators, making data labelling an expensive and limited resource. To train supervised AI models for critical applications, experts must label vast amounts of data accurately, and companies must invest significant labour, money, and time in this process. Accurately labelling complex and nuanced data, such as medical images or legal documents, requires specialized human expertise that cannot be easily scaled. Furthermore, human labelling itself can introduce biases or inconsistencies, further complicating data quality. Organizations that rely on cheap, unskilled labour for annotation are discovering that it is insufficient for increasingly capable AI models that demand expert human annotators for nuanced tasks.
  4. Legality: Copyright lawsuits and data consent laws are closing the door on free access to many data sources. Content creators, publishers, and platforms increasingly restrict AI companies from using their data for training, often due to intellectual property concerns, demands for fair compensation, or simply not wanting their content used to train models that might eventually compete with them. When organizations fail to provide transparency and proper provenance for training data, they create significant legal and ethical risks, including the exposure of sensitive information and the perpetuation of unintended biases.
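
To make the feedback loop in point 2 concrete, the deliberately simplified Python sketch below simulates recursive training. It assumes only that models under-produce rare “tail” content when generating data for their successors; the Gaussian setup, the 10% cutoff, and every number in it are invented for illustration and are not measurements of any real training pipeline.

    import random
    import statistics

    # A drastically simplified toy of recursive training ("model collapse").
    # Each generation: (1) the current model generates synthetic data,
    # (2) the rarest outputs go missing (here, the most extreme 10% are dropped,
    # standing in for models favouring "typical" content), and (3) the next model
    # is fitted to that synthetic data instead of to real, human-created data.
    random.seed(0)

    mean, stdev = 0.0, 1.0   # generation 0: fitted on real, human-created data
    for generation in range(1, 11):
        synthetic = [random.gauss(mean, stdev) for _ in range(1000)]
        synthetic.sort(key=lambda x: abs(x - mean))
        kept = synthetic[: int(len(synthetic) * 0.9)]   # tails are under-produced
        mean, stdev = statistics.fmean(kept), statistics.stdev(kept)
        print(f"generation {generation:2d}: diversity (stdev) = {stdev:.3f}")

    # The printed diversity falls from 1.0 to roughly a tenth of its starting
    # value within ten generations: each new model sees a narrower slice of the
    # world than the one before it.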

This “digital famine” has overwhelming implications. The current model of indiscriminately scraping public internet data for training faces increasing legal challenges, including concerns over copyright infringement, data privacy, and the ethical implications of using content without explicit consent or fair compensation. Innovations like few-shot learning, active learning, and synthetic data generation offer a potential resolution but remain insufficient on their own. Addressing this requires transparent data lineage procedures, ethical sourcing practices, and a re-evaluation of data ownership and compensation models. Without these measures, AI development risks legal liability, perpetuated bias, and eroded public trust in the technology itself.

Why This Matters Now

The AI revolution is already reshaping our professional landscapes, societal trust, and fundamental infrastructure. This is a unique moment in which the transformative power of artificial intelligence must be met with equally developed, articulate, and pragmatic foresight and responsibility. The challenges discussed, from the quiet erosion of critical human skills and the unsettling blurring of truth to the unseen strain on our electrical grids and the looming scarcity of high-quality data, are not theoretical hurdles for a future generation. They are immediate, interconnected realities that demand attention, investment, and measured sanguinity.

For business leaders and policymakers, the path has already been set: steering through this new frontier requires more than enthusiastic investment in innovation; it requires a strategic perspective that looks beyond the immediate promise to anticipate the ripple effects. It means fostering a workforce that can wield AI as a tool without sacrificing foundational human intellect. It means building fortified information ecosystems that protect against the deluge of intentionally or accidentally manufactured disinformation. And it means proactively investing in the resilient infrastructure and ethical data practices that will sustain the benefits AI’s growth has delivered thus far.

This is not an anti-AI stance, but rather a recognition that for AI to remain an accelerant of human progress and not become an anchor, we must collectively commit to understanding its full spectrum of impact, balancing meliorism against naïveté to create a realistic, factual, and truly functional AI ecosystem. The future of innovation, the integrity of our society, and the very stability of our operational foundations rely on our cultural willingness to confront these complex challenges with intelligence, integrity, and decisive leadership.

Whether it’s safeguarding the next generation’s critical thinking skills, protecting the electrical grid, maintaining the integrity of factual information, or preserving the fuel that powers intelligent systems—these are not problems to solve later. They are the defining challenges of AI’s present.
