Penberthy Found Where Edge AI Wins And Most Edge Stacks Can't Run It

Where the high-return AI lives, governance has to run locally. Connectivity is one constraint among several.

by

Steve Yates, P.E. (VA), CEO & Co-Founder

9 minutes

Industry

Scott Penberthy, one of Google’s lead AI engineers, just put hard numbers on where AI is actually turning corporate investment into revenue gains. It’s a sobering analysis.

According to his findings, the deployments earning the highest excess returns are vertical and physical-world: quality inspection, clinical trials, subsurface drilling, autonomous systems, industrial robotics. However, a meaningful subset of those deployments runs in environments where the cloud-first edge stack was never built to operate, where connectivity is intermittent, regulatory air gaps separate operations from any external control plane, and latency budgets rule out a round-trip to a server two thousand miles away. That subset is where the architecture problem bites. And where the industry faces the ‘hard problem’ of AI adoption at the far edge.

Scott doesn’t proffer a solution; that’s outside the paper’s scope. The missing solution is exactly what I want to discuss.

But let’s dive deeper into the research first.

Penberthy’s paper, Where AI Creates Value, released in Q2 2026, applies a rigorous empirical method to a question most AI-deployment analyses leave to anecdote: where exactly is corporate AI actually working?

He and his team extracted 5,752 AI deployments from earnings calls and SEC filings (Q1 2025 through Q1 2026) and scored each against a four-rung readiness ladder inspired by NASA’s Technology Readiness Levels: Corroborated, Production, Quantified, Progressing. Anything that didn’t reach rung two was thrown out. That left 5,127 production-validated deployments from 2,385 public companies across the S&P 500, MidCap 400, SmallCap 600, Russell 2000, and select international ADRs. They then clustered the validated deployments using semantic embeddings and measured excess returns against benchmark indices, with statistical significance assessed via Welch’s t-test, Bonferroni correction across all clusters, and Cohen’s d for effect size.
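For readers who want the mechanics of that significance testing, here is a minimal sketch of the three statistics named above. This is my own reconstruction from the paper’s description, not Penberthy’s code, and it omits the p-value lookup against the t-distribution for brevity:

```python
import math

def welch_t(a, b):
    """Welch's t statistic: the mean difference scaled by per-sample
    variance, with no equal-variance assumption between groups."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def cohens_d(a, b):
    """Effect size: the mean difference over the pooled standard deviation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt((va + vb) / 2)

def bonferroni_alpha(alpha, n_clusters):
    """Bonferroni correction: the per-cluster significance threshold
    shrinks with the number of clusters tested, guarding against
    multiple-comparison flukes."""
    return alpha / n_clusters
```

The Bonferroni step matters here: with dozens of clusters, a few would clear an uncorrected 0.05 threshold by chance alone.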

A quick caveat before getting to the results: the study, though methodologically rigorous, is nevertheless looking at correlations, not causal relationships. Penberthy himself acknowledges this limitation. But it’s a good sample size and, at any rate, the architectural ramifications are real enough.

Vertical Deployments Yield Higher Returns than Horizontal Applications

As it turns out, the most difficult deployments yield the greatest results. Penberthy breaks out the AI deployments into two types: horizontal and vertical. Horizontal deployments are the AI every vendor sells to every industry: customer-support chatbots, code-generation assistants, back-office automation, fraud detection. Vertical deployments are domain-specific: AI inspecting welds on a factory floor, AI optimizing subsurface drilling paths, AI accelerating clinical trial enrollment.

The gap in financial performance between these two categories is stark: 73% of the vertical deployments showed positive excess returns. Only 15% of the horizontal deployments did.

The leaderboard:

  • Automated Quality Inspection: +79%, 20 companies

  • Clinical Trial Optimization: +51%, 42 companies

  • Intelligent Drug Development: +44%, 36 companies

  • Subsurface and Well Optimization: +42%, 35 companies

  • Autonomous Systems: +32%, 12 companies

  • Industrial Robotics: +21%, 26 companies

  • Loan Loss Forecasting: +19%, 32 companies

  • Robotic Warehouse Operations: +15%, 30 companies

Note that these are not raw excess returns against the S&P 500; they measure the difference against the median AI-deploying public company. In other words, +79% on Automated Quality Inspection means “this cluster outperforms the median AI-deploying public company by 79 points.” That is still a remarkable gap. It’s just not the same as “+79% against the index” (which would be legendary).

Compare those numbers to the horizontal deployments: Infrastructure AI at -5% across 307 companies, Operations AI at -14% across 1,157 companies, Customer-Facing AI at -13% across 561 companies. Incidentally, the largest deployment clusters in the dataset are also the worst-performing.

Penberthy explains the difference this way: horizontal AI deployments have become “tablestakes” — keeping pace with the pack, not outrunning it. Vertical deployments, by contrast, are generating strategic value.

He identifies five attributes the high-return verticals share:

  • The AI touches the physical world

  • The deployment sits behind regulatory clearance

  • The AI requires years of proprietary data that cannot be downloaded

  • Mistakes are expensive

  • Deployment requires specialists who are hard to hire

Automated quality inspection has all five, while customer-support chatbots have none.

Why High-Return AI Lives at the Far Edge

Most readers will fixate on the leaderboard. But the line I keep coming back to is “a competitor cannot replicate this by signing up for an API.”

It explains why the high-return deployments cluster where they cluster: they require years of proprietary data, physical-world feedback loops, regulatory clearance, and specialists you cannot hire on demand.

None of that is API-shaped.

It also explains why a meaningful subset of these deployments (drill ships, substations, distributed industrial sites, autonomous fleets, off-grid agricultural operations) operate in environments where a cloud round-trip isn’t always within policy or even possible. At remote sites the connectivity is intermittent and the power is dirty. In regulated industries the auditors are watching every decision, and the audit trail can’t depend on a link that may have been down. In real-time control loops the latency budget rules out a remote control plane. Across all three, the link cannot be the load-bearing wall.

You can’t just get to those returns through an API, and you can’t get to them with a cloud-first edge stack retrofitted into a remote site three years ago.

This is the hard problem underlying Penberthy’s paper.

The Hard Problem: Why the Cloud-First Edge Stack Fails at the Far Edge

Penberthy's data identifies where the high-return AI lives, but doesn't address how to actually run AI at those sites. The challenge is not one problem but a set of interdependent ones: continuous governance that operates locally under partition, connectivity managed as a policy-driven resource across multiple bearers, hardware-level recovery that doesn't require a technician on-site, and the integration of all three into a single operational platform that a field team can deploy and maintain without a dedicated IT staff. That's the architectural problem; or, as I like to think of it, the 'hard problem' of edge AI.

More than three decades in embedded systems and edge networking gives me a particular vantage point on what is happening at the edge AI layer. During development in the lab, everything is peachy: the model improves with each generation, and the benchmark numbers climb. The pilot demonstrates value in a controlled environment. And then it deploys to the far edge, or hits a regulatory partition or an air gap that the architecture diagram never accounted for, and the system underneath the model fails.

The failure is not in the AI. It is in the architecture beneath the AI, which discarded a principle that prior generations of edge computing got right. This is a regression, not an unsolved problem.

The default enterprise architecture is a model in the cloud, an API at the site, and a wing-and-a-prayer assumption about connectivity. It works in a warehouse with two redundant fiber paths. It doesn’t work on a remote rig. It doesn’t work in a substation behind an air gap. And it certainly doesn’t work when an EU AI Act auditor asks for a complete decision-attestation record from a node that has been offline for nine days.

Networking buyers know the default is broken. Cisco’s 2025 IT networking leader survey, reported by ComputerWeekly, found that 97% of businesses believe they need to upgrade their networks to make AI and IoT initiatives successful. The architectural problem is not contested.

But the architectural answer is far from a consensus.

Six requirements have to work together, and none solves cleanly without the others:

  1. The edge runtime has to survive loss of connectivity and keep operating under local governance.

  2. The policy regime running at the disconnected node has to be the same policy that runs in the data center, not a pared-down cousin.

  3. Audit logs have to buffer locally with cryptographic attestation and reconcile on reconnect through a protocol designed for intermittent links.

  4. Attested boot, segmentation, and key management have to operate without a phone-home.

  5. Connectivity itself has to be managed as a resource portfolio across satellite, cellular, WiFi, and LPWAN bearers, routed by policy on the device, not treated as a binary dependency on a single link.

  6. Fleet operations have to scale to a thousand sites without a thousand truck rolls, with hardware-level recovery that can re-image a bricked node from the baseboard up without sending a technician.

And the entire system has to work as an integrated platform, not as six separate vendor solutions bolted together at the site. That is the operational layer Penberthy’s vertical winners need wherever they actually run. Most edge architectures sold today don’t have it.

Edge Inference Isn’t the Same as Edge Governance

There’s a critical distinction the industry frequently conflates: running an AI model at the edge isn’t the same as governing it there.

Many modern edge architectures execute inference locally. NVIDIA’s Jetson platform runs models on the device, and an increasing number of competitive offerings do too. But a system can execute inference at the edge while its policy enforcement, decision attestation, and operational boundaries still depend on a cloud platform. Edge inference solves latency and data sovereignty. By itself, it doesn’t solve governance under partition.

What is governance under partition? It is the property that the policies, attestation, and decision boundaries governing an AI system continue to operate locally when the network does not. Edge inference puts the model at the edge. Governance under partition puts the auditor’s record, the policy enforcement, and the override authority there too.

Agentic AI shifts this from a tolerable inefficiency to a structural liability. The new generation of edge AI is moving from outputs to actions — from agents that suggest things to agents that decide and actuate, often in real time with limited human intervention. If the policies, boundaries, and attestation depend on connectivity, then the governance is cloud-dependent regardless of where the model runs. The link drops, and the AI keeps acting while the governance disappears. TechRadar Pro recently called data governance “the operating system of enterprise trust,” arguing that organizations that treat it as anything less will face recalls, fines, and eroded confidence.

That principle scales down to the edge with sharper consequences.

Penberthy’s data suggests this gap will become more expensive over time, not less. The deployments at the top of his leaderboard are precisely the deployments where mistakes are most costly: clinical trials, drilling decisions, manufacturing defects, autonomous fleets. The combination of high return and high failure cost is what makes those deployments valuable. It’s also what makes ungoverned autonomy at those sites unacceptable — a point Google Cloud’s Office of the CISO has put squarely on board agendas, naming AI, cyber risk strategy, and edge security as the three governance areas directors now have to own.

Why Edge AI Governance Has to Live with the Process

I would know — I designed these systems.

Thirty-two years ago at GE Fanuc Automation, I engineered CPU and I/O modules for Programmable Logic Controllers, including the Series 90-30. The bedrock principle was non-negotiable: governance lives with the process. If the supervisory layer loses communication, the PLC keeps running within its governed parameters. If a safety interlock trips, it trips locally. No round trip to a server. No dependency on a network link. No cloud service that might be experiencing latency or an outage. Governance availability was 100% by design, because the governance and the governed were never separated by a network.

That is the principle that has to carry forward into edge AI. Not as a legacy constraint. As the design center.

The obvious objection is that the PLC works because the rule is deterministic, and AI doesn’t produce deterministic outputs, so the principle doesn’t extend. That objection dissolves once you’re precise about what you’re enforcing.

The PLC principle doesn’t require deterministic decisions. It requires deterministic policy over decisions.

The PLC reads a sensor (continuous, noisy, physical) and applies a deterministic predicate: pressure > 150 PSI, shut down. The same architectural shape extends one layer up. An AI vision system produces a non-deterministic classification; the local governance layer applies a deterministic predicate over that classification. Confidence below threshold T and consequence class high? Route to human queue, do not execute. Don’t dispatch this clinical action without a human signature recorded in the last sixty seconds. Don’t reroute the autonomous vehicle without a verified obstacle ID. The model is non-deterministic. The policy enforcing what the model is allowed to do with its output isn’t.
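To make “deterministic policy over non-deterministic decisions” concrete, here is a minimal sketch of the vision-system predicate described above. The field names, consequence classes, and the 0.90 threshold are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class Inference:
    label: str          # what the model saw (non-deterministic)
    confidence: float   # model's confidence in [0, 1]
    consequence: str    # consequence class of acting on it: "low" | "high"

CONFIDENCE_FLOOR = 0.90  # threshold T; illustrative only

def route(result: Inference) -> str:
    """Deterministic predicate over a non-deterministic model output,
    evaluated locally before any actuation, connected or not."""
    if result.consequence == "high" and result.confidence < CONFIDENCE_FLOOR:
        return "human_queue"   # below threshold on a high-consequence class: do not execute
    return "execute"
```

The model can say anything; the predicate governing what the system may do with the output never varies, which is exactly the PLC property carried up one layer.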

For this to actually hold at a disconnected node, four mechanisms have to operate locally:

  • A policy-as-code engine that intercepts every inference call and evaluates the active policy against the model output before any actuation

  • A signed model registry that proves which model and which weights produced which decision (so the audit trail can answer “which version said what” without a phone-home)

  • An attested decision log that records, with a cryptographic signature from a key only that device controls, what the model output was, what the policy evaluated, and what was actuated; i.e., what was signed at the moment of decision, not at the next sync

  • A local human override that operates without connectivity, with the override itself recorded in the attestation log

None of these mechanisms need connectivity to function. All of them have to function whether connectivity is up or not.
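A minimal sketch of the third mechanism, the attested decision log, might look like this. I use an HMAC with a device-local key as a stand-in for the hardware-held signing key a real node would use, and the field names are illustrative:

```python
import hashlib
import hmac
import json
import time

# Stand-in for a key held in the device's secure element; it never leaves the node.
DEVICE_KEY = b"device-local-key"

def attest_decision(model_id, weights_sha, output, policy_id, action):
    """Sign the decision record at the moment of decision, locally.
    No connectivity required; the record buffers until reconnect."""
    record = {
        "ts": time.time(),
        "model": model_id,
        "weights_sha256": weights_sha,   # ties the decision to exact weights
        "output": output,                # what the model said
        "policy": policy_id,             # which policy was evaluated
        "actuated": action,              # what was actually done
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record):
    """Re-derive the signature over everything except 'sig' itself."""
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)
```

Any tampering with the buffered record (say, rewriting what was actuated) breaks verification, which is what lets the audit trail answer “which version said what” after nine days offline.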

When the node reconnects, attested store-and-forward packages the local state and merges it with the fleet-wide governance picture through a reconciliation protocol designed for intermittent asynchronous links. The reconciliation isn’t a copy operation; it’s a merge of divergent states that evolved independently, with the attestation chain preserved and conflicts resolved through fleet-level policy. That reconciliation capability is what transforms an offline-capable edge node into a genuinely governed autonomous system.
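The merge itself can be sketched in a few lines, assuming each record carries a timestamp and a device signature. A real protocol would also walk the attestation chain and apply fleet-level conflict policy; this shows only the shape of the operation:

```python
def reconcile(fleet_log, node_buffer):
    """Merge a node's buffered, attested records into the fleet log on
    reconnect. Not a copy: divergent histories interleave by timestamp,
    duplicates drop by signature, and each record's attestation is
    preserved untouched."""
    seen = {r["sig"] for r in fleet_log}
    merged = fleet_log + [r for r in node_buffer if r["sig"] not in seen]
    merged.sort(key=lambda r: r["ts"])
    return merged
```

The key property: nothing in the node’s offline history is rewritten on merge, so the fleet-wide picture stays auditable end to end.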

That’s what extending the PLC principle into edge AI actually requires. It's not a metaphor; it's an architecture.

What Governed Edge AI Looks Like Across a Fleet

Across a fleet of governed nodes, the operational challenge extends well beyond connectivity. Four properties have to hold simultaneously, and they span governance, connectivity management, auditability, and hardware-level recovery.

Policy parity, cloud to disconnected. The same NIST AI RMF controls (including the GenAI Profile’s testing, red-teaming, and documentation practices), the same MITRE ATLAS coverage, and the same data-in-use protections that run in your data center also run on the offline node, sealed in an enclosure on a wind turbine, with the regulator’s clock still ticking. When the node reconnects, the audit trail comes back with it: lineage, action logs, attestation evidence, already in the format a CSF 2.0 or DORA examiner expects.

Lifecycle evidence the auditor recognizes. Aligned to NIST AI RMF, MITRE ATLAS, and Singapore’s Model AI Governance Framework from day one — with the artifacts the auditor wants to see, not a deck claiming alignment.

Recovery economics that scale across a fleet. At far-edge sites, hardware fails. Boards brick. Firmware updates fail mid-apply. OS corruption renders the management agent unreachable. In a conventional edge architecture, each of these is a truck roll: a technician drives to a remote wind farm, mine site, or cell tower at a cost of thousands of dollars per visit. Multiply that across a thousand sites, and fleet economics collapse. Hardware-rooted attestation and BMC-level recovery change the math entirely. Out-of-band management operates independently of the primary OS, the application stack, and the network management agent. It can power-cycle the node, re-image the OS, and restore a known-good state through a dedicated management interface. The node recovers remotely. The truck stays parked. This is not a convenience feature. For fleets operating at far-edge scale, it is the difference between a viable operation and an unsustainable one.
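As a concrete sketch of what out-of-band recovery looks like in practice: most BMCs expose the DMTF Redfish API, and a fleet controller might construct a reset action like this. The system ID and endpoint path vary by vendor, and this builds the request rather than sending it:

```python
def redfish_reset_request(bmc_host, system_id="1", reset_type="ForceRestart"):
    """Build the Redfish reset call a fleet controller would issue to a BMC.
    The BMC runs independently of the node's OS and application stack,
    so this works even when the host itself is bricked."""
    url = (f"https://{bmc_host}/redfish/v1/Systems/{system_id}"
           f"/Actions/ComputerSystem.Reset")
    # ResetType values are Redfish-defined: "On", "ForceOff", "ForceRestart", ...
    body = {"ResetType": reset_type}
    return url, body
```

Re-imaging follows the same pattern through the BMC’s virtual-media interface; the point is that every step travels a dedicated management path that doesn’t depend on the OS being alive.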

Connectivity that fails gracefully. Far-edge sites rarely have one connectivity option. An offshore platform may have satellite, a cellular link from a coastal tower, and a point-to-point microwave to a neighboring facility. A smart agriculture deployment may have LoRaWAN for sensor telemetry, WiFi HaLow for device communication, and satellite for management backhaul. Policy-driven multi-bearer management allocates each data flow to the appropriate bearer based on operational rules enforced locally on the device. Safety alerts route over the most reliable bearer immediately. Routine telemetry batches for low-cost transmission windows. Management traffic routes through the most secure available link. When one bearer degrades, the policy engine reallocates without the application layer noticing. This is not failover from primary to backup. It is continuous optimization across every available bearer, governed by the same policy framework that governs the AI decisions themselves.
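The allocation logic described above can be sketched as a set of rules evaluated locally on the device. Bearer names, scores, and flow classes here are illustrative assumptions:

```python
# Illustrative bearer table a node might maintain from live link telemetry.
BEARERS = {
    "satellite": {"up": True,  "reliability": 0.99, "cost": 0.9, "secure": True},
    "cellular":  {"up": True,  "reliability": 0.95, "cost": 0.4, "secure": True},
    "wifi":      {"up": False, "reliability": 0.90, "cost": 0.1, "secure": False},
}

def select_bearer(flow_class, bearers=BEARERS):
    """Policy-driven routing per flow class, evaluated on the device.
    When a bearer goes down it simply drops out of the candidate set."""
    live = {n: b for n, b in bearers.items() if b["up"]}
    if flow_class == "safety_alert":      # most reliable live link, immediately
        return max(live, key=lambda n: live[n]["reliability"])
    if flow_class == "telemetry_batch":   # cheapest live link; can wait for a window
        return min(live, key=lambda n: live[n]["cost"])
    if flow_class == "management":        # most reliable of the secure live links
        secure = {n: b for n, b in live.items() if b["secure"]}
        return max(secure, key=lambda n: secure[n]["reliability"])
    raise ValueError(f"unknown flow class: {flow_class}")
```

Because the rules are data over a live bearer table, a degraded link changes the answer without the application layer changing at all, which is the difference between policy-driven allocation and primary/backup failover.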

This is governance treated as the operating layer — not as a compliance afterthought.

“Decoupled From the Cloud” Is Operational, Not Ideological

This isn’t an argument against hyperscalers. They’re an important part of any cloud-to-edge fabric. However, the decoupling I’m talking about is operational. The edge intelligence has to keep thinking when the link is gone, and it has to do it under the same policy regime as the rest of the fleet. That’s what no other architecture currently delivers at far-edge scale.

Cloud capabilities are essential. They just have to be additive, not gating. That distinction is the entire architectural argument. Connectivity is a feature; resilience is an architecture; governance is the operating layer that holds both together when the link drops.

What Penberthy’s Data Tells You About Where to Deploy AI

I see two key takeaways from Scott’s research.

First, build the horizontal AI infrastructure. You have to. Your competitors already have it. Penberthy’s data shows median returns are negative on horizontal AI, which means absence is more expensive than presence. His analogy is the right one: “Like having a website in 2000 or a mobile app in 2012. You fall behind without it. You don’t get ahead with it.” That’s the cost of competing in 2026. It's not strategic; horizontal AI is the new ground floor. And you don’t want to find yourself in the basement.

Second, find your verticals. Walk through your business with his five questions in hand:

  • Where do you touch the physical world?

  • Where do you operate behind regulatory clearance — FDA, FAA, PCI DSS v4.0.1, DORA, CSF 2.0?

  • Where do you sit on decades of proprietary data nobody can buy?

  • Where does a mistake cost you millions?

  • Where do your specialists take five years to grow?

The intersection of those answers is where you should investigate. Incidentally, it’s also where you’ll discover that your existing edge stack doesn’t survive contact with a partitioned site. That’s the moment to look at governance as the operating layer, and at policy parity as the architecture, not as a bolt-on.

Edge-first architecture requires a new paradigm: governance that operates locally whether connected or not, connectivity managed by policy across whatever bearers are available, hardware-rooted recovery without a truck roll, and a platform that integrates all three so the operator deploys one system. That is what it takes when cloud-anchored AI meets the physical world.

That’s the work in front of us. Connected or not.

What Federant Is Building, and How to Get Early Access

The governance architecture I've described in this piece is what Federant is building. We don't think the industry can afford to let edge AI governance fragment into proprietary silos, so we're taking the foundational work open. The edge operating system and hardware designs together with interfaces, attestation record formats, interoperability specifications, and additional details will be published as open resources through a leading open infrastructure community. We're well along in active discussions and expect to formalize the partnership in the coming months.

Federant's underlying patent portfolio will be pledged under terms that ensure developers, partners, customers, and competitors can build compliant implementations without legal risk. The specific structure and terms will be published alongside the open community announcement.

We’re not building fences with eleven patents pending in edge AI governance to keep people out. We’re building roads with them, so the industry can traverse a safe, proven path that we’re blazing. Details will be posted at federant.com as the work matures.


Steve Yates, P.E. (VA) is CEO and Co-Founder of Federant, which is developing open governance infrastructure for autonomous edge AI. He previously founded ADI Engineering, a global OEM edge networking supplier, and led its acquisition by Silicom Ltd. He holds 9 US patents in edge networking and cybersecurity with 11 additional patents pending in AI governance infrastructure.


Why Edge AI Governance Has to Live with the Process

I would know — I designed these systems.

Thirty-two years ago at GE Fanuc Automation, I engineered CPU and I/O modules for Programmable Logic Controllers, including the Series 90-30. The bedrock principle was non-negotiable: governance lives with the process. If the supervisory layer loses communication, the PLC keeps running within its governed parameters. If a safety interlock trips, it trips locally. No round trip to a server. No dependency on a network link. No cloud service that might be experiencing latency or an outage. Governance availability was 100% by design, because the governance and the governed were never separated by a network.

That is the principle that has to carry forward into edge AI. Not as a legacy constraint. As the design center.

The obvious objection is that the PLC works because the rule is deterministic, and AI doesn’t produce deterministic outputs, so the principle doesn’t extend. That objection is wrong, but only if you’re precise about what you’re enforcing.

The PLC principle doesn’t require deterministic decisions. It requires deterministic policy over decisions.

The PLC reads a sensor (continuous, noisy, physical) and applies a deterministic predicate: pressure > 150 PSI, shut down. The same architectural shape extends one layer up. An AI vision system produces a non-deterministic classification; the local governance layer applies a deterministic predicate over that classification. Confidence below threshold T and consequence class high? Route to human queue, do not execute. Don’t dispatch this clinical action without a human signature recorded in the last sixty seconds. Don’t reroute the autonomous vehicle without a verified obstacle ID. The model is non-deterministic. The policy enforcing what the model is allowed to do with its output isn’t.

For this to actually hold at a disconnected node, four mechanisms have to operate locally:

  • A policy-as-code engine that intercepts every inference call and evaluates the active policy against the model output before any actuation

  • A signed model registry that proves which model and which weights produced which decision (so the audit trail can answer “which version said what” without a phone-home)

  • An attested decision log that records, with a cryptographic signature from a key only that device controls, what the model output was, what the policy evaluated, and what was actuated; i.e., what was signed at the moment of decision, not at the next sync

  • A local human override that operates without connectivity, with the override itself recorded in the attestation log

None of these mechanisms need connectivity to function. All of them have to function whether connectivity is up or not.

When the node reconnects, attested store-and-forward packages the local state and merges it with the fleet-wide governance picture through a reconciliation protocol designed for intermittent asynchronous links. The reconciliation isn’t a copy operation; it’s a merge of divergent states that evolved independently, with the attestation chain preserved and conflicts resolved through fleet-level policy. That reconciliation capability is what transforms an offline-capable edge node into a genuinely governed autonomous system.

That’s what extending the PLC principle into edge AI actually requires. It's not a metaphor; it's an architecture.

What Governed Edge AI Looks Like Across a Fleet

Across a fleet of governed nodes, the operational challenge extends well beyond connectivity. Four properties have to hold simultaneously, and they span governance, connectivity management, auditability, and hardware-level recovery.

Policy parity, cloud to disconnected. The same NIST AI RMF controls, including the GenAI Profile’s testing, red-teaming, and documentation practices. This is the same MITRE ATLAS coverage, and the same data-in-use protections that run in your data center also run on the offline node, sealed in an enclosure on a wind turbine, with the regulator’s clock still ticking. When the node reconnects, the audit trail comes back with it: lineage, action logs, attestation evidence, already in the format a CSF 2.0 or DORA examiner expects.

Lifecycle evidence the auditor recognizes. Aligned to NIST AI RMF, MITRE ATLAS, and Singapore’s Model AI Governance Framework from day one — with the artifacts the auditor wants to see, not a deck claiming alignment.

Recovery economics that scale across a fleet. At far-edge sites, hardware fails. Boards brick. Firmware updates fail mid-apply. OS corruption renders the management agent unreachable. In a conventional edge architecture, each of these is a truck roll: a technician drives to a remote wind farm, mine site, or cell tower at a cost of thousands of dollars per visit. Multiply that across a thousand sites, and fleet economics collapse. Hardware-rooted attestation and BMC-level recovery change the math entirely. Out-of-band management operates independently of the primary OS, the application stack, and the network management agent. It can power-cycle the node, re-image the OS, and restore a known-good state through a dedicated management interface. The node recovers remotely. The truck stays parked. This is not a convenience feature. For fleets operating at far-edge scale, it is the difference between a viable operation and an unsustainable one.

Connectivity that fails gracefully. Far-edge sites rarely have one connectivity option. An offshore platform may have satellite, a cellular link from a coastal tower, and a point-to-point microwave to a neighboring facility. A smart agriculture deployment may have LoRaWAN for sensor telemetry, WiFi HaLow for device communication, and satellite for management backhaul. Policy-driven multi-bearer management allocates each data flow to the appropriate bearer based on operational rules enforced locally on the device. Safety alerts route over the most reliable bearer immediately. Routine telemetry batches for low-cost transmission windows. Management traffic routes through the most secure available link. When one bearer degrades, the policy engine reallocates without the application layer noticing. This is not failover from primary to backup. It is continuous optimization across every available bearer, governed by the same policy framework that governs the AI decisions themselves.

This is governance treated as the operating layer — not as a compliance afterthought.

“Decoupled From the Cloud” Is Operational, Not Ideological

This isn’t an argument against hyperscalers. They’re an important part of any cloud-to-edge fabric. However, the decoupling I’m talking about is operational. The edge intelligence has to keep thinking when the link is gone, and it has to do it under the same policy regime as the rest of the fleet. That’s what no other architecture currently delivers at far-edge scale.

Cloud capabilities are essential. They just have to be additive, not gating. That distinction is the entire architectural argument. Connectivity is a feature; resilience is an architecture; governance is the operating layer that holds both together when the link drops.

What Penberthy’s Data Tells You About Where to Deploy AI

I see two key takeaways from Scott’s research.

First, build the horizontal AI infrastructure. You have to. Your competitors already have it. Penberthy’s data shows median returns are negative on horizontal AI, which means absence is more expensive than presence. His analogy is the right one: “Like having a website in 2000 or a mobile app in 2012. You fall behind without it. You don’t get ahead with it.” That’s the cost of competing in 2026. It's not strategic; horizontal AI is the new ground floor. And you don’t want to find yourself in the basement.

Second, find your verticals. Walk through your business with his five questions in hand:

  • Where do you touch the physical world?

  • Where do you operate behind regulatory clearance — FDA, FAA, PCI DSS v4.0.1, DORA, CSF 2.0?

  • Where do you sit on decades of proprietary data nobody can buy?

  • Where does a mistake cost you millions?

  • Where do your specialists take five years to grow?

The intersection of those answers is where you should investigate. Incidentally, it’s also where you’ll discover that your existing edge stack doesn’t survive contact with a partitioned site. That’s the moment to look at governance as the operating layer, and at policy parity as the architecture. It's not as a bolt-on.

Edge-first architecture requires a new paradigm: governance that operates locally whether connected or not, connectivity managed by policy across whatever bearers are available, hardware-rooted recovery without a truck roll, and a platform that integrates all three so the operator deploys one system. That is what it takes when cloud-anchored AI meets the physical world.

That’s the work in front of us. Connected or not.

What Federant Is Building, and How to Get Early Access

The governance architecture I've described in this piece is what Federant is building. We don't think the industry can afford to let edge AI governance fragment into proprietary silos, so we're taking the foundational work open. The edge operating system and hardware designs together with interfaces, attestation record formats, interoperability specifications, and additional details will be published as open resources through a leading open infrastructure community. We're well along in active discussions and expect to formalize the partnership in the coming months.

Federant's underlying patent portfolio will be pledged under terms that ensure developers, partners, customers, and competitors can build compliant implementations without legal risk. The specific structure and terms will be published alongside the open community announcement.

We’re not building fences with eleven patents pending in edge AI governance to keep people out. We’re building roads with them, so the industry can traverse a safe, proven path that we’re blazing. Details will be posted at federant.com as the work matures.


Steve Yates, P.E. (VA) is CEO and Co-Founder of Federant, which is developing open governance infrastructure for autonomous edge AI. He previously founded ADI Engineering, a global OEM edge networking supplier, and led its acquisition by Silicom Ltd. He holds 9 US patents in edge networking and cybersecurity with 11 additional patents pending in AI governance infrastructure.


A quick caveat before getting to the results: the study, though methodologically rigorous, is nevertheless looking at correlations, not causal relationships. Penberthy himself acknowledges this limitation. But it’s a good sample size and, at any rate, the architectural ramifications are real enough.

Vertical Deployments Yield Higher Returns than Horizontal Applications

As it turns out, the most difficult deployments yield the greatest results. Penberthy breaks out the AI deployments into two types: horizontal and vertical. Horizontal deployments are the AI every vendor sells to every industry: customer-support chatbots, code-generation assistants, back-office automation, fraud detection. Vertical deployments are domain-specific: AI inspecting welds on a factory floor, AI optimizing subsurface drilling paths, AI accelerating clinical trial enrollment.

The difference in financial performance between these two categories is significant: seventy-three percent of the vertical deployments showed positive excess returns. Only 15% of the horizontal deployments did.

The leaderboard:

  • Automated Quality Inspection: +79%, 20 companies

  • Clinical Trial Optimization: +51%, 42 companies

  • Intelligent Drug Development: +44%, 36 companies

  • Subsurface and Well Optimization: +42%, 35 companies

  • Autonomous Systems: +32%, 12 companies

  • Industrial Robotics: +21%, 26 companies

  • Loan Loss Forecasting: +19%, 32 companies

  • Robotic Warehouse Operations: +15%, 30 companies

Note: these are not raw excess returns against the S&P 500. They are differences against the median AI-deploying public company. In other words, +79% on Automated Quality Inspection means “this cluster outperforms the median AI-deploying public company by 79 points.” That is still a remarkable difference. It’s just not the same as “+79% against the index” (which would be legendary).

Compare those numbers to the horizontal deployments: Infrastructure AI at -5% across 307 companies, Operations AI at -14% across 1,157 companies, Customer-Facing AI at -13% across 561 companies. Incidentally, the largest deployment clusters in the dataset are also the worst-performing.

Penberthy attributes the difference to horizontal AI having become “tablestakes” — keeping pace with the pack, not outrunning it. Vertical deployments, however, are generating strategic value.

He identifies five attributes the high-return verticals share:

  • The AI touches the physical world

  • The deployment sits behind regulatory clearance

  • The AI requires years of proprietary data that cannot be downloaded

  • Mistakes are expensive

  • Deployment requires specialists who are hard to hire

Automated quality inspection has all five, while customer-support chatbots have none.

Why High-Return AI Lives at the Far Edge

Most readers will fixate on the leaderboard. But the line I keep coming back to is “a competitor cannot replicate this by signing up for an API.”

It explains why the high-return deployments cluster where they cluster: they require years of proprietary data, physical-world feedback loops, regulatory clearance, and specialists you cannot hire on demand.

None of that is API-shaped.

It also explains why a meaningful subset of these deployments (drill ships, substations, distributed industrial sites, autonomous fleets, off-grid agricultural operations) operate in environments where a cloud round-trip isn’t always within policy or even possible. At remote sites the connectivity is intermittent and the power is dirty. In regulated industries the auditors are watching every decision, and the audit trail can’t depend on a link that may have been down. In real-time control loops the latency budget rules out a remote control plane. Across all three, the link cannot be the load-bearing wall.

You can’t just get to those returns through an API, and you can’t get to them with a cloud-first edge stack retrofitted into a remote site three years ago.

This is the hard problem underlying Penberthy’s paper.

The Hard Problem: Why the Cloud-First Edge Stack Fails at the Far Edge

Penberthy's data identifies where the high-return AI lives, but doesn't address how to actually run AI at those sites. The challenge is not one problem but a set of interdependent ones: continuous governance that operates locally under partition, connectivity managed as a policy-driven resource across multiple bearers, hardware-level recovery that doesn't require a technician on-site, and the integration of all three into a single operational platform that a field team can deploy and maintain without a dedicated IT staff. That's the architectural problem; or, as I like to think of it, the 'hard problem' of edge AI.

More than three decades in embedded systems and edge networking gives me a particular vantage point on what is happening at the edge AI layer. During development in the lab, everything is peachy: the model improves with each generation; the benchmark numbers improve. The pilot demonstrates value in a controlled environment. And then it deploys to the far edge, or hits a regulatory partition or an air gap that the architecture diagram never accounted for, and the system underneath the model fails.

The failure is not in the AI. It is in an architecture beneath the AI that discarded a principle prior generations of edge computing got right. This is a regression, not a recurring problem.

The default enterprise architecture is a model in the cloud, an API at the site, and a wing-and-a-prayer assumption about connectivity. It works in a warehouse with two redundant fiber paths. It doesn’t work on a remote rig. It doesn’t work in a substation behind an air gap. And it certainly doesn’t work when an EU AI Act auditor asks for a complete decision-attestation record from a node that has been offline for nine days.

Networking buyers know the default is broken. Cisco’s 2025 IT networking leader survey, reported by ComputerWeekly, found that 97% of businesses believe they need to upgrade their networks to make AI and IoT initiatives successful. The architectural problem is not contested.

But the architectural answer is far from a consensus.

Six requirements have to work together, and none solves cleanly without the others:

  1. The edge runtime has to survive loss of connectivity and keep operating under local governance.

  2. The policy regime running at the disconnected node has to be the same policy that runs in the data center, not a pared-down cousin.

  3. Audit logs have to buffer locally with cryptographic attestation and reconcile on reconnect through a protocol designed for intermittent links.

  4. Attested boot, segmentation, and key management have to operate without a phone-home.

  5. Connectivity itself has to be managed as a resource portfolio across satellite, cellular, WiFi, and LPWAN bearers, routed by policy on the device, not treated as a binary dependency on a single link.

  6. Fleet operations have to scale to a thousand sites without a thousand truck rolls, with hardware-level recovery that can re-image a bricked node from the baseboard up without sending a technician.

And the entire system has to work as an integrated platform, not as six separate vendor solutions bolted together at the site. That is the operational layer Penberthy’s vertical winners need wherever they actually run. Most edge architectures sold today don’t have it.

Edge Inference Isn’t the Same as Edge Governance

There’s a critical distinction the industry frequently conflates: running an AI model at the edge isn’t the same as governing it there.

Many modern edge architectures execute inference locally. NVIDIA’s Jetson platform runs models on the device, and an increasing number of competitive offerings do too. But a system can execute inference at the edge while its policy enforcement, decision attestation, and operational boundaries still depend on a cloud platform. Edge inference solves latency and data sovereignty. By itself, it doesn’t solve governance under partition.

What is governance under partition? It is the property that the policies, attestation, and decision boundaries governing an AI system continue to operate locally when the network does not. Edge inference puts the model at the edge. Governance under partition puts the auditor’s record, the policy enforcement, and the override authority there too.

Agentic AI shifts this from a tolerable inefficiency to a structural liability. The new generation of edge AI is moving from outputs to actions — from agents that suggest things to agents that decide and actuate, often in real time with limited human intervention. If the policies, boundaries, and attestation depend on connectivity, then the governance is cloud-dependent regardless of where the model runs. The link drops, and the AI keeps acting while the governance disappears. TechRadar Pro recently called data governance “the operating system of enterprise trust,” arguing that organizations that treat it as anything less will face recalls, fines, and eroded confidence.

That principle scales down to the edge with sharper consequences.

Penberthy’s data suggests this gap will become more expensive over time, not less. The deployments at the top of his leaderboard are precisely the deployments where mistakes are most costly: clinical trials, drilling decisions, manufacturing defects, autonomous fleets. The combination of high return and high failure cost is what makes those deployments valuable. It’s also what makes ungoverned autonomy at those sites unacceptable — a point Google Cloud’s Office of the CISO has put squarely on board agendas, naming AI, cyber risk strategy, and edge security as the three governance areas directors now have to own.

Why Edge AI Governance Has to Live with the Process

I would know — I designed these systems.

Thirty-two years ago at GE Fanuc Automation, I engineered CPU and I/O modules for Programmable Logic Controllers, including the Series 90-30. The bedrock principle was non-negotiable: governance lives with the process. If the supervisory layer loses communication, the PLC keeps running within its governed parameters. If a safety interlock trips, it trips locally. No round trip to a server. No dependency on a network link. No cloud service that might be experiencing latency or an outage. Governance availability was 100% by design, because the governance and the governed were never separated by a network.

That is the principle that has to carry forward into edge AI. Not as a legacy constraint. As the design center.

The obvious objection is that the PLC works because the rule is deterministic, and AI doesn’t produce deterministic outputs, so the principle doesn’t extend. That objection is wrong, provided you’re precise about what you’re enforcing.

The PLC principle doesn’t require deterministic decisions. It requires deterministic policy over decisions.

The PLC reads a sensor (continuous, noisy, physical) and applies a deterministic predicate: pressure > 150 PSI, shut down. The same architectural shape extends one layer up. An AI vision system produces a non-deterministic classification; the local governance layer applies a deterministic predicate over that classification. Confidence below threshold T and consequence class high? Route to human queue, do not execute. Don’t dispatch this clinical action without a human signature recorded in the last sixty seconds. Don’t reroute the autonomous vehicle without a verified obstacle ID. The model is non-deterministic. The policy enforcing what the model is allowed to do with its output isn’t.
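To make that concrete, here is a minimal sketch of a deterministic predicate gating a non-deterministic output. The threshold, consequence classes, and names are illustrative assumptions for this example, not anything prescribed by a specific product:

```python
from dataclasses import dataclass

# Illustrative policy parameters: a confidence floor and the consequence
# classes that require a human in the loop. Both are assumptions.
CONFIDENCE_FLOOR = 0.90
HIGH_CONSEQUENCE = {"clinical_action", "vehicle_reroute", "shutdown_override"}

@dataclass
class Inference:
    label: str          # non-deterministic model output
    confidence: float   # model's self-reported confidence
    consequence: str    # consequence class assigned by local policy

def gate(decision: Inference) -> str:
    """Deterministic predicate over a non-deterministic classification.

    The model may output anything; what it is allowed to actuate is a
    fixed rule, evaluated locally with no network dependency."""
    if decision.consequence in HIGH_CONSEQUENCE and decision.confidence < CONFIDENCE_FLOOR:
        return "route_to_human"   # do not execute; queue for local operator
    return "execute"
```

The point of the sketch is the shape, not the values: the predicate is the same on a connected node and a partitioned one.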

For this to actually hold at a disconnected node, four mechanisms have to operate locally:

  • A policy-as-code engine that intercepts every inference call and evaluates the active policy against the model output before any actuation

  • A signed model registry that proves which model and which weights produced which decision (so the audit trail can answer “which version said what” without a phone-home)

  • An attested decision log that records, with a cryptographic signature from a key only that device controls, what the model output was, what the policy evaluated, and what was actuated; i.e., what was signed at the moment of decision, not at the next sync

  • A local human override that operates without connectivity, with the override itself recorded in the attestation log

None of these mechanisms need connectivity to function. All of them have to function whether connectivity is up or not.
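As a rough illustration of the third mechanism, the sketch below chains and signs decision records at the moment of decision. It uses an in-memory HMAC key as a stand-in; a real device would sign with an asymmetric key sealed in a TPM or secure element, and the record fields are invented for the example:

```python
import hashlib, hmac, json, time

# Stand-in for a key held only by this device. A production node would use
# an asymmetric key in a TPM or secure element, never an in-memory secret.
DEVICE_KEY = b"device-local-secret"

def append_record(log, model_version, output, policy_result, actuation):
    """Append one attested decision record, chained to the previous entry.

    Each record is signed when the decision is made, not at the next sync,
    so the trail stays valid even if the node never reconnects."""
    body = {
        "ts": time.time(),
        "model": model_version,            # answers "which version said what"
        "output": output,
        "policy": policy_result,
        "actuation": actuation,
        "prev": log[-1]["sig"] if log else "",  # hash chain preserves order
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    log.append(body)
    return body
```

An auditor (or the fleet controller, on reconnect) can verify any record by recomputing the signature over the record minus its `sig` field and walking the `prev` chain.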

When the node reconnects, attested store-and-forward packages the local state and merges it with the fleet-wide governance picture through a reconciliation protocol designed for intermittent asynchronous links. The reconciliation isn’t a copy operation; it’s a merge of divergent states that evolved independently, with the attestation chain preserved and conflicts resolved through fleet-level policy. That reconciliation capability is what transforms an offline-capable edge node into a genuinely governed autonomous system.
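A reconciliation merge of this kind might look like the following sketch, assuming each record carries a timestamp `ts` and a device signature `sig` (both names invented here). Real fleet-level conflict policy would be far richer; this shows only the interleave-and-dedupe core:

```python
def reconcile(fleet_log, node_log):
    """Merge a node's offline-attested records into the fleet log.

    This is a merge of divergent histories, not a copy: both sides kept
    evolving independently, so records interleave by timestamp, and each
    signed record is kept exactly once for the audit trail."""
    merged = sorted(fleet_log + node_log, key=lambda r: r["ts"])
    seen, out = set(), []
    for rec in merged:
        if rec["sig"] in seen:   # identical record already reconciled
            continue
        seen.add(rec["sig"])
        out.append(rec)
    return out
```

In a real protocol the merge would also verify each node's attestation chain before accepting its records and apply fleet policy to genuine conflicts, rather than simply deduplicating.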

That’s what extending the PLC principle into edge AI actually requires. It's not a metaphor; it's an architecture.

What Governed Edge AI Looks Like Across a Fleet

Across a fleet of governed nodes, the operational challenge extends well beyond connectivity. Four properties have to hold simultaneously, and they span governance, connectivity management, auditability, and hardware-level recovery.

Policy parity, cloud to disconnected. The same NIST AI RMF controls (including the GenAI Profile’s testing, red-teaming, and documentation practices), the same MITRE ATLAS coverage, and the same data-in-use protections that run in your data center also run on the offline node, sealed in an enclosure on a wind turbine, with the regulator’s clock still ticking. When the node reconnects, the audit trail comes back with it: lineage, action logs, attestation evidence, already in the format a CSF 2.0 or DORA examiner expects.

Lifecycle evidence the auditor recognizes. Aligned to NIST AI RMF, MITRE ATLAS, and Singapore’s Model AI Governance Framework from day one — with the artifacts the auditor wants to see, not a deck claiming alignment.

Recovery economics that scale across a fleet. At far-edge sites, hardware fails. Boards brick. Firmware updates fail mid-apply. OS corruption renders the management agent unreachable. In a conventional edge architecture, each of these is a truck roll: a technician drives to a remote wind farm, mine site, or cell tower at a cost of thousands of dollars per visit. Multiply that across a thousand sites, and fleet economics collapse. Hardware-rooted attestation and BMC-level recovery change the math entirely. Out-of-band management operates independently of the primary OS, the application stack, and the network management agent. It can power-cycle the node, re-image the OS, and restore a known-good state through a dedicated management interface. The node recovers remotely. The truck stays parked. This is not a convenience feature. For fleets operating at far-edge scale, it is the difference between a viable operation and an unsustainable one.
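For a sense of what BMC-level recovery looks like in practice, the sketch below builds the standard DMTF Redfish reset request that a management plane would POST to a node's baseboard management controller. The host address and system ID are hypothetical; a full remote re-image would additionally use the BMC's virtual-media endpoints:

```python
import json

def redfish_reset(bmc_host, system_id="1", reset_type="ForceRestart"):
    """Build the DMTF Redfish request that power-cycles a node out-of-band.

    The BMC services this endpoint independently of the host OS and its
    network agent, which is why it still works when the node is bricked."""
    url = (f"https://{bmc_host}/redfish/v1/Systems/{system_id}"
           "/Actions/ComputerSystem.Reset")
    body = json.dumps({"ResetType": reset_type})
    return url, body
```

A real client would send this with authenticated HTTPS and then watch the BMC's task service for completion; the point is that none of it touches the primary OS.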

Connectivity that fails gracefully. Far-edge sites rarely have one connectivity option. An offshore platform may have satellite, a cellular link from a coastal tower, and a point-to-point microwave to a neighboring facility. A smart agriculture deployment may have LoRaWAN for sensor telemetry, WiFi HaLow for device communication, and satellite for management backhaul. Policy-driven multi-bearer management allocates each data flow to the appropriate bearer based on operational rules enforced locally on the device. Safety alerts route over the most reliable bearer immediately. Routine telemetry batches for low-cost transmission windows. Management traffic routes through the most secure available link. When one bearer degrades, the policy engine reallocates without the application layer noticing. This is not failover from primary to backup. It is continuous optimization across every available bearer, governed by the same policy framework that governs the AI decisions themselves.
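A policy-driven bearer selector can be sketched in a few lines. The bearer table, cost figures, and flow classes below are invented for illustration; a production system would read live link state from the radios and express these rules in its policy engine:

```python
# Illustrative bearer state; a real system would populate this from modem
# and radio telemetry. All figures are assumptions for the example.
BEARERS = {
    "satellite": {"up": True, "reliability": 0.99, "cost": 9.0, "secure": True},
    "cellular":  {"up": True, "reliability": 0.95, "cost": 2.0, "secure": True},
    "lorawan":   {"up": True, "reliability": 0.80, "cost": 0.1, "secure": False},
}

POLICY = {
    # flow class -> how to rank the bearers that are currently up
    "safety_alert": lambda b: -b["reliability"],            # most reliable first
    "telemetry":    lambda b: b["cost"],                    # cheapest first
    "management":   lambda b: (not b["secure"], b["cost"]),  # secure links first
}

def route(flow_class):
    """Pick a bearer for one flow by locally enforced policy.

    Marking a degraded bearer down here reallocates every flow on the
    next call, without the application layer noticing."""
    up = {name: b for name, b in BEARERS.items() if b["up"]}
    if not up:
        raise RuntimeError("no bearer available; buffer locally")
    rank = POLICY[flow_class]
    return min(up, key=lambda name: rank(up[name]))
```

The design choice worth noting is that the ranking functions are data, not code paths: the same policy engine that gates AI actuation can own this table.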

This is governance treated as the operating layer — not as a compliance afterthought.

“Decoupled From the Cloud” Is Operational, Not Ideological

This isn’t an argument against hyperscalers. They’re an important part of any cloud-to-edge fabric. However, the decoupling I’m talking about is operational. The edge intelligence has to keep thinking when the link is gone, and it has to do it under the same policy regime as the rest of the fleet. That’s what no other architecture currently delivers at far-edge scale.

Cloud capabilities are essential. They just have to be additive, not gating. That distinction is the entire architectural argument. Connectivity is a feature; resilience is an architecture; governance is the operating layer that holds both together when the link drops.

What Penberthy’s Data Tells You About Where to Deploy AI

I see two key takeaways from Scott’s research.

First, build the horizontal AI infrastructure. You have to. Your competitors already have it. Penberthy’s data shows median returns are negative on horizontal AI, which means absence is more expensive than presence. His analogy is the right one: “Like having a website in 2000 or a mobile app in 2012. You fall behind without it. You don’t get ahead with it.” That’s the cost of competing in 2026: horizontal AI isn’t a strategic differentiator; it’s the new ground floor. And you don’t want to find yourself in the basement.

Second, find your verticals. Walk through your business with his five questions in hand:

  • Where do you touch the physical world?

  • Where do you operate behind regulatory clearance — FDA, FAA, PCI DSS v4.0.1, DORA, CSF 2.0?

  • Where do you sit on decades of proprietary data nobody can buy?

  • Where does a mistake cost you millions?

  • Where do your specialists take five years to grow?

The intersection of those answers is where you should investigate. Incidentally, it’s also where you’ll discover that your existing edge stack doesn’t survive contact with a partitioned site. That’s the moment to look at governance as the operating layer and at policy parity as the architecture, not as a bolt-on.

Edge-first architecture requires a new paradigm: governance that operates locally whether connected or not, connectivity managed by policy across whatever bearers are available, hardware-rooted recovery without a truck roll, and a platform that integrates all three so the operator deploys one system. That is what it takes when cloud-anchored AI meets the physical world.

That’s the work in front of us. Connected or not.

What Federant Is Building, and How to Get Early Access

The governance architecture I've described in this piece is what Federant is building. We don't think the industry can afford to let edge AI governance fragment into proprietary silos, so we're taking the foundational work open. The edge operating system and hardware designs, together with interfaces, attestation record formats, interoperability specifications, and additional details, will be published as open resources through a leading open infrastructure community. We're well along in active discussions and expect to formalize the partnership in the coming months.

Federant's underlying patent portfolio will be pledged under terms that ensure developers, partners, customers, and competitors can build compliant implementations without legal risk. The specific structure and terms will be published alongside the open community announcement.

We’re not building fences with eleven patents pending in edge AI governance to keep people out. We’re building roads with them, so the industry can follow a safe, proven path. Details will be posted at federant.com as the work matures.


Steve Yates, P.E. (VA) is CEO and Co-Founder of Federant, which is developing open governance infrastructure for autonomous edge AI. He previously founded ADI Engineering, a global OEM edge networking supplier, and led its acquisition by Silicom Ltd. He holds 9 US patents in edge networking and cybersecurity with 11 additional patents pending in AI governance infrastructure.

© 2026 Federant