Experience
Operating AI safety, rollout governance, and platform reliability at 40M–70M+ device scale. I’ve spent 12+ years turning messy telemetry and legacy infra into deterministic safety bars, rollout gates, and risk dashboards that leaders can actually act on.
Impact at a glance
Leadership highlights
- Unified 50+ engineers across 6 orgs under a single AI safety and rollout governance model.
- Regularly brief VP/SVP leaders on safety–latency–velocity tradeoffs for high-stakes releases.
- Turned messy telemetry into executive-ready dashboards and decision briefs.
- Designed architectures that became the basis for 8 patent-backed AI safety, privacy, and code-safety systems.
- Act as the bridge between infra, AI research, security, and product when decisions have real customer and regulatory impact.
What I own end-to-end
- Safety & Governance
- Edge & LLM Infra
- Product & Leadership
Case studies
Case Study 1 – AI safety governance for 40M–70M+ devices
Turning ad-hoc rollout decisions into deterministic risk gates.
- Problem: Firmware and feature releases across tens of millions of gateways were gated by manual judgment and fragmented telemetry, causing regressions and multi-million-dollar SLA risk.
- My role: Defined safety bars, anomaly thresholds, and rollout gates (a simplified sketch follows this case study); aligned Product, ML/DS, SRE, QA, Field Ops, and vendors on a single evaluation framework; partnered with leaders to encode these into decision workflows.
- Outcome: Reduced regression recurrence by 28%, improved triage and recovery by 35%, and gave VP/SVP leaders a clear, quantitative view of safety vs. velocity for every high-stakes release.
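A minimal sketch of what a deterministic rollout gate of this kind can look like. The metric names, thresholds, and promote/hold logic below are illustrative assumptions, not the production system; real safety bars are defined per release cohort.

```python
from dataclasses import dataclass

# Illustrative safety bars only; real limits are set per release and cohort.
SAFETY_BARS = {
    "reboot_rate_delta_pct": 2.0,       # max allowed increase vs. control cohort
    "wifi_disconnects_delta_pct": 3.0,  # max allowed Wi-Fi disconnect increase
    "rollback_events_per_10k": 5.0,     # max rollbacks per 10k canary devices
}

@dataclass
class CohortMetrics:
    """Aggregated telemetry for a canary cohort compared against its control group."""
    reboot_rate_delta_pct: float
    wifi_disconnects_delta_pct: float
    rollback_events_per_10k: float

def evaluate_gate(metrics: CohortMetrics) -> tuple[bool, list[str]]:
    """Return (promote?, violations). Deterministic: same inputs always give the same decision."""
    violations = [
        f"{name}: {getattr(metrics, name):.2f} exceeds bar {limit:.2f}"
        for name, limit in SAFETY_BARS.items()
        if getattr(metrics, name) > limit
    ]
    return (not violations, violations)

if __name__ == "__main__":
    canary = CohortMetrics(reboot_rate_delta_pct=1.1,
                           wifi_disconnects_delta_pct=4.5,
                           rollback_events_per_10k=2.0)
    promote, reasons = evaluate_gate(canary)
    print("PROMOTE" if promote else "HOLD", reasons)
```

The point of the pattern is that every hold or promote decision comes with the specific bars that were violated, so the same evidence can be shown to engineering, leadership, and auditors.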
Case Study 2 – Deterministic rule engines behind 8 patent filings
From one-off checks to a reusable safety and privacy architecture.
- Problem: Safety, privacy, and quality checks were scattered across scripts and tools, making it hard to guarantee consistent behavior or explain decisions to leadership and auditors.
- My role: Designed a reusable rule-engine pattern for redaction, code safety, alignment debugging, and risk tagging (see the sketch after this case study); led PoCs and prototypes; evolved it into multiple patent-backed architectures including EdgeLLM V2, AI Risk Navigator, AutoRedact AI, and LLM Code Safety Auditor.
- Outcome: Created a deterministic foundation for AI safety and governance, resulting in 8 USPTO-filed non-provisional patents and reusable blueprints that can be applied across new AI products without retraining models.
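A simplified sketch of the general rule-engine idea: each check is a small, pure, explainable rule, and the engine's verdict is the deterministic union of rule results. The rule names, patterns, and fields here are hypothetical examples, not the patented designs.

```python
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    rule_id: str
    message: str

# A rule is a pure function: given input text, it returns zero or more findings.
Rule = Callable[[str], list[Finding]]

def email_redaction_rule(text: str) -> list[Finding]:
    """Flag email addresses that would need redaction before text leaves the device."""
    hits = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    return [Finding("privacy.email", f"email address detected: {h}") for h in hits]

def shell_injection_rule(text: str) -> list[Finding]:
    """Flag an obviously destructive shell construct in generated code or commands."""
    if re.search(r"rm\s+-rf\s+/", text):
        return [Finding("code_safety.shell", "destructive shell command detected")]
    return []

def run_rules(text: str, rules: list[Rule]) -> list[Finding]:
    """Deterministic evaluation: every rule runs, every finding is reported and traceable."""
    return [finding for rule in rules for finding in rule(text)]

if __name__ == "__main__":
    sample = "Contact ops@example.com then run: rm -rf /tmp/build"
    for f in run_rules(sample, [email_redaction_rule, shell_injection_rule]):
        print(f.rule_id, "-", f.message)
```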
Case Study 3 – Telemetry-driven risk scoring for rollout decisions
Giving executives a single risk lens across millions of devices.
- Problem: Post-deployment incidents were hard to connect back to specific releases or configurations, slowing rollback decisions and extending customer impact.
- My role: Defined metrics, thresholds, and dashboards that connect device-level telemetry (Wi-Fi KPIs, error codes, rollback events) to release decisions (a simplified scoring sketch follows); partnered with data and platform teams to productionize the scoring model.
- Outcome: Enabled faster detection of bad releases, clearer rollback criteria, and a consistent language for risk across engineering, operations, and leadership teams.
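A minimal sketch of the release risk-scoring idea: fleet telemetry is aggregated per firmware version and mapped to one comparable number. The signal names, weights, and rollback threshold below are illustrative assumptions, not the production model.

```python
from dataclasses import dataclass

# Illustrative weights; in practice these are tuned against historical incidents.
WEIGHTS = {"error_rate": 0.5, "wifi_kpi_degradation": 0.3, "rollback_rate": 0.2}
ROLLBACK_THRESHOLD = 0.7  # scores above this trigger a rollback review

@dataclass
class ReleaseSignals:
    """Fleet-level signals for one firmware release, each normalized to 0..1."""
    error_rate: float              # share of devices reporting new error codes
    wifi_kpi_degradation: float    # drop in Wi-Fi KPIs vs. the previous release
    rollback_rate: float           # share of devices that have already rolled back

def risk_score(signals: ReleaseSignals) -> float:
    """Weighted sum of normalized signals: one number leaders can compare across releases."""
    return sum(weight * getattr(signals, name) for name, weight in WEIGHTS.items())

if __name__ == "__main__":
    release = ReleaseSignals(error_rate=0.12, wifi_kpi_degradation=0.30, rollback_rate=0.05)
    score = risk_score(release)
    print(f"risk={score:.2f}", "ROLLBACK REVIEW" if score > ROLLBACK_THRESHOLD else "MONITOR")
```

A single score per release gives engineering, operations, and leadership the shared risk language described above, while the per-signal breakdown preserves the evidence behind each rollback call.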
Career timeline
2020 – Present
AI Safety & Rollout Governance · Comcast (via Tata Elxsi)
Lead safety bars, rollout gates, and risk scoring for broadband and Wi-Fi fleets serving 40M–70M+ devices, partnering with Product, ML/DS, SRE, QA, and Field Ops.
2018 – 2020
CPE / RDK-B Platform Engineer · Comcast
Owned Wi-Fi reliability, telemetry pipelines, and firmware rollout stability for large-scale device fleets, focusing on regressions, triage, and recovery.
2014 – 2018
Systems & Network Engineering Roles
Built and operated networked systems, laying the foundation for later work in large-scale platform reliability, observability, and AI governance.
Why this experience maps to AI Product Management
My work sits where high-stakes infrastructure, AI, and governance meet. I define safety bars, design rule engines, align cross-org stakeholders, and give executives clear risk tradeoffs. The same skills apply directly to senior AI Product roles in AI Safety, LLM Infra, Edge AI, and Platform teams—where decisions must balance model performance, safety, latency, privacy, and regulatory expectations, not just features.