Quantum Networking, Security, and Sensing: The Overlooked Adjacent Markets
Quantum networking, QKD, and sensing may deliver enterprise value sooner than gate-model computing for security, infrastructure, and precision use cases.
When most teams hear “quantum,” they think gate-model computing, error correction, and a distant future of fault-tolerant machines. That framing misses a more immediate enterprise story: the adjacent markets of quantum networking, quantum security, and quantum sensing. These segments are already being shaped by telecom carriers, critical infrastructure operators, defense programs, and precision-industry buyers who care less about universal quantum speedups and more about measurable improvements in secure communications, navigation, timing, detection, and assurance. If you want a practical primer on the computing side first, start with our guide to qubit basics for developers, then come back to this guide to understand why many organizations should prioritize these non-compute markets sooner.
There is also a commercial reason this matters. Enterprise adoption rarely begins where the physics is most glamorous; it begins where risk, regulation, and ROI align. That is why the right lens for many organizations is not “Can quantum computers replace classical systems now?” but “Where can quantum technologies solve an existing operational problem, reduce exposure, or create a new capability with a realistic integration path?” In that sense, the adjacent markets are often more enterprise-ready than gate-model computing. They fit naturally alongside current network architectures, security programs, and instrumentation stacks, much like operational modernization efforts described in our guide to moving from one-off pilots to an operating model or measuring automation ROI with disciplined experiments.
1) Why adjacent quantum markets deserve enterprise attention now
The core strategic shift: from compute promises to operational capabilities
Quantum computing is still constrained by noise, scale, and the long road to fault tolerance. Quantum networking, security, and sensing do not face the same “solve everything” burden. Instead, they address specific, high-value workflows: distributing keys, detecting faint signals, improving timing precision, or linking remote quantum devices in a trusted architecture. That narrower scope is an advantage because it produces clearer pilot criteria, more defensible investment cases, and shorter feedback loops for operational teams.
Enterprises already know how to fund technologies that improve trust, observability, and resilience. Security teams budget for better identity controls and key management; network teams budget for latency, redundancy, and protected paths; industrial teams budget for instrumentation that improves detection thresholds and reduces false positives. The adjacent quantum stack maps directly onto those categories. In other words, this is not science fiction procurement. It is an extension of the same due-diligence mindset covered in technical KPI checklists for hosting providers and capacity decision-making from off-the-shelf research.
Where the market is already forming
Source material from the industry shows vendors increasingly positioning beyond pure compute. IonQ, for example, explicitly markets quantum computing, networking, security, and sensing as part of a full-stack platform. That matters because it signals commercial bundling: the market is not waiting for one breakthrough to unlock value, but instead packaging multiple quantum modalities for different buyer needs. A separate company-landscape survey also confirms the breadth of activity across communication and sensing, not just computing, with companies spanning photonics, cryptography, trapped ions, neutral atoms, and integrated photonics.
For enterprise buyers, the practical implication is that vendor claims should be evaluated by domain fit, not by headline qubit count alone. Security teams need to understand how a quantum-safe roadmap is being implemented. Infrastructure teams need to understand whether sensing improves uptime, detection, or calibration. Network architects need to understand interoperability with existing fiber, telecom, and trust frameworks. That kind of evaluation is similar to the way teams assess hybrid software stacks in code quality workflows or internal team structures for marketplaces: the key question is operational integration, not novelty.
Why critical infrastructure buyers are ahead of general enterprise buyers
Critical infrastructure operators—utilities, telecoms, transportation, defense, and emergency services—have the strongest incentives to adopt first. They face long asset lifecycles, sophisticated adversaries, and strict continuity requirements. If a technology offers even incremental gains in secure communications, location assurance, or sub-surface detection, it can justify pilots earlier than in more discretionary IT environments. That is also why quantum-adjacent markets are often discussed in the same breath as national security and resilience planning.
For these buyers, the decision framework is closer to resilience engineering than innovation theater. They ask: does this reduce operational uncertainty, improve response speed, harden a trust boundary, or reveal physical information that conventional systems cannot? Those are practical questions, not academic ones. In this context, the enterprise relevance of quantum networking and sensing can exceed that of computing because the return is tied to current missions, not to speculative future compute advantage.
2) Quantum networking explained: what it is and why enterprises care
Quantum networking is not just “faster internet”
Quantum networking refers to the transmission, distribution, and coordination of quantum states across nodes, often to support secure communication or link quantum devices in a larger architecture. It is not a replacement for the internet as we know it. Instead, it adds a specialized layer that can enable quantum key distribution, entanglement-based protocols, and eventually distributed quantum computing and quantum sensing networks. The enterprise case starts with secure communication and trusted key exchange, then expands toward multi-node quantum systems.
For many organizations, the immediate relevance is not end-to-end quantum teleportation or a global quantum internet. It is how to create stronger trust in a communication path, especially where the stakes are high and the attack surface is large. That includes inter-data-center links, defense communications, government networks, and sector-specific links between mission-critical facilities. Just as teams compare tooling before deployment in simulation-first quantum workflows, network leaders should compare quantum networking approaches against their current encryption, transport, and key management architecture.
Enterprise use cases for quantum networking
One major use case is protected links between highly sensitive sites. If a bank, government agency, or energy operator needs secure coordination between geographically separated facilities, quantum networking can support a stronger trust model for key exchange and future-proofed communications. Another use case is distributed quantum infrastructure itself: as quantum processors grow more networked, enterprises may eventually access composite systems across locations. This is especially relevant for research consortia and cloud providers experimenting with hybrid quantum-classical orchestration.
There is also a strategic use case in vendor and sovereign risk management. Organizations can reduce concentration risk by designing communications architectures that do not depend exclusively on a single region, provider, or trust domain. That aligns with the broader enterprise practice of building resilience into critical workflows, similar to how operations teams evaluate risk in commercial AI for military ops or protect high-value assets in surveillance-heavy environments. Quantum networking adds a new class of protected transport for scenarios where conventional security may not be enough.
What makes quantum networking technically hard
The challenge is not just fiber or hardware. Quantum states are fragile, and distribution at scale depends on loss, noise, timing, and the ability to preserve entanglement across distance and through intermediate nodes. That means repeaters, trusted nodes, and future quantum memory systems are all central to the roadmap. Enterprises must therefore treat quantum networking as a staged program, not a turnkey upgrade. Pilot projects should test path loss, integration points, latency trade-offs, and failover procedures before any production commitment.
Pro tip: Do not evaluate quantum networking as if it were a universal replacement for TLS, VPNs, or MPLS. Evaluate it as an additional trust and transport layer for highly sensitive links, then define where it sits in your security architecture.
3) Quantum security and QKD: where the commercial story becomes concrete
What QKD actually does
Quantum key distribution, or QKD, is one of the most practical quantum security use cases because it focuses on a narrow but important task: generating and sharing cryptographic keys with detection properties that reveal certain eavesdropping attempts. QKD does not magically make all communications secure, and it does not replace every classical security control. Instead, it provides a specialized method for key exchange and complements existing encryption systems. That distinction is crucial for procurement teams and security architects.
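That detection property is easiest to see in the canonical BB84 protocol: an intercept-resend attacker unavoidably introduces errors into the sifted key at a rate the two parties can measure. The toy simulation below illustrates the statistics only; the seed, sample size, and noiseless-channel assumption are simplifications, and this is a sketch for intuition, not a security implementation:

```python
import random

def bb84_sift(n_bits=2000, eavesdrop=False, seed=7):
    """Toy BB84 sketch: Alice encodes bits in random bases, Bob
    measures in random bases, and both keep only the positions where
    the bases matched (the 'sifted' key). An intercept-resend
    eavesdropper pushes the error rate on that key toward ~25%."""
    rng = random.Random(seed)
    sifted_alice, sifted_bob = [], []
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        alice_basis = rng.randint(0, 1)
        bob_basis = rng.randint(0, 1)
        arriving = bit
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            if eve_basis != alice_basis:
                # Eve measured in the wrong basis: her result is random,
                # so the bit Bob recovers in Alice's basis is random too.
                arriving = rng.randint(0, 1)
        if bob_basis == alice_basis:
            sifted_alice.append(bit)
            sifted_bob.append(arriving)
    errors = sum(a != b for a, b in zip(sifted_alice, sifted_bob))
    return errors / len(sifted_alice)  # observed QBER on the sifted key

print(f"QBER, clean channel:    {bb84_sift():.3f}")
print(f"QBER, intercept-resend: {bb84_sift(eavesdrop=True):.3f}")
```

The operational takeaway is the comparison step: the parties sacrifice a sample of the sifted key, measure the error rate, and abort if it exceeds a threshold. Real deployments add authentication, error correction, and privacy amplification on top of this core idea.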
Many vendor claims collapse because they overstate the scope of QKD. The buyer should ask whether the solution is point-to-point or networked, whether it uses trusted nodes, how it handles key management, and what operational evidence supports its security model. The comparison to discount spotting and price validation may sound humorous, but the principle is serious: apply the same disciplined skepticism teams use when reading about emerging platforms, and do not conflate marketing with validated capability. For security programs, that means demanding architectural diagrams, test results, and cryptographic assumptions in plain language.
Where QKD fits best in enterprise environments
QKD is most compelling where the communication path is both highly sensitive and relatively stable. That includes government-to-government links, defense supply chains, central bank infrastructure, intercity backbone links, and critical infrastructure control networks. In these settings, the value proposition is not only confidentiality but also future preparedness. Organizations are concerned about “harvest now, decrypt later” risk, where encrypted data captured today could be decrypted in the future if cryptographic assumptions fail or if large-scale quantum attacks become viable.
This is why quantum security should be viewed alongside post-quantum cryptography, not in competition with it. Post-quantum cryptography is software-centric and broadly deployable; QKD is hardware-and-link-centric and more specialized. Enterprises should think in layers: migrate to quantum-resistant algorithms where possible, assess where QKD is justified for especially sensitive links, and define governance for how those decisions are operationalized. That type of layered thinking resembles how teams approach phased modernization in enterprise operating models and ROI experiments.
Security procurement questions that separate serious vendors from hype
Enterprises should ask direct questions: What threat model is being addressed? What key rates are supported at operational distances? Is the solution compatible with existing optical infrastructure? How is authentication handled? What happens during link failure or adverse weather? What evidence exists from field deployments? These questions are especially important because many quantum security deployments depend on the surrounding classical stack for authentication, routing, and service continuity.
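The key-rate-at-distance question has a useful first-order sanity check. Standard telecom fiber attenuates at roughly 0.2 dB/km, so without quantum repeaters the raw detection rate of a point-to-point link decays exponentially with length. The estimate below is a sketch with illustrative parameters; the pulse rate, mean photon number, and detector efficiency are assumptions, and real systems lose further throughput to sifting, error correction, and privacy amplification:

```python
def qkd_rate_estimate(distance_km, pulse_rate_hz=1e9, mean_photons=0.1,
                      detector_eff=0.2, loss_db_per_km=0.2):
    """First-order detected-photon rate for a point-to-point fiber
    QKD link with no repeaters: exponential decay with distance."""
    transmittance = 10 ** (-loss_db_per_km * distance_km / 10)
    return pulse_rate_hz * mean_photons * detector_eff * transmittance

for d in (50, 100, 200, 400):
    print(f"{d:>4} km: ~{qkd_rate_estimate(d):,.0f} detections/s")
```

Running it shows why metro-scale links are practical while long-haul point-to-point links are not, and why trusted nodes and repeaters dominate the networking roadmap. A vendor's claimed key rate should always come attached to the distance and loss budget at which it was measured.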
Good procurement also requires operational thinking. Who owns the key lifecycle? Which team patches the physical devices? How are logs exported into SIEM and SOC workflows? What is the fallback if the quantum link degrades? If these questions sound familiar, that is because they mirror the same integration discipline used in enterprise systems work, from proof-of-delivery workflows to automated KYC pipelines. Quantum security only becomes useful when it fits the operational grain of the organization.
4) Quantum sensing: the hidden market with the broadest practical surface area
Why sensing may be the most immediately useful adjacent market
Quantum sensing exploits the extraordinary sensitivity of quantum states to environmental changes. That sensitivity can be a liability in computing, but in sensing it becomes a feature. The result is potential advances in timing, magnetometry, gravimetry, inertial navigation, and imaging. For enterprises, this is often more tangible than computing because it maps to concrete operational tasks: locating assets, detecting anomalies, improving navigation where GPS is unavailable, and characterizing physical environments with greater precision.
This broad applicability is one reason sensing may deliver value sooner. A utility can use better sensing to inspect infrastructure. A mining company can use improved measurements to locate resources. A hospital or research institution can use ultra-precise instrumentation for imaging and diagnostics. A transport operator can benefit from navigation systems that continue working in denied environments. These are not abstract “future quantum” claims; they are direct extensions of enterprise measurement needs, similar in spirit to how operational analytics helps schools spot struggling students earlier or how fleet teams use telemetry in fleet lifecycle economics.
Common enterprise sensing use cases
In critical infrastructure, sensing can support inspection of pipelines, power lines, substations, tunnels, and underground structures. In defense and aerospace, it can improve inertial navigation and detection in environments where satellite signals are jammed or unavailable. In geoscience and resource discovery, quantum sensors may improve the detection of subsurface structures or minute gravitational changes. In medical and life sciences, higher-precision measurement can aid imaging, diagnostics, and laboratory instrumentation.
The key point is that quantum sensing often competes with traditional instrumentation on precision and environmental tolerance rather than on compute performance. That means the enterprise buying process resembles capital equipment procurement, not software adoption. Buyers must evaluate calibration, maintenance, operating conditions, integration with analytics stacks, and total cost of ownership. If your team already understands how to compare physical systems using lifecycle criteria, as in simulation vs hardware trade-offs or technical due diligence, you are closer to the right mental model.
Why sensing can fit into existing enterprise workflows
Quantum sensors are often easiest to justify when they slot into current monitoring, analytics, or maintenance programs. For example, a sensor that increases detection fidelity is only valuable if the organization can ingest its outputs, correlate them with existing data, and act on them quickly. That means edge processing, data pipelines, and operator dashboards matter just as much as the sensor itself. The “last mile” of sensing is software and workflow integration, not physics alone.
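As a concrete example of that "last mile," the sketch below shows the kind of lightweight edge logic a sensing feed typically needs before a dashboard or response workflow can use it: a rolling z-score flag over the incoming stream. The window size, warm-up length, and threshold are illustrative assumptions to tune per instrument:

```python
from collections import deque
import math

def anomaly_stream(readings, window=50, warmup=10, z_threshold=4.0):
    """Rolling z-score flag for a sensor feed: yield (index, value)
    for readings that deviate strongly from the recent baseline."""
    buf = deque(maxlen=window)
    for i, x in enumerate(readings):
        if len(buf) >= warmup:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var) or 1e-12  # guard a perfectly flat baseline
            if abs(x - mean) / std > z_threshold:
                yield i, x  # hand off to dashboards / response workflow
        buf.append(x)

readings = [1.0] * 100
readings[60] = 9.0  # injected spike
print(list(anomaly_stream(readings)))  # -> [(60, 9.0)]
```

A higher-precision sensor mainly changes the signal-to-noise ratio feeding this kind of logic; the ingestion, correlation, and escalation machinery around it is ordinary enterprise software and has to exist either way.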
That integration challenge is where many pilots fail. A proof of concept may demonstrate remarkable precision in a lab, but enterprise value requires reliability, environmental hardening, and operational interpretation. Teams should treat sensing pilots like production-readiness exercises: define the measurement objective, baseline current performance, specify acceptance thresholds, and map the downstream response process. This is the same discipline behind quality programs in software quality and governance-heavy environments like bank risk and compliance.
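A minimal acceptance harness makes those steps explicit. The metric names and margins below are illustrative assumptions, not a standard; the point is that pass/fail is defined against the incumbent baseline before the pilot starts:

```python
def evaluate_sensing_pilot(baseline, pilot, margins):
    """Pass/fail per metric: the pilot must beat the incumbent
    baseline by at least the pre-agreed margin. Metrics whose name
    contains 'false' are treated as lower-is-better."""
    report = {}
    for metric, margin in margins.items():
        if "false" in metric:
            gain = baseline[metric] - pilot[metric]
        else:
            gain = pilot[metric] - baseline[metric]
        report[metric] = gain >= margin
    return report, all(report.values())

baseline = {"detection_rate": 0.82, "false_positive_rate": 0.12}
pilot    = {"detection_rate": 0.91, "false_positive_rate": 0.05}
report, passed = evaluate_sensing_pilot(
    baseline, pilot,
    margins={"detection_rate": 0.05, "false_positive_rate": 0.03},
)
print(report, "PASS" if passed else "FAIL")
```

Agreeing on the margins before any hardware arrives is what keeps a pilot from being graded on enthusiasm after the fact.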
5) Comparing quantum networking, security, sensing, and gate-model computing
Decision matrix for enterprise buyers
Not every quantum use case should be judged with the same expectations. The table below provides a practical comparison for enterprise stakeholders deciding where to focus first. In most cases, the adjacent markets have clearer operational fit and shorter paths to value than gate-model computing, especially for organizations with strong security, infrastructure, or instrumentation needs.
| Domain | Primary value | Typical buyer | Time to enterprise value | Main adoption barrier |
|---|---|---|---|---|
| Quantum networking | Protected links, future quantum interconnects | Telecom, defense, government, regulated infrastructure | Short to medium | Integration with existing transport and trust layers |
| Quantum security / QKD | Specialized key exchange and key transport assurance | Critical infrastructure, sovereign networks, financial institutions | Short to medium | Cost, distance constraints, architecture complexity |
| Quantum sensing | Higher-precision detection and measurement | Energy, aerospace, healthcare, mining, transport | Short to medium | Environmental hardening and workflow integration |
| Gate-model computing | Potential long-term algorithmic acceleration | R&D, advanced analytics, optimization teams | Long | Noise, scale, error correction, unclear near-term ROI |
| Post-quantum cryptography | Quantum-resistant classical encryption | Nearly every enterprise | Short | Migration planning and crypto inventory |
This comparison should not be read as a dismissal of gate-model computing. Rather, it shows why many enterprises will derive earlier and more predictable value from adjacent markets. Organizations that cannot yet justify experimental compute workloads may still be able to justify secure communication infrastructure or precision sensing programs. That matters for IT leaders who need to align technology investments with operational outcomes, not hype cycles. It also echoes the value of cross-functional planning discussed in data storytelling and positioning and alternative dataset analysis.
When to choose which path
If your enterprise is primarily focused on cryptographic resilience, start with post-quantum cryptography and assess QKD for high-value links. If your challenge is navigation, detection, or measurement in difficult environments, prioritize sensing. If your requirement is multi-node quantum infrastructure or secure interconnect research, look at quantum networking. If your team is exploring algorithmic advantage for chemistry, optimization, or materials, then gate-model computing may belong on the roadmap—but likely not as the first operational pilot.
A useful rule is to match the technology to the pain point. Quantum sensing solves precision problems. Quantum networking solves trust and interconnect problems. Quantum security solves key exchange and secure distribution problems. Gate-model computing solves future algorithmic problems, but usually after the hardware stack matures. That framing makes the adjacent markets especially attractive to enterprises whose budgets are governed by risk reduction and mission assurance.
6) How enterprise adoption actually happens: from pilot to production
Start with the operational workflow, not the vendor demo
The most common mistake in quantum-adjacent-market adoption is starting with a demo and then searching for a use case. The better approach is to start with an operational workflow that has an expensive failure mode. For example, a utility might identify a monitoring problem where conventional sensing misses early warning signs. A telecom might identify a secure link that needs stronger assurance. A defense organization might identify a navigation challenge in denied environments. Once the workflow is clear, you can assess whether a quantum modality genuinely improves the outcome.
This approach is consistent with the broader enterprise modernization playbook: define the outcome, establish a baseline, test against current methods, and scale only after evidence is strong. It is the same logic behind turning metrics into product intelligence and connecting system layers in a modern stack. For quantum projects, the baseline should be especially rigorous because the technology is still emerging and the surrounding ecosystem is fragmented.
Build a procurement checklist that includes both physics and operations
Enterprise teams should evaluate the following before committing to a pilot: measurable business problem, environmental constraints, integration points, vendor maturity, service model, data export requirements, fallback behavior, maintenance overhead, and security governance. For sensing projects, include calibration schedules, placement constraints, and operator training. For networking and QKD projects, include optical path assumptions, node trust architecture, authentication methods, and failover plans. If these questions are not answered clearly, the pilot is probably not ready.
It is also wise to treat vendor proof points with healthy skepticism. Ask for independent validation, published field results, and clear assumptions. In adjacent markets, the gap between lab success and enterprise deployability can be large. That is why organizations often benefit from the same kind of evidence discipline seen in policy impact evaluation and fact-checking under pressure: claims are not enough without reproducible context.
Plan for hybrid integration, not replacement
Quantum networking, security, and sensing are almost never standalone replacements for existing systems. They are hybrid layers that augment classical infrastructure. That means your design should anticipate translation between quantum and classical components, logging and observability, incident response, and change management. The more tightly you integrate the project with existing enterprise practices, the more likely it is to survive beyond the pilot phase.
That is especially important for shared services, where different teams own pieces of the stack. Security may own key policy, network engineering may own transport, and operations may own the physical site. Coordination failures can kill good projects. If your organization has experience with cross-team automation, as in expense tracking workflows or shipment APIs, you already understand why integration governance matters more than isolated performance claims.
7) Vendor landscape and ecosystem signals
What the company landscape tells us
Source material listing companies involved in quantum computing, communication, and sensing shows a broad and maturing ecosystem. That ecosystem includes hardware companies, photonics specialists, integrated-photonics startups, communication-focused ventures, and organizations spanning multiple quantum subfields. The important signal is not just that many companies exist, but that the market is diversifying by modality. This usually indicates commercial specialization: some firms focus on compute, while others aim at networks, security, or sensors.
For enterprise buyers, specialization is a double-edged sword. It can improve focus and technical depth, but it can also complicate procurement because no single vendor may cover the full stack. That is why decision-makers should map the ecosystem by capability, not by brand name alone. If you need quantum security, a pure compute vendor may not be your best fit. If you need sensing, a photonics or instrumentation vendor may be more appropriate than a processor-centric company. This is similar to how enterprises choose tools in other domains, such as AI-assisted product workflows or trade show sourcing strategies where niche fit matters more than general popularity.
How to evaluate vendor readiness
Look for evidence of field deployments, not just lab demos. Assess whether the vendor provides integration documentation, service-level expectations, and support for your environment. Ask whether the offering is a product, a platform, a research collaboration, or a custom build. Each has different timelines and risk profiles. In adjacent quantum markets, the difference between a sellable product and a promising prototype is often decisive for budget approval.
It is also useful to evaluate the partner ecosystem. A vendor that works with cloud providers, telecom carriers, systems integrators, or government labs may be easier to pilot with than one that expects you to build the whole stack from scratch. That is one reason ecosystem breadth matters as much as technical novelty. The same logic applies in other industries where platform integrations accelerate adoption, such as low-power telemetry app design or subscription-based commerce.
8) A practical enterprise roadmap for the next 12 to 24 months
Phase 1: identify mission-relevant problems
Begin by inventorying pain points in secure communications, sensing, timing, navigation, and infrastructure monitoring. Rank them by operational cost, risk severity, and technical feasibility. If you cannot articulate a business problem in one sentence, the use case is too vague. This is the stage where security teams, network teams, OT teams, and research leaders should co-define outcomes rather than work in silos.
For many organizations, the first pilots will likely be quantum security assessments or sensing feasibility studies. Those pilots are easier to contextualize than compute-first experiments because they map to known operational categories. They also allow you to establish governance around procurement, testing, and rollout. A disciplined start can save months of wasted experimentation later.
Phase 2: run constrained pilots with success criteria
Design pilots with explicit metrics. For QKD, define acceptable key rates, latency impact, operational uptime, and failover conditions. For sensing, define detection thresholds, false-positive reduction, environmental robustness, and calibration drift. For networking, define path integrity, node trust constraints, and recovery performance. The goal is to test whether the technology improves your workflow enough to justify deployment complexity.
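For a QKD pilot, those metrics can be captured as a declarative gate agreed with leadership up front. The criterion names and limits below are hypothetical examples for illustration:

```python
import operator

# Hypothetical pilot gate: each criterion pairs a comparison
# direction with a pre-agreed limit.
CRITERIA = {
    "secret_key_rate_bps": (operator.ge, 1_000),  # at target distance
    "added_latency_ms":    (operator.le, 5.0),
    "uptime_fraction":     (operator.ge, 0.995),
    "failover_seconds":    (operator.le, 30.0),
}

def gate(measured):
    """Evaluate measured pilot results against every criterion."""
    verdict = {name: op(measured[name], limit)
               for name, (op, limit) in CRITERIA.items()}
    return verdict, all(verdict.values())

measured = {"secret_key_rate_bps": 2_400, "added_latency_ms": 3.1,
            "uptime_fraction": 0.997, "failover_seconds": 18.0}
verdict, ok = gate(measured)
print("PROCEED" if ok else "HOLD", verdict)
```

The same shape works for sensing and networking pilots by swapping the criteria; what matters is that the gate is written down before the vendor demo, not after.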
Use existing enterprise experimentation practices here. Teams that already know how to define ROI windows, as in 90-day experiments, should adapt that rigor to quantum pilots. This prevents “science project” drift and makes it easier to report results to leadership. It also forces clear ownership of the operational handoff from R&D to production.
Phase 3: scale only where integration is repeatable
Scale the use case only after you can reproduce the outcome in more than one setting. The best indicator of enterprise readiness is not a single flashy benchmark but a repeatable operational gain. For quantum sensing, that might mean validated deployments across multiple sites. For QKD, it might mean standardized link architecture. For networking, it might mean a roadmap to connect protected nodes under common governance.
At this stage, teams should create internal enablement materials, support models, and procurement language. That is how technologies move from a pilot to a capability. The same principle applies in talent and enablement workflows, which is why organizations benefit from structured learning paths like designing learning paths for busy teams and from change-management narratives that make technical programs legible to non-specialists.
9) What to watch next: standards, regulation, and market maturation
Standards will determine deployability
As with any infrastructure technology, standards will shape adoption. Quantum security especially depends on interoperability, authentication, and formalization of how keys and devices are managed. Quantum networking will need defined interfaces for nodes, repeaters, and trust architectures. Quantum sensing will benefit from standards around calibration, measurement quality, and data interchange. Buyers should watch industry groups and national programs closely because standards often determine who can actually deploy at scale.
Regulation will matter too. Governments are likely to influence procurement, export controls, security requirements, and critical infrastructure policies. That can accelerate adoption in some sectors and slow it in others. Enterprises should maintain a policy watch list so that investment decisions are not made blind to compliance reality. This is especially true in sectors where procurement is driven by national security, resilience mandates, or regulated communications.
The likely winners: use-case specialists and integration partners
The companies most likely to win in the near term are not necessarily those with the biggest research headlines. They are the ones that can package a quantum capability into a deployable, supportable, and auditable enterprise solution. That often means partnerships with system integrators, telecoms, cloud providers, and defense contractors. The vendor that wins is the one that reduces the buyer’s integration burden.
For buyers, that means success will depend on choosing partners who speak both physics and operations. You need people who can explain entanglement and people who can explain change windows, support tickets, and control objectives. The vendors and integrators that bridge that gap will likely capture the most enterprise revenue as the market matures.
10) Bottom line: why adjacent markets may outperform compute for some enterprises
The investment logic is simpler than gate-model computing
Gate-model computing remains important, but it is often a longer-horizon investment with more uncertain near-term ROI. Quantum networking, quantum security, and quantum sensing, by contrast, can often be tied to existing enterprise pain points with measurable outcomes. That makes them easier to justify to decision-makers, especially in environments that prioritize resilience, confidentiality, and physical-world precision. For some organizations, these adjacent markets will be the first real quantum budget line items.
This does not mean the adjacent markets are easy. They are technically demanding, integration-heavy, and sometimes constrained by geography or environment. But they offer something highly valuable: a credible bridge from quantum science to enterprise utility. That bridge is where adoption happens. It is also why organizations evaluating the broader ecosystem should not stop at compute-focused vendor roadmaps.
What enterprise leaders should do next
Start by identifying one communications, security, or sensing problem where quantum could plausibly improve outcomes. Build a pilot with clear metrics and a clear fallback path. Ask vendors for field evidence, integration documentation, and support models. Then compare the result not against hype, but against your current classical baseline. That is how you determine whether quantum networking, security, or sensing belongs in your roadmap now.
If you want to broaden the context, review how quantum companies are positioning across compute, communication, and sensing, and compare those claims with your organization’s actual needs. You will likely find that the non-compute areas are closer to deployment than expected. For many enterprises, that is the real quantum story today.
Key takeaway: If your organization needs secure communications, precision measurement, or critical infrastructure resilience, the adjacent quantum markets may offer a more immediate and defensible entry point than gate-model computing.
Frequently Asked Questions
Is quantum networking the same as quantum internet?
No. Quantum networking is the broader category of technologies used to transmit and coordinate quantum information across nodes. A quantum internet is a future large-scale vision built on top of those technologies. Enterprises should think about quantum networking in practical terms first: secure links, protected nodes, and staged deployment.
Does QKD replace post-quantum cryptography?
Not in most enterprise environments. Post-quantum cryptography is more broadly deployable and software-driven, while QKD is specialized and infrastructure-dependent. Most organizations should treat them as complementary layers rather than substitutes.
Which industries are most likely to adopt quantum sensing first?
Utilities, defense, aerospace, mining, transport, and healthcare are strong candidates because they often need higher-precision detection, navigation, or imaging. These sectors also tend to have budgets for capital equipment and longer time horizons for infrastructure improvement.
What is the biggest mistake enterprises make when evaluating quantum adjacent markets?
The biggest mistake is starting with a vendor demo instead of a mission problem. The second mistake is ignoring integration, maintenance, and fallback behavior. A promising physics result is not the same as an enterprise-ready capability.
Are these markets mature enough for production deployment?
Some use cases are, especially in constrained or high-value environments. Many deployments will still begin as pilots, research partnerships, or limited production systems. The maturity question should be answered by use case, not by the general label “quantum.”
How should a security team begin evaluating QKD?
Start with a threat model and a narrow set of high-value links. Then examine key rates, distance constraints, authentication methods, operational support, and fallback architecture. Finally, compare QKD against post-quantum cryptography and existing secure transport controls.
Related Reading
- Qubit Basics for Developers: The Quantum State Model Explained Without the Jargon - A practical foundation for readers new to quantum information.
- Classical Opportunities from Noisy Quantum Circuits: When Simulation Beats Hardware - A reality check on when classical tools still win.
- From One-Off Pilots to an AI Operating Model: A Practical 4-step Framework - Useful for turning experiments into repeatable programs.
- The Technical KPIs Hosting Providers Should Put in Front of Due-Diligence Teams - A strong template for evaluating technical vendors.
- Leveraging AI for Code Quality: A Guide for Small Business Developers - Practical governance and workflow lessons for technical teams.
Daniel Mercer
Senior SEO Content Strategist
Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.