AI data centres 2026: India shifts to GPU-ready builds
AI workloads move from pilots to production
India’s data centre buying model is shifting as AI workloads move into production environments. Enterprises that once focused on space, basic power availability, and standard service levels are now screening providers for AI-led compute density and sustained performance. Operators say the selection process increasingly starts with GPU readiness, high-density racks, and the ability to run power-intensive workloads over long periods without throttling. This is changing how capacity is designed, priced, and contracted in the market. It is also drawing greater scrutiny to risk areas that were previously secondary, such as regulatory exposure and data sovereignty. Together, these factors are pushing the sector away from traditional colocation toward AI-specific infrastructure services. The result is a reordering of where margins are captured and how enterprise relationships are shared across the stack.
Compute density becomes the first filter
Across operators, AI workload density has emerged as a primary filter in enterprise buying decisions. Rack densities that previously sat in the 5–10 kW range are now being evaluated at much higher levels because of GPU-based training and inference. Operators say air-cooled designs built around older enterprise loads are no longer adequate in many new AI deployments. As density rises, buyers are also asking more detailed questions about sustained performance under continuous load, not just peak specifications. The change is practical: GPU-heavy environments create different operational constraints, including heat, power delivery, and component failure risks at scale. This moves evaluation away from “available space” to “usable compute at target performance.” It also increases the value of operators that can standardise high-density deployments rather than treat them as custom one-offs.
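The jump from enterprise-era to GPU-era rack densities can be illustrated with a back-of-envelope calculation. All figures below are illustrative assumptions for a hypothetical hall, not operator data; the 8 kW legacy figure sits inside the 5–10 kW range cited above, while the 80 kW AI figure and the PUE value are placeholders:

```python
# Back-of-envelope: how rack density changes hall-level power needs.
# All numbers are illustrative assumptions, not figures from any operator.

def hall_it_load_kw(racks: int, kw_per_rack: float) -> float:
    """Total IT load for a hall of identical racks, before cooling overhead."""
    return racks * kw_per_rack

def facility_draw_kw(it_load_kw: float, pue: float = 1.5) -> float:
    """Facility-level draw including cooling and overheads, scaled by PUE
    (Power Usage Effectiveness); 1.5 is an assumed, not measured, value."""
    return it_load_kw * pue

RACKS = 100
legacy = facility_draw_kw(hall_it_load_kw(RACKS, 8))   # mid-range enterprise rack
ai = facility_draw_kw(hall_it_load_kw(RACKS, 80))      # hypothetical GPU rack

print(f"Legacy hall: {legacy:.0f} kW, AI hall: {ai:.0f} kW, ratio {ai/legacy:.0f}x")
# Same floor space, an order of magnitude more power to deliver and reject as heat.
```

Under these assumptions the same 100-rack hall moves from roughly 1.2 MW to 12 MW of facility draw, which is why evaluation shifts from "available space" to "usable compute at target performance."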
Cooling architecture becomes the immediate design change
Cooling architecture is emerging as the most immediate change in how facilities are built and assessed. Operators say the industry is moving beyond designs optimised for 5–10 kW enterprise racks, because AI deployments are driving sharply higher densities. Buyers are increasingly comparing cooling systems and site-level engineering as core differentiators, not optional upgrades. AI-ready cooling systems are also being positioned as part of a broader services layer, along with GPU deployment and infrastructure designed for sustained, power-intensive workloads. This matters because cooling decisions influence how much of the theoretical power capacity can be converted into stable, sellable compute. It also shapes the speed at which operators can deliver new AI halls or expand existing space. In practice, cooling is becoming one of the first technical due-diligence checkpoints during enterprise procurement.
Sovereignty and regulatory exposure shape final decisions
Alongside performance, data sovereignty is increasingly shaping final procurement decisions, especially for regulated sectors. Operators point to rising scrutiny around where data sits, how workloads are operated, and the exposure created by cross-border dependencies. Budget 2026 has also given fresh impetus to data sovereignty and proposed Data Center Economic Zones, with discussion of tax holidays and incentives tied to localisation. This brings compliance considerations directly into infrastructure design and contracting. Sovereignty requirements can affect site selection, operational controls, and even which partners are allowed into the delivery chain. For buyers, this creates a dual test: technical fit for AI and governance fit for regulation. For operators, it raises the premium on “sovereign” positioning and audit-ready operations.
Operators move up the stack from colocation to AI services
Operators say AI infrastructure services are becoming the fastest-growing revenue layer across the data centre stack. This includes high-density GPU deployments, AI-ready cooling systems, and infrastructure built to support sustained, power-intensive workloads. The shift is changing the traditional colocation model where revenue is closely tied to space and power. As services move up the stack, capacity is increasingly sold as outcomes aligned to AI workloads rather than as generic racks. This can reshape how long-term enterprise relationships are structured, because the operator’s role extends into how workloads are integrated and maintained. It also increases the importance of operational maturity, since AI production workloads are less tolerant of performance drift. Operators broadly agree that as they move toward platforms and AI infrastructure services, no single player “owns” the full enterprise relationship.
Channel partners are pulled into integration and operations
As AI workloads move into production, operators say the most valuable partner skill over the next two to three years will be integration and optimisation, rather than simple provisioning. Partners are increasingly expected to understand how GPU-intensive workloads perform at scale and how AI environments are designed and operated to deliver measurable outcomes. This shifts partner opportunity from hardware supply and initial setup toward ongoing operations and performance optimisation. It also changes how enterprises manage vendor responsibilities, because AI environments can span data centre infrastructure, cloud functions, edge processing, and workload orchestration. In this setup, partners become critical for long-term reliability and tuning. Operators suggest that partner capabilities will influence buying decisions, especially when enterprises need a single accountable layer for operating AI workloads over time.
Telecom capex pivots toward compute and data-centric infrastructure
Telecom operators are expected to invest over ₹1 lakh crore in the next two to three years to build AI-ready data centres, edge infrastructure, and cloud functions. Deloitte India’s Aditya Khaitan said 20–30% of FY27 investment budgets could shift toward these areas, with the extent varying across telecom providers and likely higher for the top two telcos that have moved beyond the peak 5G capex cycle. Enterprise revenue currently represents 15–30% of total revenue for these companies, and Khaitan said the B2B contribution could rise to 30–40% as services expand beyond connectivity into cybersecurity, cloud computing, IoT platforms, and AI-powered solutions. Alvarez & Marsal India’s Shilpa Malaiya Singhai said two major operators’ gigawatt-scale, AI-focused data centre investments could add 4–5 GW to India’s digital infrastructure. The examples cited include a Bharti Airtel–Google partnership for about a 1 GW AI hub in Visakhapatnam and Jio’s planned roughly 3 GW campus in Jamnagar. Separately, Bharti Airtel announced in late 2024 that it would invest about ₹5,000 crore to expand Nxtra capacity to 400 MW from 240 MW.
Reliance, Adani, hyperscalers raise the investment tempo
Mukesh Ambani unveiled Reliance’s ₹10 trillion (₹10 lakh crore) plan to build AI computing infrastructure in India over the next seven years, describing it as a push for technological self-reliance. Ambani said Reliance is building a massive data centre campus in Jamnagar, aiming for 3 GW of total capacity, with the first 120 MW expected to come online in the second half of 2026. He also said the biggest bottleneck for AI in India is limited compute infrastructure, which can push up consumer costs, and that Reliance wants to reduce the cost of AI services as it once reduced mobile data prices. Reliance said the build-out would be supported by 10 GW of surplus solar power from projects in Gujarat and Andhra Pradesh. The broader investment cycle includes Adani Group outlining plans of about $100 billion to build AI data centres, and the Indian government expecting more than $100 billion in AI infrastructure spending over the next two years. Global commitments cited include Google’s $15 billion pledge for a gigawatt-scale AI hub in Visakhapatnam and Microsoft’s $17.5 billion plan for facilities in Hyderabad, Chennai, Mumbai, and Pune.
Capacity projections heighten the power constraint
India’s data centre capacity is cited at around 1.3–1.7 GW as of late 2025, with projections of 8–9 GW by 2030 driven by AI, cloud growth, and 5G rollout. A separate projection says capacity could reach 10 GW by 2035, requiring a $14.5 billion investment in power infrastructure. The power intensity is a core issue: one AI rack is described as consuming 15 times more electricity than a traditional server. This shifts investor attention to the broader “data centre power ecosystem,” spanning generation, transmission, transformers, cables, renewables, and battery storage. The companies mentioned across these layers include Tata Power, Adani Power, NHPC, Power Grid, Adani Energy Solutions, Voltamp, TRIL, GE T&D, Polycab, KEI, RR Kabel, KPI Green, Waaree, Vikram Solar, Exide, and Amara Raja. The linkage is straightforward: higher-density AI halls turn electrical capacity and grid readiness into a gating factor for data centre growth.
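The capacity figures above imply a steep growth rate. A quick compound-annual-growth calculation using the article's own numbers (~1.3–1.7 GW in late 2025 to 8–9 GW by 2030) gives a sense of the build-out pace; the CAGR arithmetic itself is illustrative, not from the source:

```python
# Implied compound annual growth rate (CAGR) from the cited capacity figures:
# ~1.3-1.7 GW in late 2025 growing to 8-9 GW by 2030 (five years).
# The GW figures are from the article; the CAGR framing is illustrative.

def cagr(start: float, end: float, years: int) -> float:
    """Annualised growth rate needed to go from `start` to `end` in `years`."""
    return (end / start) ** (1 / years) - 1

low = cagr(1.7, 8.0, 5)   # conservative case: high base, low target
high = cagr(1.3, 9.0, 5)  # aggressive case: low base, high target

print(f"Implied capacity CAGR: {low:.0%} to {high:.0%}")
```

Either way the implied growth is well above 30% a year, which is why power generation, transmission, and grid readiness, rather than land or shell construction, become the gating factors.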
Market impact: what changes for investors and operators
The operational definition of a “good” data centre asset is changing from basic colocation readiness to AI-led density and cooling capability. This can change how capacity is monetised, since operators increasingly sell AI-specific infrastructure services rather than undifferentiated space and power. It also shifts value toward power availability and delivery systems, because AI racks materially increase electricity draw and can expose constraints in grid connection, onsite distribution, and cooling. Sovereignty requirements and regulatory exposure add another layer that can influence which campuses win regulated workloads. For telecom operators, the cited capex reallocation to AI-ready data centres, edge infrastructure, and cloud functions links directly to the goal of lifting enterprise revenue contribution from 15–30% toward 30–40% by FY27. The investment pipeline highlighted by Reliance, Airtel, Adani, and hyperscalers points to a longer cycle of execution milestones, including the 120 MW expected to come online in the second half of 2026 at Jamnagar.
Analysis: why the buying model is being rewritten
The story is not just “more data centres.” AI moves the market from capacity as real estate to capacity as engineered compute, where sustained performance, cooling architecture, and power delivery determine how much revenue a facility can realistically support. That is why operators describe AI infrastructure services as the fastest-growing layer: it is where complexity is rising and where enterprises need assurance beyond basic uptime. At the same time, sovereignty and regulatory exposure are becoming procurement constraints, especially for regulated sectors, turning compliance posture into a competitive variable. Modular and plug-and-play approaches are also gaining traction because they reduce timelines and let enterprises scale in phases, addressing concerns around overcapacity as AI demand evolves. Finally, the partner ecosystem shifts because integration and long-term operation become critical, and enterprise ownership disperses across operators, telcos, hyperscalers, and service partners.
Conclusion
AI workloads in production are forcing India’s data centre market to prioritise GPU-ready density, cooling design, and sovereignty compliance over traditional colocation checklists. The investment wave spans telcos, conglomerates, and hyperscalers, while power availability is emerging as a hard constraint, reinforced by the 15x electricity draw cited for AI racks. Near-term milestones to track include capacity coming online in the second half of 2026 at Jamnagar and the broader capex shift indicated for FY27 budgets toward AI-ready data centres, edge infrastructure, and cloud functions.