What is Yotta Data Services working on, and what new projects are coming up?
We operate across the digital value stack and contribute at three layers.
At the foundational layer, I own and operate data centre campuses. In Mumbai, I have a 100-acre campus equipped with almost 2 gigawatts of power. It is rare to find a single data centre campus that can scale up to 2 gigawatts. Our other large campus, in Greater Noida in the NCR, is a 20-acre campus with 250 megawatts of power.
What we have done is bring significant data centre capacity into the country so that any amount of digital adoption can happen, because every government application and business application needs to go to a data centre. Hyperscalers will also come.
AI has now become the major driver. When AI scales from both user adoption and sovereign model-building perspectives, you will require thousands of GPUs, and these GPUs need to be hosted in data centres. India needs to be prepared with a very large data centre capacity, which is what I have been building for the past seven years.
There are global customers, government customers, and enterprise customers consuming our co-location services.
The second layer is where our uniqueness comes in. We have created India’s sovereign cloud service. Normally, when you talk about cloud, names like Amazon, Azure, and Google come up. But seven years ago, considering geopolitical realities, it was clear that for critical workloads, localisation alone is not enough; control over compute infrastructure must also be sovereign.
With that in mind, we built our own sovereign cloud using open-source technologies. We engaged a startup to build the cloud, later acquired it, and today we have an engineering team of more than 250 people building our cloud capacity.
The Government of India engaged us to build our cloud in government data centres. Today, in NIC and STPI data centres, we operate the facilities and run critical government applications. At our own investment, we have deployed our cloud in those data centres, and government workloads are gradually migrating.
Recently, India’s largest government AI platform, Bhashini, migrated from a hyperscale cloud to our platform. This validated that India can run a sovereign stack end-to-end — models, applications, datasets, compute, and data centres.
The third layer of our business started scaling three years ago when ChatGPT was announced globally. India has software talent, a startup ecosystem, and a large digital market, but what was missing was GPU compute. Without a large GPU infrastructure, you remain only a consumer of technology.
We invested heavily in GPU infrastructure. With support from NVIDIA, we placed a large order of 8,000 GPUs, a USD 500 million investment. Today, more than 75% of India’s GPU capacity is owned and operated by Yotta. Many major Indian AI models have been trained and run on our GPUs.
Our tagline is “India’s AI runs on Yotta,” because most of the ecosystem built in the last two years relies on our infrastructure.
Going forward, AI adoption across sectors like agriculture, healthcare, education, climate, banking, and entertainment will drive massive demand. Currently, we have 10,000 GPUs. In the next two months, another 10,000 B200 GPUs will go live. By August, 21,000 B300 liquid-cooled GPUs will go live in our Greater Noida campus. This will multiply India’s computing capacity by roughly 25–30 times.
With the new USD 2 billion Blackwell cluster investment positioning Yotta as an AI factory, how do you balance global demand from NVIDIA’s DGX Cloud with the sovereign needs of the India AI Mission and domestic startups?
India should have the capability to create AI across the stack for its own needs and also become an exporter of AI capabilities. I am trying to achieve both simultaneously.
With large data centre capacity and experience in building GPU clusters, along with NVIDIA’s support, we can build AI in India for India and for the world.
For the India AI Mission, our GPUs are allocated to startups, IITs, and the ecosystem. At the same time, global demand from NVIDIA and other AI companies also requires GPU cloud capacity. NVIDIA builds chips but needs partners to run the DGX Cloud infrastructure.
We are among the few partners globally following NVIDIA’s reference architecture, which is a premium position. This means critical workloads from companies like Meta, OpenAI, and Anthropic can be hosted with us, especially when they want infrastructure in India.
When I announced 21,000 GPUs, 10,000 were allocated to DGX Cloud and another 10,000 were committed to the Government of India. My total commitment to the government is 17,000 GPUs, and I am adding another 40,000 within the financial year.
Because we build and operate our own data centres, capacity is ready — for example, an 80 MW building in Mumbai is ready for new chips.
If we meet domestic and global demand together, we fulfil the vision of serving India and the world from India.
What is the biggest last-mile hurdle for Indian enterprises shifting from leasing GPUs to deploying sovereign models and achieving measurable ROI?
The starting point for enterprise AI is understanding the existing IT environment and datasets. Most organisations have siloed data across departments. Once you have access and clarity, you should start with specific business problems rather than broad AI ambitions.
Choose a use case, identify the right model and data, and deliver quick wins. This avoids heavy upfront investments and demonstrates value to stakeholders by reducing cost, improving efficiency, or creating new revenue streams.
AI also requires cultural change, so a step-by-step approach helps manage resistance and build confidence.
The second issue is data security. Enterprises fear putting raw data into external systems, yet building in-house GPU infrastructure is too complex and expensive. The balanced approach is using a sovereign cloud operating within national laws, where infrastructure is transparent and accessible.
With a usage-based GPU model, enterprises can experiment without large upfront investments and scale when needed.
Through Shakti Studio, we provide a marketplace of open-source models — Llama, Mistral, DeepSeek, and NVIDIA Nemotron — available via APIs. Enterprises can integrate them into applications and pay per token, democratising access to AI.
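To illustrate the pay-per-token integration model described above, here is a minimal Python sketch of calling a hosted open-source model through an OpenAI-style chat API. The endpoint URL, model name, and per-token rate are illustrative assumptions for this sketch, not published Shakti Studio values.

```python
import json
import urllib.request

# Placeholder endpoint and key -- substitute the provider's real values.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

def build_request(prompt: str, model: str = "llama-3-8b-instruct") -> dict:
    """Assemble a chat-completion payload for a hosted open-source model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  rate_per_1k: float = 0.0005) -> float:
    """Pay-per-token billing: cost scales with total tokens consumed."""
    return (prompt_tokens + completion_tokens) / 1000 * rate_per_1k

if __name__ == "__main__":
    payload = build_request("Summarise this invoice in one line.")
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)  # uncomment with a real endpoint and key
    print(f"Estimated cost: ${estimate_cost(120, 256):.6f}")
```

Because billing is per token, an enterprise pays only for what it consumes, which is the consumption-model point made in the interview.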
Enterprises should focus on business problems, assess their environment, work with solution providers, and use GPUs on a consumption model to accelerate adoption.
As you pivot toward an AI-as-a-service model and consider an IPO around 2027, how is this shift being received by investors?
I would clarify that while we have strong real estate capabilities, our model has always been a three-stack model. Many data centre operators focus only on real estate, but from day one, we built a sovereign cloud and later GPU infrastructure.
Even before scaling AI, 75% of our revenue came from cloud, AI, and managed services, and only 25% from co-location. Our real estate is the foundation, but we position ourselves as a sovereign cloud, managed services, and GPU operator.
Because the business is capital-intensive, we explored raising funds in the US initially. However, given our sovereign positioning and government encouragement, we decided to pursue India first, where valuations and investor response have been strong.
The India business is being consolidated into one company for IPO, which will remain a subsidiary of a US holding company, keeping the US option open later.
In the last four months, we have raised significant pre-IPO capital from family offices and high-net-worth individuals and are in the process of filing the prospectus.
We plan to raise about USD 1–1.2 billion in total, roughly USD 500–600 million through private placement and another USD 500–600 million through the public offering.