Great analysis and straight to the point, love it!
Great take! My second biggest position 🙌
interesting, how did you find out about IREN initially?
I discovered IREN when Jan Beckers made it a top holding at BIT Capital. Personally, I see it as one of the cheapest AI plays out there :)
Are you from Germany? I discovered it when it became part of their crypto portfolio in 2023
Yes, living in Hamburg! I was late and did not start looking into it before the beginning of 2025
Thank you - this was an amazing read!
thank you so much for your feedback!
What are your thoughts on the company’s announced ATM just after they said they were fully funded?
I believe it's to build their 75 MW data center and serves as a safety net rather than capital that is desperately needed. So, fine with me.
“Our capital strategy for AI opportunities prioritizes customer prepayments and a range of debt financing solutions, with the at-the-market facility serving as a strategic backstop to enhance execution certainty and support ongoing
commercial engagement across our AI platform.”
What about the fact that isolated remote sites, far away from cities, are less suitable for inference or AI/HPC? These remote sites in Texas are more suited to training than inference. I struggle to see why it's taken IREN so long to land a deal. The Sweetwater site has been there since the start of 2024 and no hyperscaler has taken a bite.
My take on this:
1. Time to land a deal
- Hyperscalers are slow to commit, especially post-2022. Due diligence, custom builds, and power commitments take months. Just because Sweetwater has been around since January 2024 doesn’t mean deals aren’t in motion.
- It’s often a 6–12 month ramp from build-ready to signed customer.
2. Data center demand for training (vs inference)
- GPT-4 wasn’t the end — OpenAI, Anthropic, Meta, Google, and xAI are all training successors with more parameters and training data.
- Training next-gen foundation models (GPT-5, Claude 3.5, Gemini Ultra 2, etc.) can cost hundreds of millions and take months of GPU time.
- Enterprises and governments are training custom models (on internal, proprietary, or local-language data).
- Even if the base model is trained, companies fine-tune models for specific use cases — customer service, legal, finance, etc. This also needs compute — though less than full pretraining.
- If you’re running AI to process huge datasets (e.g., summarizing 10,000 documents, generating a million images), latency doesn’t matter. That can be done overnight or in the background — Sweetwater-style remote sites are perfect.