About The Lustre Collective

Founded by some of the original Lustre team members from Cluster File Systems, Inc., we have been the driving force behind every major Lustre release since Lustre 1.0 in 2003, ensuring the file system remains fully open, community-owned, and the undisputed performance leader for exascale AI & HPC.

Our Heritage – From Lustre 1.0 to Today

Members of our team have been involved since the inception of Lustre at Cluster File Systems, Inc. (2001–2007), shipped Lustre 1.0 in 2003, and have led every major community release for the past 22 years.

We delivered the code for Lustre 1.0 and 2.x, ZFS integration, HSM, DNE, PFL, compression, and multi-tenancy, along with the exascale features that power Frontier, Aurora, El Capitan, LUMI, and every other Top 10 system running Lustre today.

Community Leadership & Open-Source Stewardship

  • Corporate member of OpenSFS
  • Primary responders on Lustre community mailing lists and community JIRA issues
  • Technical steering committee members and working-group chairs
  • Keynote speakers and organizers at every major LUG and Lustre Community Conference

Why The Lustre Collective Exists

To guarantee Lustre remains forever open and GPL-licensed, no matter who owns the trademarks or what proprietary vendors attempt. We fund aggressive open-source development through paid services: performance tuning, custom features, exascale deployments, and long-term support. It is exactly the model that has kept Lustre #1 for 22 straight years.

Lustre’s Unmatched Track Record – The Systems That Matter Most

2003–2004

Lustre 1.0 ships and immediately powers 4 of the Top 5 fastest supercomputers on earth (MCR, Thunder, the ASCI Purple cluster, etc.), establishing Lustre as the de facto standard overnight.

2022–2025

Frontier (ORNL, the world’s #1 supercomputer) runs the largest operational Lustre file system on earth (>700 PB, >10 TB/s). Nearly every current exascale machine — Frontier, Aurora, El Capitan — runs Lustre.

2024–2025

xAI’s Colossus, the world’s largest single AI training cluster (200k+ NVIDIA H100/H200 GPUs), chose a Lustre-based solution as its primary storage. When Elon Musk needed to train Grok at planetary scale, he picked Lustre.

Ongoing

NVIDIA’s official DGX SuperPOD reference architectures and DGX Cloud deployments all recommend or require Lustre-based storage. Every major cloud vendor offers fully managed Lustre deployments: AWS FSx for Lustre, Google Cloud with DDN, and Azure partnerships.

Lustre has run at least 60% of the Top 100 supercomputers for the past 15 years.
No other filesystem even comes close.

Our team helped invent Lustre in 2001 and was directly responsible for shipping every major release since 2003.

The Lustre Collective will keep Lustre open, free, and the fastest parallel file system on earth.

When you partner with The Lustre Collective, you work directly with the people who own Lustre’s past, present, and future.

Ready to run the same storage that powers the world’s #1 supercomputer and the largest AI training cluster on earth?

Talk to a Lustre Expert Today