Unveiled at CES, the Plugable Thunderbolt 5 AI Enclosure (TBT5-AI) is built for organizations that demand privacy, control, and performance for local inference.

Designed to bring secure, desk-side AI to modern Windows laptops, the TBT5-AI enables advanced AI workloads to run inside your environment, free of cloud dependencies and the risk of data exposure.

Keep Sensitive Data Where It Belongs

As generative AI reshapes industries like healthcare, finance, and enterprise IT, protecting proprietary and regulated data has become critical. The TBT5-AI allows teams to run large language models and analyze private datasets directly at the desktop, ensuring that not a single byte of data leaves the corporate firewall. No uploads, and no loss of ownership.

Intelligence You Own

The TBT5-AI is a complete local AI platform engineered for enterprise use, compliance-sensitive workflows, and always-on operation.

“The TBT5-AI is about ownership. We are providing a fully integrated AI edge stack for teams that demand privacy and performance. It’s not just a GPU in a box; it’s Intelligence You Own.”
— Bernie Thompson, CTO, Plugable

Powered by Open Standards

The TBT5-AI is built on industry-leading open standards, giving organizations flexibility, transparency, and long-term control over their AI stack.

Plugable Chat (Open Source)

TBT5-AI includes Plugable’s open-source software layer (Apache 2.0 licensed), designed for secure “chat with your data” workflows with transparent, auditable data flows.

Microsoft Foundry Local

Manage and deploy models such as OpenAI gpt-oss (20B), Microsoft Phi-4, Mistral, and Qwen entirely offline, using a familiar control plane.

To see the full list of models available in Foundry Local, visit https://www.foundrylocal.ai/models
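
To give a concrete (and deliberately hedged) picture of what "entirely offline" looks like in practice, the sketch below queries a model that Foundry Local is already serving on the workstation. Foundry Local exposes an OpenAI-compatible local endpoint, but the port, model alias, and prompt here are illustrative assumptions, not values documented by Plugable or Microsoft.

    # Minimal sketch: querying a model that Foundry Local is serving on this
    # machine. The port (5273), model alias ("phi-4"), and prompt are
    # illustrative placeholders -- check your local Foundry Local
    # configuration for the actual values. Nothing in this flow leaves
    # the workstation.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:5273/v1",   # local endpoint, not a cloud API
        api_key="not-needed-locally",          # placeholder; no cloud key is used
    )

    response = client.chat.completions.create(
        model="phi-4",  # hypothetical alias; see https://www.foundrylocal.ai/models
        messages=[{"role": "user", "content": "Summarize last quarter's incident reports."}],
    )
    print(response.choices[0].message.content)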

Model Context Protocol (MCP)

Secure, read-only access to local SQL databases and file systems, enabling AI to answer complex business questions using private, contextual data. 
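
As a rough illustration of what read-only database access over MCP can look like, the sketch below uses the open-source MCP Python SDK to expose a single SELECT-only query tool over a local SQLite file. The server name, database path, and guardrails shown are assumptions for illustration, not a description of the software shipped with the TBT5-AI.

    # Minimal MCP server sketch: read-only SQL access to a local database.
    # Assumes the open-source MCP Python SDK ("mcp" package); the server name
    # and database path ("company.db") are hypothetical placeholders.
    import sqlite3

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("local-sql-readonly")

    @mcp.tool()
    def run_query(sql: str) -> list[dict]:
        """Run a SELECT statement against the local business database."""
        if not sql.lstrip().lower().startswith("select"):
            raise ValueError("Only SELECT statements are allowed.")
        # mode=ro opens the file read-only, so the model can query but never modify data.
        conn = sqlite3.connect("file:company.db?mode=ro", uri=True)
        try:
            conn.row_factory = sqlite3.Row
            return [dict(row) for row in conn.execute(sql).fetchall()]
        finally:
            conn.close()

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio to a local MCP client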

Built for Real-Time AI Performance

At the core of the TBT5-AI is next-generation connectivity and compute, designed to deliver real-time performance for demanding AI workloads.

Thunderbolt 5 Connectivity

80 Gbps of bidirectional throughput (up to 120 Gbps with Bandwidth Boost) for low-latency inference and responsive model interaction.

Configurable GPU Options

Scale performance to your needs with flexible GPU configurations, from lightweight workloads to large-model inference.
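
For a rough sense of why GPU memory is the deciding factor when choosing a configuration, the back-of-the-envelope sketch below estimates VRAM needs from parameter count and weight precision. The 20 percent overhead factor is an assumption for illustration; real requirements depend on context length, quantization scheme, and runtime.

    # Back-of-the-envelope VRAM estimate: parameters x bytes per parameter,
    # plus ~20% assumed overhead for activations and KV cache. Illustrative only.
    def estimated_vram_gb(params_billion: float, bytes_per_param: float,
                          overhead: float = 1.2) -> float:
        return params_billion * bytes_per_param * overhead

    for name, params in [("20B model", 20), ("70B model", 70)]:
        fp16 = estimated_vram_gb(params, 2.0)   # 16-bit weights
        int4 = estimated_vram_gb(params, 0.5)   # 4-bit quantized weights
        print(f"{name}: ~{fp16:.0f} GB at FP16, ~{int4:.0f} GB at 4-bit")

Under those rough numbers, a 4-bit quantized 70B model fits within a 96GB-class GPU configuration, while smaller quantized models run comfortably on far more modest cards.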

Enterprise-Grade Design

An 850W ATX 3.0 power supply supports continuous, 24/7 operation in demanding environments.

Key Specifications

  • Interface: Thunderbolt 5 (up to 120 Gbps)
  • GPU Support: Configurable / customer-selected GPU options
  • VRAM: Supports high-memory GPU configurations (up to 96GB VRAM depending on GPU)
  • AI Stack: Microsoft Foundry Local + Model Context Protocol (MCP)
  • Power: 850W ATX 3.0

Where can I sign up?

Learn more or sign up below to be notified when the TBT5-AI launches.

If the form doesn’t load, open it here.


