OpenAI's New Model Runs Locally on RTX Cards but Requires High-End Hardware
AI/Software

A new open-weight reasoning model from OpenAI promises local execution on powerful GPUs, benefiting AI enthusiasts desiring autonomy.

OpenAI has unveiled its new open-weight reasoning models, gpt-oss-20b and gpt-oss-120b, which can run locally on powerful GPUs such as NVIDIA's RTX series. These models are designed for users who want high-performance AI on their own hardware without sending data to the cloud.

"Open weight" means users get access to the model's weights: the learned numerical parameters that determine how the AI interprets input data and forms responses.
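As a loose illustration of what those weights are (a toy single neuron, not OpenAI's actual architecture), the same input produces different outputs depending on the learned weight values, which is exactly what an open-weight release publishes:

```python
# Minimal illustration of "weights": learned numbers that shape a model's output.
# This is a toy single neuron, not OpenAI's actual architecture.

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus a bias -- the basic building block
    whose learned parameters ("weights") an open-weight release publishes."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# With different weights, the same input yields a different response.
same_input = [1.0, 2.0]
print(neuron_output(same_input, [0.5, -0.25], 0.1))  # one set of weights
print(neuron_output(same_input, [2.0, 1.0], 0.0))    # another set -> 4.0
```

A full model stacks billions of such parameters; having the weights lets you run and inspect the model yourself rather than calling a cloud API.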

Key Features:

  • Runs on NVIDIA RTX cards: the smaller gpt-oss-20b requires at least 16GB of GPU memory, while the larger gpt-oss-120b needs substantially more.
  • Capable of complex reasoning, breaking tasks down into manageable steps.
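As a rough back-of-envelope sketch of why 16GB is the floor (the 1.2x overhead factor and the 4-bit quantization figure below are illustrative assumptions, not OpenAI's published requirements), you can estimate whether a model's weights fit in a given amount of VRAM from its parameter count and precision:

```python
# Rough VRAM estimate: parameter count x bytes per parameter, plus runtime
# overhead. The 1.2x overhead and 4-bit quantization figures are illustrative
# assumptions, not official requirements.

def estimated_vram_gb(params_billion: float, bits_per_param: float,
                      overhead: float = 1.2) -> float:
    """Approximate GPU memory (in GB) needed to hold the model weights."""
    weight_bytes = params_billion * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead / 1e9

def fits_on_card(params_billion: float, bits_per_param: float,
                 vram_gb: float) -> bool:
    return estimated_vram_gb(params_billion, bits_per_param) <= vram_gb

# A ~20B-parameter model at 4-bit precision vs. a 16GB RTX card:
print(fits_on_card(20, 4, 16))    # True  (~12 GB estimated)
# A ~120B-parameter model at the same precision would not fit:
print(fits_on_card(120, 4, 16))   # False (~72 GB estimated)
```

Under these assumptions the 20b model squeezes onto a 16GB consumer card, while the 120b model calls for datacenter-class memory.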

OpenAI’s latest models are set to compete with other AI technologies, and AMD is also backing the launch, saying the models run across a range of platforms, including cloud, edge, and local systems.

“AMD is proud to be a Day 0 partner enabling these models to run everywhere - across cloud, edge and clients. The power of open models is clear… and this is a big step forward.” — Lisa Su

For more information on utilizing these new AI models, visit NVIDIA’s website.

