
The trend of "more for less" continues among the major LLM providers as the race to stay ahead on features becomes a near-daily sprint. OpenAI has unveiled GPT-4.1, the latest generation in its model lineup, with three variants aimed at different business needs, each offering stronger processing capabilities and lower pricing than its predecessor.
Meet the new models: The flagship GPT-4.1 is built for coding and complex, high-volume tasks that require deep contextual understanding. GPT-4.1 Mini balances performance and cost for mid-range business applications, while GPT-4.1 Nano is optimized for lightweight, real-time tasks on edge devices or wherever keeping costs down is a major consideration.
More competitive pricing: In a bid to fend off growing competition from rivals, OpenAI has also cut prices with the new models, roughly a 26% reduction compared to GPT-4o (a quick cost sketch follows the list):
GPT-4.1: $2.00 per million input tokens and $8.00 per million output tokens.
GPT-4.1 Mini: $0.40 per million input tokens and $1.60 per million output tokens.
GPT-4.1 Nano: $0.10 per million input tokens and $0.40 per million output tokens.
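For teams sizing budgets, this is simple per-token arithmetic. Below is a minimal Python sketch that turns the list prices above into a per-request cost estimate; the model identifiers and the 50,000-token example workload are illustrative assumptions, not figures from OpenAI.

```python
# Back-of-the-envelope cost comparison using the list prices above (USD per million tokens).
# Model names and example token counts are illustrative assumptions.

PRICING = {
    "gpt-4.1":      {"input": 2.00, "output": 8.00},
    "gpt-4.1-mini": {"input": 0.40, "output": 1.60},
    "gpt-4.1-nano": {"input": 0.10, "output": 0.40},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    rates = PRICING[model]
    return (input_tokens / 1_000_000) * rates["input"] + \
           (output_tokens / 1_000_000) * rates["output"]

# Example: a 50,000-token document summarized into 2,000 tokens of output.
for model in PRICING:
    print(f"{model}: ${estimate_cost(model, 50_000, 2_000):.4f}")
```

Run on that example workload, the same request comes out to roughly $0.12 on GPT-4.1, $0.02 on Mini, and under a cent on Nano, which is the trade-off buyers will be weighing against output quality.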
Powering up on processing: GPT-4.1 features an expanded context window, capable of handling up to one million tokens, up from the 128,000-token limit of its predecessor, GPT-4o. This lets the model process and analyze extensive documents or datasets in a single request, a capability that will appeal to businesses dealing with large-scale data analysis and complex document processing.
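To illustrate what the larger window means in practice, here is a minimal sketch that sends an entire document to the model in one request via OpenAI's Python SDK. The file path, prompt, "gpt-4.1" model identifier, and the use of the o200k_base tokenizer as a rough proxy for counting tokens are assumptions for illustration, not details from OpenAI's announcement.

```python
# Sketch: summarizing a large document in a single request against the 1M-token window.
# Assumes OPENAI_API_KEY is set and that o200k_base approximates the model's tokenizer.
import tiktoken
from openai import OpenAI

client = OpenAI()

with open("annual_report.txt", "r", encoding="utf-8") as f:  # hypothetical input file
    document = f.read()

# Rough pre-flight check against the advertised one-million-token context window.
encoding = tiktoken.get_encoding("o200k_base")
token_count = len(encoding.encode(document))
if token_count > 1_000_000:
    raise ValueError(f"Document is ~{token_count} tokens; it exceeds the 1M-token window.")

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a careful analyst."},
        {"role": "user", "content": f"Summarize the key risks in this document:\n\n{document}"},
    ],
)
print(response.choices[0].message.content)
```

The practical difference is that workloads which previously required chunking a document and stitching partial answers together can now be handled in one pass, at the cost of paying for all of those input tokens at once.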
In terms of coding proficiency, OpenAI says GPT-4.1 demonstrates notable improvements over earlier models, scoring 54.6% on the SWE-bench Verified software engineering benchmark and surpassing previous versions.