GPU Technology: GPU History and GPU Prices Over Time (2024)

Reading time: 12 minutes

If you need to sell graphics cards now, fill out a form for a free quote.

In this article, we’ll cover the rich history of the GPU and how graphics processing units have advanced over time, up to the recent era of rapid Nvidia data center growth. Specifically, we’ll cover the beginnings of the GPU, then the explosion of ATI (later AMD) and Nvidia GPU growth. We’ll go over Nvidia’s history with the Nvidia GPU timeline and the AMD GPU timeline. Then we’ll discuss the advent of the modern data center GPU with the recent Nvidia and AMD data center GPU offerings. Finally, we’ll go over GPU price history with a list of GPU prices over time.

Sell your graphics card fast and easily!

Find out what your graphics cards are worth

Early GPU History: 1949-1985

The “Geometry Processing Unit”

The very first electronics capable of processing code in real time to display graphics also happened to be what is likely the father of all modern computers: MIT’s Whirlwind, a flight simulator developed for the US Navy. It was the first computer that processed in parallel rather than in simple linear batches; the technical phrasing would be bit-parallel as opposed to bit-serial. While it was not finished until 1951, by 1949 the machine could be operated as the very first interactive graphical computer game.

The second system to process graphics digitally may well have been the flight simulator developed by Curtiss-Wright in 1954. 3D graphics and other GPU technology ahead of its time was in fact available as early as the 1960s, but it was highly secretive and exclusive to governments, university labs, aviation companies, and automotive companies.


Then, in 1980, James Clark at Stanford published what may be the first term that roughly equates to a graphics processing unit: a “VLSI geometry processor for graphics.” It ran at about 4 million floating point operations per second, or FLOPS, which is much more fun to say. That equated to 900 polygons every 30th of a second. This appears to be the first graphics chip capable of massive parallelism that performed the basic functions of modern GPUs, though it certainly wasn’t built with the same architecture, and it lacked a great number of capabilities.


GPU History: The Arcade Era (’70s to ’80s)

The first consumer applications of a graphics unit were in retro arcade games, and they had very limited capabilities. They essentially just carried graphics information from the processor to the display. In those years, GPU technology moved very slowly.

The graphics were very primitive, as you may remember from some of the original arcade games.


After those first arcade games, in 1976, RCA made its video chip, the “Pixie” (CDP1861), but it only supported monochrome output, and at a tiny 64×128 resolution, at that.

In 1979, the first graphical user interface was developed by Xerox at the Palo Alto Research Center as a large collaborative project. It had windows, icons, drop-down menus, and many other familiar features. Steve Jobs eventually toured their facilities.

Three years later, just before 1980, Namco’s Galaxian arcade board advanced graphics chips to support color, namely multi-colored sprites with tilemap backgrounds.

In 1981, IBM made what could be called the first video card with the IBM Monochrome Display Adapter.

https://youtu.be/qM2TV7RrwHY

Then, in 1983, Intel released their 8-color-capable card, the iSBX 275, which was innovative at the time, if fairly expensive at $1,000. For reference, in today’s dollars, that would be about $2,633.

Sony later coined the term “GPU” in reference to the graphics chip in the original PlayStation, released in 1994 (though the chip itself was designed by Toshiba).

Then, in 1985, what became a titan of GPU history, ATI, was founded by the Lau brothers and Kwok Yuen Ho, Hong Kong immigrants living in Canada. ATI was eventually bought out by AMD. More on that later.

In 1986, ATI released the first in their Wonder series of graphics cards. These cards were dominant at the time because they supported multiple monitor types and graphics standards in a single card, while competitors did not.


In 1987, IBM released the Video Graphics Array standard, or VGA. It became the dominant standard in graphics.


In 1989, to remedy the lack of standardization in the computer graphics industry, the Video Electronics Standards Association (VESA) was founded by ATI and seven other companies. Today more than 300 companies are members.

In 1991, S3 Graphics introduced their S3 911. It was named after the Porsche for some reason, and was fairly dominant. S3 truly became a leader in the space after the release of their Trio line, which led the pack for some time.

The development of graphics technology was greatly supported by the release of two notable APIs. Possibly the most ubiquitous API for graphics rendering, OpenGL, was released in June 1992. Many competitors came and went, but OpenGL remains the surviving victor today.

The other, Direct3D, was released in 1996, and it remains a standard in the industry today (though it’s reportedly fractions of a millisecond slower than OpenGL, for what that’s worth).


The S3 ViRGE chipset, launched in 1995, was actually the fastest DRAM-based accelerator of the era on Windows. OEMs purchased the ViRGE in large quantities for its value and 2D performance, but it was certainly not compelling for its 3D performance.

S3 later sold off its graphics division.

The 3dfx Voodoo add-on card, launched in 1996, was wildly popular and spurred on greater development of 3D technology, for gaming in particular.

The Voodoo line continued to be a dominant player in the market until Nvidia acquired 3dfx later on.


Possibly the first formal usage of the acronym GPU was by TriTech in 1996, with their Geometry Processor Unit.

It was one of many similar projects that never quite took off, though it did have interesting features like dedicated bump mapping hardware and displacement mapping capabilities.

Microsoft licensed it from TriTech a couple years later.

Modern GPU History – The AMD vs Nvidia Wars (’90s to Today)

Funnily enough, both Nvidia and ATI got off to a rough start in the ’90s. Nvidia’s NV1 was hindered by the arrival of DirectX 1.0 shortly after the card launched, since the NV1 wasn’t compatible with it.

ATI’s Rage also struggled due to DirectX 1.0 compatibility issues, though it was a good 2D performer.

While ATI had already seen some success years prior, Nvidia really first came to prominence with the release of the RIVA 128 in 1997, which sold a million units within only four months. Its popularity came in large part from the fact that it was fairly, dare I say, general purpose: it supported 2D, 3D, and video acceleration, and these weren’t placeholder functions either, as may have been the case with many other GPU makers’ offerings.

With that said, its success was hampered by its lack of driver support.


Nvidia’s reign truly began with the RIVA TNT2. 3dfx’s proprietary API, Glide, was losing ground to DirectX and OpenGL, which began 3dfx’s downfall. The GeForce cemented it.

In 1999, Nvidia made the term GPU widely used with “the world’s first GPU,” the GeForce 256. It wasn’t really the first GPU, of course; from the Clark era onward, “GPU” had continued to be used by academics to refer to geometry processing.

The value proposition of the GeForce 256 was based on its inclusion of transform and lighting (T&L) hardware on the graphics chip itself instead of relying on the CPU. As a result, for systems with a CPU fast enough to handle T&L satisfactorily, its value proposition was negligible. Its circuitry also drew fair criticism. That, in addition to the high price tag, meant that it wasn’t as popular as later cards, though it did have its niche with games like Quake.

It actually performed worse than the 3dfx Voodoo when both cards were paired with a fast CPU (not that this was a terribly common scenario, however).
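For readers curious what “transform” actually means here: the transform stage multiplies every vertex in a scene by a 4×4 matrix to project it onto the screen, and the GeForce 256’s selling point was doing that math on the card instead of the CPU. The sketch below expresses that math as a CUDA kernel purely for illustration; the GeForce 256 did this in fixed-function hardware (CUDA didn’t exist until 2007), and the names and layout here are hypothetical.

```cuda
// Illustration only: the GeForce 256 did transform & lighting in fixed-function
// hardware, not CUDA. This kernel just shows the math the transform stage
// offloads from the CPU: multiply every vertex by a 4x4 matrix, one thread each.
#include <cuda_runtime.h>

struct Vec4 { float x, y, z, w; };

__global__ void transformVertices(const Vec4* in, Vec4* out,
                                  const float* m /* 4x4, row-major */, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Vec4 v = in[i];
    out[i].x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
    out[i].y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
    out[i].z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
    out[i].w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
}
```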

Nonetheless, the DDR memory version stoked a lot of excitement, and Nvidia went on to crush 3dfx into bankruptcy and acquisition.

Jon Peddie provides a nice infographic of those first years in his seminal text, “The History of Visual Magic in Computers,” where he goes into the level of detail a book allows.


Nvidia Timeline/AMD Timeline: Their Reign Begins

Nvidia found itself in a very unique position at the turn of the millennium. While companies like 3Dlabs made multi-chip units designed for the workstation market, such as the GLINT, Nvidia continued to capitalize on the rapidly growing video game market, which had a much larger pool of buyers.

As a result, Nvidia found itself not only with the gaming market, but also positioned to dominate the workstation and enterprise market, as the gaming market propelled its earnings and left it with a massive R&D budget.

Some of the notable gaming platforms and titles that fueled this growth were the original Xbox, the PlayStation 3, and World of Warcraft.

Nvidia released its second-generation GeForce in 2000, which did very well despite its relatively slow 166 MHz DDR memory, as it was still the fastest card available until ATI released their Radeon 7200.

(As a side note, Nvidia released their first integrated graphics product in 2001 with nForce.)

The Radeon 7200 featured better memory speed, a new bandwidth optimization technology called HyperZ, and the most complete bump mapping technology to date. Its impressive capabilities were showcased with the following Ark demo:

https://youtu.be/xSQcpVa7paM

(Demos can be downloaded at Nvidia’s tech demo page.)

Nvidia answered with their GeForce 2 GTS, which offered roughly a 50% improvement and won the OpenGL gaming niche, and certainly 16-bit color in Direct3D. Its dominance was really only hindered by its poor memory bandwidth optimization.

Around this time, Nvidia began to capitalize on the workstation segment with the Quadro, which was essentially the GeForce 2 architecture with a greater emphasis on precision and reliability (through the use of ECC memory). By repackaging the GeForce with more bells and whistles and segmenting features between the cards only as needed, Nvidia could charge a premium for the Quadro and remain competitive on gaming card pricing, while preventing workstations from simply using the cheaper GeForce.

Although the Radeon addressed memory bandwidth issues with HyperZ, among other features, it still did not compare very favorably to the Voodoo5 5500 or the GeForce 2 GTS, though it did well enough in 32-bit color and still sold reasonably well.

Nvidia continued their lead with the GeForce 3:

https://youtu.be/4tTWW2BRQGo

As you can see, the new architecture massively improved the rendering process.

Then ATI answered in 2002 with their Radeon 9700 Pro. It supported 64-bit and 128-bit floating point color, DirectX 9, and AGP 8X, and had an impressive 0.15-micron chip process.

https://youtu.be/Xu-A0jqMPd8

Nvidia didn’t really have a competitive answer until 2004, with their GeForce 6800.

https://youtu.be/ntNBctHHPo4

In 2006, we entered the modern era of GPUs with the 8th generation of GeForce cards, the GeForce 8800. It was wildly popular, and we started to see rendering, mapping, shading, lighting, rigging, post-processing, and so on in the same realm of quality as the cards of the last decade. For example, it could play Bethesda’s Skyrim, which is still a popular game today.

Around the same time, Nvidia became the only independent graphics chip maker still in business after the acquisition of ATI by AMD.

Nvidia developed the Tesla architecture, which supported unified shaders and did away with the fixed-function pipeline microarchitectures. It was used all the way down to 40nm dies. This was extremely important for the transition to the general purpose GPU.

Instead of having a bunch of separate units, like vertex and pixel shaders, you had more universal stream processors. They were more efficient across a wide range of use cases, and since they were simple, clock speeds could be ramped up.

In 2007, Nvidia released CUDA, which allowed software developers and engineers to utilize the parallel processing capabilities of Nvidia GPUs for more general purpose operations.
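To make that concrete, here is a minimal sketch of the CUDA programming model: you write a kernel, and the hardware runs it across thousands of threads at once, each handling one element of the data. This is an illustrative example, not code from Nvidia’s documentation, and it uses managed memory (a later CUDA convenience) to keep the host code short.

```cuda
// Minimal sketch of the CUDA model: a data-parallel kernel launched across
// many threads, each handling one array element. Illustrative example only.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void addArrays(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // one element per thread
}

int main()
{
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified (managed) memory keeps the host-side code short; CUDA added it
    // years after the 2007 release discussed above.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover n
    addArrays<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```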

https://youtu.be/nXeq2_P_O50

The General Purpose GPU (Or, the Rise of the Data Center GPU)

Today, the GPU name is something of a misnomer left over from the past, as GPU technology has branched off massively from gaming over the last decade.

GPUs are now systems on chips, or SoCs as they’re commonly called. They have all the circuitry and functionality you might expect from a range of separate components, but as one unified system. Specifically, you have a processor with a great deal of parallel processing capability, alongside a neural net acceleration engine, digital signal processors to translate analog image and audio inputs, a rasterizer, and so on.

GPUs are used today for engineering app acceleration, physics modeling, rocketry, financial analysis and trading, medical imaging, clinical research, and machine learning, to name a few.


For example, in possibly the most consumer-facing application, the GPU is widely employed as an AI inferencing tool in phones and vehicles.


While it isn’t really AI, GPUs are still referred to as “artificial intelligence inferencing engines,” which is really just a fancy way to say that they draw “inferences,” or insights, from existing data. For example, if you use Google Photos or another cloud picture application, you may notice it identify other pictures with the same person in them and group them together. This is accomplished largely on GPUs, through “training” where Google might ask you, “is this the same person?”

The reason GPUs lend themselves to this task is that training a machine learning model like that requires a massive amount of raw quantitative processing ability, where terabytes of images must be scanned one by one. The massive scalability of stream processors lends itself to these sorts of tasks very well.
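As a toy sketch of that kind of data-parallel scan: suppose each photo has already been reduced to a fixed-length feature vector (an “embedding,” a common approach, though not necessarily what Google does). Scoring millions of stored photos against one query face is then an embarrassingly parallel job, one thread per photo. All names and sizes below are illustrative.

```cuda
// Toy sketch: score N stored image embeddings against one query embedding,
// one thread per image. Assumes images were already reduced to DIM-length
// feature vectors; names and sizes here are hypothetical.
#include <cuda_runtime.h>

#define DIM 128  // hypothetical embedding length

__global__ void faceMatchScores(const float* embeddings,  // N x DIM, row-major
                                const float* query,       // DIM values
                                float* scores,            // N outputs
                                int n)
{
    int img = blockIdx.x * blockDim.x + threadIdx.x;
    if (img >= n) return;

    float dist = 0.0f;
    for (int d = 0; d < DIM; ++d) {
        float diff = embeddings[img * DIM + d] - query[d];
        dist += diff * diff;                  // squared Euclidean distance
    }
    scores[img] = dist;                       // lower score = closer match
}
```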

Another example of recent GPU technology is 3D scanning of bodies, as with magnetic resonance imaging, or MRI, where GPU acceleration has driven major speedups in image reconstruction.

If you’ve been following the news closely, you may have seen the Folding@home phenomenon, where supercomputers and crowdsourced computing allowed researchers to better understand the protein mechanics of SARS-CoV-2, the virus behind COVID-19. exIT Technologies was actually one of the top contributors to that project’s processing power, and we accomplished that largely through the use of many GPUs running in parallel.


Where a project like that would have taken months in prior years, GPU technology has advanced enough that we can glean insights, like the molecular docking mechanisms of the SARS-CoV-2 spike protein, in days.

In a more universal sense, general purpose GPUs greatly accelerate processes to reduce the time it takes engineers, researchers, software developers, and others to solve problems, create new designs, and analyze massive sets of data. Hence the term application acceleration.

Not only that, but the answers we get are more precise and reliable; when you need to reduce data to find an answer quickly, you sacrifice a great deal of accuracy and precision.

In 2020, Nvidia took a leap ahead of every other company in the world in terms of raw quantitative processing power with its A100 processor and second-generation DGX systems.

The A100 fits 54 billion transistors into just a few hundred square millimeters of die area. All that raw computing power can be subdivided into seven separate GPU instances that operate independently, a capability called Multi-Instance GPU (MIG).
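From an application’s point of view, each MIG slice simply looks like a GPU of its own. A standard CUDA device query, sketched below, reports whichever devices (whole A100s or MIG instances) the process has been allowed to see; which slices those are is configured by the administrator outside the code.

```cuda
// Sketch: a MIG slice presents itself to software as a GPU in its own right.
// This standard CUDA device query lists whatever devices the process can see.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %zu MiB, %d SMs\n",
               i, prop.name, prop.totalGlobalMem >> 20,
               prop.multiProcessorCount);
    }
    return 0;
}
```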


The third-generation NVLink doubled the connectivity speed between processors, and Nvidia built sparsity support into the hardware itself.


This, along with the third-generation Tensor Cores, means that a single A100-based DGX server provides 5 petaFLOPS of output. The A100 offered 7x better inferencing and 6x better training performance than its predecessor; in fact, it provided the greatest generational leap of any Nvidia release.
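For reference, that 5 petaFLOPS figure lines up with the eight A100s in a DGX A100 system, each rated at roughly 624 TFLOPS of sparse FP16 tensor throughput: 8 × 624 TFLOPS ≈ 4,992 TFLOPS, or about 5 petaFLOPS.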


In one swift stroke, Nvidia enabled a single system to do what previously required a monster data center. A single DGX SuperPOD made of A100 servers competes with the fastest supercomputers in the world.

Where in the past it took months to years to complete a massive supercomputer project, it took Nvidia only weeks to build one of the fastest supercomputers in the world.

It will be interesting to see how the competition answers.

Nvidia GPU Timeline (Modern)

Note: we purchase all of the following GPUs. Get a free offer if you have spares you’d like to sell.

Tesla GPUs

GPU – Release Date
Tesla M60 – August 2015
Tesla P100 – June 2016
Tesla M10 – June 2016
Tesla P4 – September 2016
Tesla P40 – September 2016
Tesla P6 – March 2017
Titan Xp – April 2017
Tesla V100 – May 2017
Titan V – December 2017
Tesla T4 – September 2018
Tesla V100S – November 2019
Tesla A100 – May 2020

Quadro GPUs

GPU – Release Date
Quadro P5000 – October 2016
Quadro P6000 – October 2016
Quadro P2000 – February 2017
Quadro P1000 – February 2017
Quadro P620 – February 2017
Quadro P600 – February 2017
Quadro P400 – February 2017
Quadro P4000 – February 2017
Quadro GV100 – March 2018
Quadro RTX 8000 – August 2018
Quadro RTX 6000 – August 2018
Quadro RTX 5000 – August 2018
Quadro RTX 4000 – November 2018
Quadro P2200 – June 2019

GeForce RTX GPUs

GPU – Release Date
GeForce RTX 2080 Ti – September 2018
GeForce RTX 2080 – September 2018
GeForce RTX 2070 – October 2018
Titan RTX – December 2018
GeForce RTX 2060 – January 2019
GeForce GTX 1650 Ti – February 2019
GeForce GTX 1660 – March 2019
GeForce GTX 1660 Ti – April 2019
GeForce RTX 2080 Super – July 2019
GeForce RTX 2070 Super – July 2019
GeForce RTX 2060 Super – July 2019
GeForce GTX 1660 Super – October 2019
GeForce GTX 1650 Super – November 2019
GeForce GTX 1650 – June 2020

AMD GPU Timeline (Modern)

Note: we purchase all of the following GPUs. Get a free offer if you have spares you’d like to sell.

Radeon Instinct GPUs

GPU – Release Date
Radeon Instinct MI25 Accelerator – December 2016
Radeon Instinct MI8 Accelerator – December 2016
Radeon Instinct MI6 Accelerator – December 2016
Radeon Instinct MI50 Accelerator (32GB) – November 2018
Radeon Instinct MI50 Accelerator (16GB) – November 2018

Radeon Pro GPUs

GPU – Release Date
Radeon Pro SSG – July 2016
Radeon Pro WX 5100 – July 2016
Radeon Pro WX 4100 – July 2016
Radeon Pro WX 4150 – March 2017
Radeon Pro WX 7100 – March 2017
Radeon Pro WX 4170 – March 2017
Radeon Pro WX 2100 – June 2017
Radeon Pro WX 3100 – June 2017
Radeon Pro WX 9100 – September 2017
Radeon Pro WX 8200 – August 2018
Radeon Pro WX 3200 – July 2019
Radeon Pro WX 3200 (Mobile) – July 2019
Radeon Pro W5700 – November 2019
Radeon Pro W5500 – February 2020

Radeon RX GPUs

GPU – Release Date
Radeon RX 540 – April 2017
Radeon RX 580 – April 2017
Radeon RX 550 – April 2017
Radeon RX 570 – April 2017
Radeon RX 560 – May 2017
Radeon Vega Frontier Edition – June 2017
AMD Radeon VII – February 2019
Radeon RX 5700 XT – June 2019
Radeon RX 5500 XT – December 2019
Radeon RX 590 – November 2019
Radeon RX 5600 XT – January 2020

