
Dude, this is asinine. Graphics cards have been doing matrix and vector operations since they were invented. No one had a problem with calling matrix multipliers graphics cards until it became cool to hate AI.


It was many generations before vector operations were moved onto graphics chips.


Only for those not following the history of graphics chips.

https://en.wikipedia.org/wiki/TMS34010

> The TMS34010 can execute general purpose programs and is supported by an ANSI C compiler.

> The successor to the TMS34010, the TMS34020 (1988), provides several enhancements including an interface for a special graphics floating point coprocessor, the TMS34082 (1989). The primary function of the TMS34082 is to allow the TMS340 architecture to generate high quality three-dimensional graphics. The performance level of 60 million vertices per second was advanced at the time.

Like these, there were several others across the IBM PC's own history.


I think they’re using “vector” in the linear algebra sense, e.g. multiplying a matrix and a vector produces a different vector.

Not, as I assume you mean, vector graphics like SVG, and renderers like Skia.
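(A minimal numpy sketch of that first sense, with purely illustrative values:)

    import numpy as np

    M = np.array([[2.0, 0.0],
                  [0.0, 3.0]])   # a 2x2 transformation matrix
    v = np.array([1.0, 1.0])     # an input vector

    w = M @ v                    # -> array([2., 3.]): another vector, not a picture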


Nope, I mean it in the first sense. That happened with the GeForce 256 in 1999, and shader registers (the first programmable vector math) were introduced with the GeForce 3 in 2001. Before that, 3D graphics accelerators -- the term GPU had not yet been invented -- simply handled rasterization of triangles and texture look-ups. Transformation & lighting was handled on the CPU.


(I will use "GPU" because "3D accelerator" was a very gaming-PC-oriented term, predated by 3D graphics hardware by a decade.)

Only in the consumer market - which is why the GeForce 256 release left game devs whose engines used GL smug, since they immediately benefited from hardware T&L, which was the original function of earlier GPUs (to the point that more than one "3D GPU" was an i860 or a few of them with custom firmware and some DMA glue, doing... mostly vector ops on transforms, and a bit of lighting, as a treat).

The consumer PC market looked different because games wanted textures, and the first truly successful 3D accelerator was the 3Dfx Voodoo, which was essentially a rasterizer chip and a texture-mapping chip, with everything else done on the CPU.

Fully programmable GPUs were also a thing in the 2D era, with things like TIGA, where at least one package I heard of pretty much implemented most of X11 on the GPU.

This was of course all driven by what the market demanded. The original "GPUs" were driven by the needs of professional work like CAD, military, etc., where most of the time you were operating in wireframe, and Gouraud/Phong-shaded triangles were for fancier visualizations.

Games, on the other hand, really wanted textures (though the limitations of consoles like the PSX meant that some games were mostly simple colour-shaded triangles, like Crash Bandicoot), and offloading them was a major improvement for gaming.


Oops, sorry I misread. That makes more sense in context.

Yeah, I remember all the hype about the first Nvidia chip that offloaded “T&L” from the CPU.


Meanwhile, the UNIX world was already exploring RenderMan by then.


If you s/graphics/3D graphics/, does that still hold true?


Yes. The earliest consumer PC 3D graphics cards just rasterized pre-transformed triangles and that's it; the CPU had to do pretty much all the math (but drawing the pixels was considered the hard part). Later, "Hardware Transform and Lighting (T&L)" was introduced circa 2000 by cards like the GeForce 256.
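A rough sketch (numpy, names purely illustrative) of that per-vertex work the CPU had to do before hardware T&L: transform each vertex by a combined matrix, compute a simple diffuse term, and only then hand the triangles to the card:

    import numpy as np

    mvp = np.eye(4)                             # combined model-view-projection matrix (identity here)
    verts = np.array([[0.0, 0.0, 0.0, 1.0],     # homogeneous vertex positions of one triangle
                      [1.0, 0.0, 0.0, 1.0],
                      [0.0, 1.0, 0.0, 1.0]])
    normals = np.array([[0.0, 0.0, 1.0]] * 3)   # per-vertex normals
    light = np.array([0.0, 0.0, 1.0])           # directional light, already normalized

    transformed = verts @ mvp.T                        # the "T": one matrix-vector product per vertex
    diffuse = np.clip(normals @ light, 0.0, 1.0)       # the "L": per-vertex Lambert term

    # `transformed` and `diffuse` are roughly what an early accelerator expected as input.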


And even then, you couldn't really get any sort of serious matmul out of it; they were per-vertex, not per-pixel.

Per-pixel matmul (which is what you really need for anything resembling GPGPU) came with Shader Model 2.0, circa 2002; the Radeon 9700, the GeForce FX series and the like. CUDA didn't exist (nor really any other form of compute shaders), but you could wrangle it with pixel shaders, and some of us did.
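For anyone who never saw the trick, here's a CPU-side model of it (numpy, purely illustrative): pack the operands into "textures", draw a quad the size of the output, and let each fragment compute one dot product.

    import numpy as np

    A = np.random.rand(16, 16).astype(np.float32)   # "texture" holding the left operand
    B = np.random.rand(16, 16).astype(np.float32)   # "texture" holding the right operand
    C = np.zeros((16, 16), dtype=np.float32)        # the render target

    for y in range(C.shape[0]):                     # the rasterizer visits every output pixel...
        for x in range(C.shape[1]):
            # ...and the "pixel shader" samples a row of A and a column of B and dots them
            C[y, x] = np.dot(A[y, :], B[:, x])

    assert np.allclose(C, A @ B, atol=1e-4)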


Oh man, I forgot about doing vector math using OpenGL textures as "hardware acceleration". And it would be many more years before it was reasonable to require a GPU with programmable shaders; having to support fixed-function was a fact of life for most of the 2000s.


There were actually some completely insane workarounds even before shaders. I don't think it was actually shipped in real software, but I saw something that used 11 or 18 passes or something to do dot3 texture blending even on unextended OpenGL 1.0. Painstakingly doing one color channel at a time, values above 0 and below 0 on source and destination also separately…

Granted, if you didn't have the “squared blend” extension, it would be an approximation, but still a pretty convincing one.
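For reference, the quantity all those passes were reconstructing is just this per-pixel value (numpy sketch, illustrative values):

    import numpy as np

    normal_map = np.random.rand(4, 4, 3).astype(np.float32)  # RGB texture encoding surface normals
    light = np.array([0.0, 0.0, 1.0], dtype=np.float32)      # light direction as a unit vector

    n = normal_map * 2.0 - 1.0                  # undo the [0,1] texel encoding back to [-1,1]
    intensity = np.clip(n @ light, 0.0, 1.0)    # the per-pixel dot3 result, clamped like the hardware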


2D images have (height, width, color), so they are Vector3.


GPUs may well have done the same-ish operations for a long time, but they were doing those operations for graphics. GPGPU didn't take off until relatively recently.



