AMD have grafted SSDs onto their Fiji GPU, creating the potential for a full terabyte of addressable framebuffer

AMD Radeon Pro SSG announced

Update July 27, 2016: More details have come out about the Radeon Pro SSG: it uses a Fiji GPU (not Polaris) and houses a pair of Samsung 950 Pro SSDs.

The GPU side of AMD is kicking into high gear, but it’s worth checking out our in-depth look at the upcoming AMD Zen CPUs too.

The Samsung SSDs are about the fastest PCIe drives you can buy right now, and are set up as a pair of 512GB drives in RAID-0. They're connected to the AMD Fiji GPU via a PCIe bridge chip, and applications need to be specially coded against an AMD API to take full advantage of the link between the chip and its extended framebuffer.
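As a rough sanity check on those numbers, here's a quick back-of-the-envelope sketch. It assumes Samsung's quoted ~2.5GB/s sequential read per 950 Pro, and that the RAID-0 pair scales cleanly behind the bridge chip, neither of which AMD has confirmed for this card:

```python
# Back-of-envelope figures for the two striped Samsung 950 Pro drives.
# Per-drive read speed is Samsung's quoted sequential figure; whether the
# RAID-0 pair actually scales this cleanly behind the bridge chip is an
# assumption, not something AMD has stated.

DRIVE_CAPACITY_GB = 512          # each 950 Pro in the pair
DRIVE_SEQ_READ_GBPS = 2.5        # GB/s, quoted sequential read

total_capacity_gb = 2 * DRIVE_CAPACITY_GB
ideal_striped_read_gbps = 2 * DRIVE_SEQ_READ_GBPS

print(f"SSG pool capacity: {total_capacity_gb} GB")
print(f"Ideal striped read bandwidth: {ideal_striped_read_gbps} GB/s")
# -> SSG pool capacity: 1024 GB
# -> Ideal striped read bandwidth: 5.0 GB/s
```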

Original story July 26, 2016: Calling it a “disruptive advancement” for graphics, AMD is paving the way for an exponential increase in framebuffer capacities.

Computer graphics render monkeys are all over Anaheim right now, at the SIGGRAPH technology conference, where AMD have announced a revolutionary new pro-level graphics card: one with solid state storage directly interfaced with the GPU.

File this under ‘cool shit that will never hit our desktops’, but AMD’s new Solid State Graphics (SSG) technology has been designed to deliver an exponential increase in the memory capacity available to a professional graphics card. The new Radeon Pro SSG has a pair of PCIe 3.0 M.2 slots tied into the Fiji GPU, allowing the graphics chip to use them as an extra level of storage. The maximum is reportedly a full 1TB of solid state storage.

And you thought that 12GB on the Nvidia Titan X was impressive…

AMD Radeon Pro SSG unveiled

Right now the largest pool available on an AMD GPU is 32GB - which is still pretty good - but when a pro card has to deal with huge datasets, or 8K video, it will quickly exhaust that capacity. Then the GPU has to go and have a long conversation with the CPU, begging resources from the system, whether that be straight DRAM or even slower local storage, and all of that takes a whole lot of extra time.

With the new technology, once the Fiji GPU runs out of standard VRAM it then goes in search of the SSG pool, completely bypassing the CPU, which massively reduces the time it takes to reach attached storage.
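To make the idea concrete, here's a toy Python sketch of that tiered lookup. The capacities come from the article, but the function itself is purely illustrative; the real residency management lives in AMD's driver and API, not application code:

```python
# Illustrative only: a toy model of the tiered fetch the article describes.
# Capacities come from the article; everything else is an assumption.

TIERS = [
    ("VRAM (HBM)",        32),    # GB, today's largest pro-card pool
    ("On-card SSG pool", 1024),   # GB, the two striped SSDs
    ("Host DRAM / disk", None),   # fallback across the PCIe bus to the CPU
]

def where_does_it_fit(working_set_gb: float) -> str:
    """Return the first tier large enough to hold the working set."""
    for name, capacity_gb in TIERS:
        if capacity_gb is None or working_set_gb <= capacity_gb:
            return name
    return "nowhere"

print(where_does_it_fit(24))    # VRAM (HBM)
print(where_does_it_fit(300))   # On-card SSG pool
print(where_does_it_fit(4000))  # Host DRAM / disk
```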

AMD used the example of 8K video on stage at SIGGRAPH: where a traditional card slows to a sedate 17fps playing back 8K footage, the Radeon Pro SSG could play the same footage at 90fps, and even let you scrub through it as you would a locally-stored 1080p video.
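Some rough arithmetic shows why that kind of footage swamps even a 32GB framebuffer. The format assumed here (uncompressed 8K, 10-bit 4:2:2, 30fps) is our own illustration; AMD hasn't said exactly what the demo used:

```python
# Rough arithmetic for why 8K footage overwhelms a 32GB framebuffer.
# The format (uncompressed 8K, 10-bit 4:2:2, 30fps) is assumed for
# illustration, not taken from AMD's demo.

WIDTH, HEIGHT = 7680, 4320
BYTES_PER_PIXEL = 2.5            # 10-bit 4:2:2 is roughly 20 bits per pixel
FPS = 30

frame_gb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1e9
data_rate_gbps = frame_gb * FPS

print(f"One frame: {frame_gb:.3f} GB, stream: {data_rate_gbps:.2f} GB/s")
print(f"Seconds of footage in 32 GB of VRAM: {32 / frame_gb / FPS:.1f}")
print(f"Seconds of footage in the 1 TB pool: {1024 / frame_gb / FPS:.0f}")
# -> roughly 13 seconds fits in VRAM, versus around 7 minutes in the pool
```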

Radeon Pro SSG release date

The Radeon Pro SSG still uses traditional graphics memory, though, because that memory offers far greater bandwidth than even the PCIe-attached storage on the SSG card. The maximum you’ll get out of a standard M.2 interface right now is around 2GB/s, while AMD’s RX 480 comes with 256GB/s of memory bandwidth and the new Nvidia Titan X offers 480GB/s.
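Put in consistent units, the gap is easy to see. This snippet simply times moving a hypothetical 4GB working set over each link, using the figures quoted above:

```python
# The bandwidth gap described above, in consistent units (GB/s).
# The 4 GB working set is a hypothetical figure for illustration.

links_gbps = {
    "M.2 NVMe SSD (single drive)": 2,     # roughly what today's drives manage
    "Radeon RX 480 GDDR5":         256,
    "Nvidia Titan X GDDR5X":       480,
}

working_set_gb = 4
for name, bw in links_gbps.items():
    print(f"{name:30s} {bw:4d} GB/s -> {working_set_gb / bw * 1000:7.1f} ms")
# The SSD takes a couple of seconds; the graphics memory, milliseconds.
```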

The new card is only at the development kit stage so far, with full availability coming next year. But if you absolutely have to get your mitts on one, you can sign your soul over to AMD and cut them a cheque for $10,000.

But the obvious question is: will it play Crysis?

Comments

AnAuldWolf (1 year ago):
It's cool, but I wouldn't be excited for it even if it were going to be a thing. Why? The fidelity of games hasn't been held back by what the hardware can render, per se, but by how much work the hardware can do automatically for the artists to reduce development costs. At the moment, we've hit a point where (possibly until quantum computing) we can't automate things any further. So development costs stay high. Realistically, you can't have games with much higher fidelity than what we have right now; the cost is too high.

That's why the leap between PS3 and PS4, and 360 and One, is so tiny. We're held back by how much it costs to make a high-fidelity game. Not that I was ever into fidelity anyway, but the point needs to be made. If another computer hardware boom is going to happen, we need to be looking at making really complex automation tools for artists.

memnarch (1 year ago):
I think that's pretty simplistic: the render distance in Minecraft is GPU-bound, while the speed of a game of Dwarf Fortress is CPU-bound.

Fidelity of the same old game spaces at the same old scales isn't what's interesting about advances in graphics tech; it's scale.

AnAuldWolf (1 year ago):
Yes, but we don't have the technology for scale right now, only fidelity. Fidelity is what's driving our graphics. Not scale. Scale is more often CPU-bound, after all. I know that you're trying to be clever, but your point is moot.

That you think scale comes from graphics hardware shows a distinct lack of understanding of how the hardware works. You want fidelity-enhanced scale, not just scale. Which brings me back to my point.

MrAptronym (1 year ago):
There are still a lot of hardware restrictions for games. A ton. Yes, for supposedly 'photo-realistic' graphics, it is becoming prohibitively expensive. That doesn't encompass every game. In addition to the going wide aspect that Memnarch brought up, there are plenty of lighting and shader techniques that take more processing power than is really usable today. (Though, this being an article about breaking VRAM restrictions, I suppose that is less relevant.) Things like "Number of unique enemies on the screen" still pose limitations. Additionally, very high res monitors are still an issue. I'm sure that if you spoke to an actual game programmer they could rattle off a dozen hardware restrictions just in their specific area of specialty.

Additionally, quantum computing is not just a better computer. They are only faster when you can phrase a problem to take advantage of their unique properties. For other problems, they are no better than current computers. (and will probably actually be worse for a long time until the tech develops.) I don't know much about quantum computing, but it is not universally relevant, and I have never seen anyone show it having an advantage for any common graphical calculations.

In any case, this card seems aimed more at people video-editing and such. This isn't a card for people playing games. Working with very high res video requires unreasonable amounts of memory. There is definitely a need for some kind of expanded memory solution. I have no idea what exactly the speed gains are by skipping the CPU, but I imagine it is a fair bump.

AnAuldWolf (1 year ago):
Sigh. I know what quantum computing is. Good grief. I was talking about the better AI provided over the long term by fuzzy logic that would lead to better tools. Do you feel you need to make yourself look superior by patronising people? What do you get out of that? Is it a desire to win? I can feel your hate radiating from here and it's bizarre to me.

Those driven by biological imperatives are baffling to me.

And your point is moot. You're pushing up against the limitations of the human brain with things like enemies on screen. We can already have more than we can realistically deal with, but it's AI that makes those enemies interesting. And that AI is CPU-bound.

What you want is a high number of high-fidelity enemies on screen. Which brings us back to what I said about hardware being focused on fidelity, and the costs of being able to produce the kind of experience you'd want.

I wish people would think before they chose to attack. I'd rather not be put in a position where I have to defend myself by explaining clearly simple concepts.

MrAptronym (11 months ago):
Sorry it has taken me so long to reply, I just moved and was without internet.

I don't know how you feel hate coming from my relatively bland speech, I assure you there was none there. I am explaining my view on things, just as everyone else in any comment section is. I have in no way attacked you, I have disagreed, and that is a pretty important difference. Whatever imperative is driving that clearly drives you as well. I am assuming you are not actually some advanced bot, so I would hazard a guess that you are driven by biological imperatives the same way every other organism on the planet is.

Please do not act condescending. If you wish to debate my points then feel free to debate them, but don't make weird assumptions about my motives; that is just rude. Your language about "explaining clearly simple concepts" is equally unnecessary and doesn't do anything to advance the conversation.

I don't really know whether quantum logic (not technically the same as fuzzy logic, but close enough) has potential advantages in design and art automation; I suppose it could. I am by no means an expert on those sorts of tools, quantum computing or even fuzzy logic. However, it is sort of immaterial to this discussion; I guess I just misunderstood your point in the first post.

You claim we are bound by budget and the throughput of artists. That is true for some aspects of some games, but that is not the whole story. There are many aspects of games that still bump up against GPU limitations. I was very specific about 'unique enemies'. Yes, some games can get many copies of the same (or generated from the same pool) objects on screen, but to have many unique models and textures on screen at once is a different challenge. There are many other tasks GPUs limit, multiple light sources casting shadows and reflections are both good examples. Graphics programmers for a whole host of games have expressed the limits they run into.

As an example, right now FFXV's developers are having a hard time getting the game to run on consoles and admitted to developing the game above spec and having to then cut fidelity in the form of lower res textures, low render resolution, and a locked framerate. Their bounds were clearly not in the art budget. They have a game that could perform and look better on better hardware.

There are many tasks in a game that are more reliant on processing than sheer art throughput. You have graphics programmers as well as artists, and many of the aspects they are involved in developing could be done more exactly, or on a larger scale, if they weren't being bound by hardware restrictions. The same is true of models as well; there are plenty of approximations we use that could be better. There are boundaries besides the ones you describe.

We have been told for years and years that we are basically at the limits for graphics hardware and it just has not come to be. I still see plenty of developers complaining about restrictions though.
