graphics

Ask Slashdot: What Is the Latest and Greatest In Computer Graphics Research?

OpenSourceAllTheWay writes: In the world of 2D and 3D visual content creation, new tricks that ship with commercial 2D or 3D software are almost always advertised as “fantastically innovative”. But when you do some digging into who precisely invented the new “trick” or “method” and when, you often find that it was first pioneered many, many years ago by some little-known computer graphics researcher(s) at a university somewhere. Case in point: a flashy piece of 3D VR software released in 2018 was actually built around a 3D calculation method first patented almost 10 years ago. Sometimes you even find that the latest computer graphics software tricks go back to little-known computer graphics research papers published anywhere from 15 to 25 years ago. So the question: what, in mid-2018, is the latest and greatest in 2D or 3D computer graphics research? And which academic or scientific publications and journals should one follow to keep abreast of the latest in computer graphics research?

Nvidia Shuts Down Its GeForce Partner Program, Citing Misinformation

In a blog post on Friday, Nvidia announced it is “pulling the plug” on the GeForce Partner Program (GPP), saying it would rather cancel the program than keep combating “rumors” and “mistruths” about it. The GPP had been active for only a couple of months. It was launched as a way for gamers to know exactly what they’re buying when shopping for a new gaming PC. “With this program, partners would provide full transparency regarding the installed hardware and software in their products,” reports Digital Trends. From the report: Shortly after the launch, unnamed sources at add-in card and desktop/laptop manufacturers came forward to reveal that the program would likely hurt consumer choice. Moreover, they worried that some of the agreement’s language might actually be illegal, and that the program could disrupt the existing business they have with AMD and Intel. They also revealed one major requirement: the resulting product must carry the label “[gaming brand] Aligned Exclusively with GeForce.” As an example, if Asus wanted to add its Republic of Gamers (RoG) line to Nvidia’s program, it wouldn’t be allowed to sell RoG products with AMD-based graphics. Of course, manufacturers could choose whether or not to join Nvidia’s program, but membership supposedly had its “perks,” including access to early technology, sales rebate programs, game bundling, and more.

According to Nvidia, all it asked of its partners was to “brand their products in a way that would be crystal clear.” The company says it didn’t want “substitute GPUs hidden behind a pile of techno-jargon.” Specifications for desktops and laptops tend to list their graphics components, and PC gamers are generally intelligent shoppers who don’t need any clarification. Regardless, Nvidia is pulling the controversial program because the “rumors, conjecture, and mistruths go far beyond” the program’s intent.

Intel Reportedly Designing Arctic Sound Discrete GPU For Gaming, Pro Graphics

MojoKid shares a report from HotHardware: When AMD’s former graphics boss Raja Koduri landed at Intel after taking a much-earned hiatus, it was seen as a major coup for the Santa Clara chip outfit, one that seemed to signal that Intel might be aiming to compete in the discrete graphics card market. While nothing has been announced in that regard, some analysts are claiming that there will indeed be a gaming variant of Intel’s upcoming discrete “Arctic Sound” GPU. According to reports, Intel originally planned to build Arctic Sound graphics chips mainly for video streaming chores and data center activities. However, claims are surfacing that the company has since decided to build out a gaming variant at the behest of Koduri, who wants to “enter the market with a bang.” Certainly a gaming GPU that could compete with AMD and NVIDIA would accomplish that goal. Reportedly, Intel could pull together two different versions of Arctic Sound: an integrated chip package, like the Core i7-8809G (Kaby Lake-G) but with Intel’s own discrete graphics, and a standalone chip that would end up in traditional graphics cards. Both would likely have variants designed for gaming, just as AMD and NVIDIA build GPUs for both professional use and gaming.

Programmer Unveils OpenGL Bindings for Bash

Slashdot reader silverdirk writes: Compiled languages have long provided access to the OpenGL API, and even most scripting languages have had OpenGL bindings for a decade or more. But one significant language has been missing from the list: our old friend/nemesis Bash. Worry no longer! Now you can create your dazzling 3D visuals right from the comfort of your command line!
“You’ll need a system with both Bash and OpenGL support to experience it firsthand,” explains software engineer Michael Conrad, who created the first version 13 years ago as “the sixth in a series of ‘Abuse of Technology’ projects,” after “having my technical sensibilities offended that someone had written a real-time video game in Perl.

“Back then, my primary language was C++, and I was studying OpenGL for video game purposes. I declared to my friends that the only thing worse would be if it had been 3D and written in Bash. Having said the idea out loud, it kept prodding me, and I eventually decided to give it a try to one-up the ‘awfulness’…”

Ask Slashdot: Should CPU, GPU Name-Numbering Indicate Real World Performance?

dryriver writes: Anyone who has built a PC in recent years knows how confusing the letters and numbers that trail modern CPU and GPU names can be, because they do not necessarily tell you how fast one part is compared to another. A Zoomdaahl Core C-5 7780 is not necessarily faster than a Boomberg ElectronRipper V-6 6220; the number at the end, unlike a GFLOPS or TFLOPS figure for example, tells you very little about the real-world performance of the part. It is not easy to create one unified, standardized performance benchmark that could change this. One part may be great for 3D gaming, a competing part may smoke the first in a database server application, and a third may compress 4K HEVC video 11% faster. So creating something like, say, a Standardized Real-World Application Performance Score (SRWAPS) and putting that score next to the part name, letters, or series number will probably never happen: a lot of competing companies would have to agree on a particular type of benchmark, make sure all benchmarking is done fairly and accurately, and so on. But how is the average consumer, who just wants to buy the right home laptop or gaming PC for their kids, supposed to cope with the “letters and numbers salad” that follows CPU, GPU and other computer part names? If you are computer literate, you can dive right into the different performance benchmarks for a given part on a typical tech site that benchmarks parts. But what if you are “Computer Buyer Joe” or “Jane Average” and you just want to glean quickly which of two products (two budget-priced laptops listed on Amazon.com, for example) has the better performance overall? Is there no way to create some kind of rough numeric indicator of real-world performance and put it in a product’s specs for quick comparison?
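
For illustration, here is a minimal sketch of how a composite index like the hypothetical SRWAPS could be computed: normalize each benchmark result against a reference part, then fold the results into a weighted geometric mean (the same aggregation idea SPEC uses for its suites, with weights added). The workloads, numbers, and weights in this C example are invented purely for demonstration.

```c
#include <math.h>
#include <stdio.h>

/* One benchmark result: the part's score divided by a reference part's
 * score (1.0 == reference performance), plus a weight reflecting how much
 * that workload matters to the target buyer. Compile with -lm. */
struct bench {
    const char *name;
    double normalized;  /* part_score / reference_score */
    double weight;      /* weights should sum to 1.0 */
};

/* Weighted geometric mean of the normalized results. */
static double composite_score(const struct bench *b, int n)
{
    double log_sum = 0.0;
    for (int i = 0; i < n; i++)
        log_sum += b[i].weight * log(b[i].normalized);
    return exp(log_sum);
}

int main(void)
{
    /* Illustrative numbers only, not real measurements. */
    struct bench results[] = {
        { "3D gaming fps",        1.20, 0.4 },
        { "database ops/sec",     0.95, 0.3 },
        { "4K HEVC encode speed", 1.11, 0.3 },
    };
    printf("composite score: %.2f (1.00 = reference part)\n",
           composite_score(results, 3));
    return 0;
}
```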

Ask Slashdot: How Did Real-Time Ray Tracing Become Possible With Today's Technology?

dryriver writes: There are occasions when multiple big tech manufacturers all announce the exact same innovation at the same time (4K UHD TVs, for example). Everybody in broadcasting and audiovisual content creation knew years in advance that 4K/8K UHD and high dynamic range (HDR) were coming, and that all the big TV and screen manufacturers were preparing 4K UHD HDR product lines because FHD was beginning to bore consumers. It came as no surprise when everybody had a 4K UHD product announcement and demo ready at the same time. Something very unusual happened at GDC 2018 this year, however. Multiple graphics and platform companies, including Microsoft, Nvidia, and AMD, as well as game developers and game engine makers, all announced that real-time ray tracing is coming to their mass-market products and, by extension, to computer games, VR content and other real-time 3D applications.

Why is this odd? Because for many years 30+ FPS real-time ray tracing was thought to be utterly impossible with today’s hardware technology. It was deemed far too computationally intensive for current GPU technology and far too expensive for anything mass market. Gamers weren’t screaming for the technology. Technologists didn’t think it was doable at this point in time. Raster 3D graphics (what we have in DirectX, OpenGL and game consoles today) was very, very profitable and could easily have kept evolving the way it has for another 7 to 8 years. And suddenly there it was: everybody announced at the same time that real-time ray tracing is not only technically possible, but also coming to your home gaming PC much sooner than anybody thought. Working tech demos were shown. What happened? How did real-time ray tracing, which until recently only a few 3D graphics nerds and researchers in the field talked about, suddenly become so technically possible, economically feasible, and so guaranteed-to-be-profitable that everybody announced this year that they are doing it?
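
For a sense of scale, a back-of-the-envelope calculation shows why real-time ray tracing was long considered out of reach; the resolution, frame rate, and rays-per-pixel figures below are illustrative assumptions, not vendor numbers.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions, not vendor figures. */
    const double width  = 1920.0;
    const double height = 1080.0;
    const double fps    = 60.0;
    /* A primary ray plus a few shadow/reflection rays per pixel. */
    const double rays_per_pixel = 4.0;

    double rays_per_second = width * height * fps * rays_per_pixel;
    printf("rays per second needed: %.2e\n", rays_per_second);
    /* Roughly 5e8 rays per second, each of which must be tested against
     * an acceleration structure over millions of triangles, which helps
     * explain why GPU-side acceleration and denoising tricks matter. */
    return 0;
}
```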

A New Era For Linux's Low-level Graphics

Slashdot reader mfilion writes: Over the past couple of years, Linux’s low-level graphics infrastructure has undergone a quiet revolution. Since experimental core support for the atomic modesetting framework landed a couple of years ago, the DRM subsystem in the kernel has seen roughly 300,000 lines of code changed and 300,000 new lines added, when the new AMD driver (~2.5m lines) is excluded. Lately Weston has undergone the same revolution, albeit on a much smaller scale. Here, Daniel Stone, Graphics Lead at Collabora, puts the spotlight on the latest enhancements to Linux’s low-level graphics infrastructure, including Atomic modesetting, Weston 4.0, and buffer modifiers.
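
For readers unfamiliar with the term, atomic modesetting means that all display-state changes for a frame are submitted to the kernel in one request, validated as a whole, and then applied all-or-nothing. The sketch below shows roughly what that looks like through libdrm (link with -ldrm); the object and property IDs are assumed to have been looked up beforehand via drmModeObjectGetProperties(), and error handling is trimmed for brevity.

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Point a plane at a new framebuffer on a given CRTC, atomically. */
int flip_atomically(int drm_fd,
                    uint32_t plane_id, uint32_t crtc_id,
                    uint32_t prop_fb_id, uint32_t prop_crtc_id,
                    uint32_t new_fb_id)
{
    drmModeAtomicReqPtr req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    /* Queue every state change into one request... */
    drmModeAtomicAddProperty(req, plane_id, prop_fb_id, new_fb_id);
    drmModeAtomicAddProperty(req, plane_id, prop_crtc_id, crtc_id);

    /* ...ask the kernel whether the whole configuration is valid... */
    int ret = drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);

    /* ...and only then apply it, all-or-nothing. */
    if (ret == 0)
        ret = drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);

    drmModeAtomicFree(req);
    return ret;
}
```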

NVIDIA RTX Technology To Usher In Real-Time Ray Tracing Holy Grail of Gaming Graphics

HotHardware writes: NVIDIA has been dabbling in real-time ray tracing for over a decade. However, the company just introduced NVIDIA RTX, its latest effort to deliver real-time ray tracing to game developers and content creators for implementation in actual game engines. Historically, the computational horsepower required for real-time ray tracing has been too great for it to be practical in actual games, but NVIDIA hopes to change that with its new Volta GPU architecture and the help of Microsoft’s new DirectX Raytracing (DXR) API enhancements. Ray tracing is a rendering method in which images are produced by tracing rays, or paths of light, as they bounce in and around an object (or objects) in a scene. Under optimal conditions, ray tracing delivers photorealistic imagery: shadows that are correctly cast, water effects that show proper reflections and coloring, and scenes lit with realistic lighting effects. NVIDIA RTX is a combination of software (the company’s GameWorks SDK, now with ray tracing support) and next-generation GPU hardware. NVIDIA notes that its Volta architecture has specific hardware support for real-time ray tracing, including offload via its Tensor core engines. To show what’s possible with the technology, developers including Epic, 4A Games and Remedy Entertainment will be showcasing their own game engine demonstrations this week at the Game Developers Conference. NVIDIA expects the ramp to be slow at first, but believes most game developers will eventually adopt real-time ray tracing.
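
To make the workload concrete, the innermost operation of any ray tracer is an intersection test repeated millions of times per frame. Below is a self-contained, textbook-style sketch of a ray-sphere intersection in C (not NVIDIA’s implementation); it substitutes the ray equation o + t·d into the sphere equation and solves the resulting quadratic for t.

```c
#include <math.h>
#include <stdio.h>

struct vec3 { double x, y, z; };

static struct vec3 sub(struct vec3 a, struct vec3 b)
{ return (struct vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }

static double dot(struct vec3 a, struct vec3 b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Distance t along the ray to the nearest hit, or -1.0 on a miss.
 * 'dir' is assumed normalized; rays starting inside the sphere are
 * treated as misses for simplicity. */
static double ray_sphere(struct vec3 origin, struct vec3 dir,
                         struct vec3 center, double radius)
{
    struct vec3 oc = sub(origin, center);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;   /* a == 1 for a unit direction */
    if (disc < 0.0)
        return -1.0;                 /* ray misses the sphere */
    double t = (-b - sqrt(disc)) / 2.0;
    return (t >= 0.0) ? t : -1.0;
}

int main(void)
{
    struct vec3 eye = { 0, 0, 0 }, dir = { 0, 0, -1 };
    struct vec3 center = { 0, 0, -5 };
    /* Prints "hit at t = 4.00": the sphere's near surface is 4 units away. */
    printf("hit at t = %.2f\n", ray_sphere(eye, dir, center, 1.0));
    return 0;
}
```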

Vulkan Graphics is Coming To macOS and iOS, Will Enable Faster Games and Apps

The Khronos Group, a consortium of hardware and software companies, has announced that the Vulkan graphics technology is coming to Apple’s platforms, allowing games and apps to run at faster performance levels on Macs and iOS devices. From a report: Developed in collaboration with Valve, LunarG, and The Brenwill Workshop, this free, open-source collection includes the full 1.0 release of the previously commercial MoltenVK, a library for translating Vulkan API calls to Apple’s Metal 1 and 2 calls, as well as LunarG’s new Vulkan SDK for macOS. Valve, which funded the costs of open-sourcing, has been using these tools in its own applications, noting performance gains over native OpenGL drivers, with Vulkan-based DOTA 2 on macOS as a production-load example. Altogether, this forms the next step in Khronos’ Vulkan Portability Initiative, which was first announced at GDC 2017 as the “3D Portability Initiative” and later refined as the “Vulkan Portability Initiative” last summer. Spurred by industry demand, Khronos is striving for a cross-platform API portability solution in which an appropriate subset of Vulkan acts as a ‘meta-API’-like layer that maps to DirectX 12 and Metal; the holy grail is that developers can craft a single portable Vulkan application or engine that can be seamlessly deployed across platforms supporting Vulkan, DX12, and Metal.
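
The practical upshot for developers is that ordinary Vulkan code should run on macOS once MoltenVK sits underneath it. The sketch below creates a Vulkan instance with the macOS surface extension that MoltenVK exposes; device, swapchain and rendering setup are omitted, and the exact extensions available depend on the SDK and MoltenVK versions in use.

```c
/* Minimal sketch: the same Vulkan calls an app would make on Windows or
 * Linux, here built against the macOS Vulkan SDK so that MoltenVK can
 * translate them to Metal underneath. */
#define VK_USE_PLATFORM_MACOS_MVK
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void)
{
    const char *extensions[] = {
        VK_KHR_SURFACE_EXTENSION_NAME,
        VK_MVK_MACOS_SURFACE_EXTENSION_NAME,  /* surface type provided by MoltenVK */
    };

    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "vulkan-on-metal-sketch",
        .apiVersion = VK_API_VERSION_1_0,
    };
    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
        .enabledExtensionCount = 2,
        .ppEnabledExtensionNames = extensions,
    };

    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }
    printf("Vulkan instance created; MoltenVK maps the API to Metal\n");
    vkDestroyInstance(instance, NULL);
    return 0;
}
```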

Intel’s Discrete GPU Development Unveiled

It looks like Intel is making progress in its development of a discrete GPU architecture. This, of course, comes in the wake of its hiring of Raja Koduri, AMD’s former Radeon Technologies Group (RTG) head. At the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel unveiled slides pointing to the direction in which its GPU development is headed: the company is essentially scaling up its existing iGPU architecture and allowing it to sustain high clock speeds better.
