High-performance sensor architectures are enabling faster data processing as the volume of sensor data available for analysis and artificial intelligence applications continues to grow, officials from NVIDIA and Lockheed Martin said on March 24 at the NVIDIA GTC Conference.

Lockheed Martin Associate Fellow Ben Luke and NVIDIA Solutions Architect Zoe Ryan spoke at the conference about how high-performance sensor architectures are speeding up data processing, with Luke calling it a “novel way of deploying signal processing code using the NVIDIA GPU (graphics processing unit) or data processing unit (DPU) along with a GPU.”

“One of the big challenges in modern sensors is that the data rates are ever-increasing,” Luke said. “There’s more data available, and there’s more need to process that data. There’s also a strong desire to move that processing farther to the left on this architecture, moving it closer to the edge, and that results in size, weight, and power constraints that are pressing on that architecture.”

Luke said most current sensor architectures treat the GPU as a secondary device to the central processing unit (CPU) in complex high-performance platforms, which leads to architectural inefficiencies. He said that approach may be appropriate for workloads that require a high degree of mixed processing and tight coordination between CPUs and GPUs, but it is inefficient for workloads where processing data on the GPU and then sending that data over the network is the primary objective.
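To make that distinction concrete, the minimal sketch below walks the CPU-staged path Luke describes on a single machine: a placeholder CUDA kernel stands in for the real signal processing code, the result is copied back to host memory, and the CPU forwards it over a UDP socket. The kernel, buffer size, destination address, and port are illustrative assumptions rather than anything presented at GTC; the GPU- and DPU-centric approach discussed in the talk would aim to cut out the device-to-host staging step entirely.

```cuda
// Minimal sketch of a conventional CPU-staged sensor pipeline: the GPU
// processes a block of samples, the result is copied back to host memory,
// and the CPU pushes it onto the network. The kernel, buffer size, address,
// and port are illustrative assumptions only.
#include <vector>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cuda_runtime.h>

// Placeholder signal-processing kernel: scale each float sample by a gain.
__global__ void scaleSamples(const float* in, float* out, float gain, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = gain * in[i];
}

int main() {
    const int n = 1 << 20;                 // one block of sensor samples
    std::vector<float> host(n, 1.0f);      // stand-in for captured data

    float *dIn = nullptr, *dOut = nullptr;
    cudaMalloc(&dIn, n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemcpy(dIn, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Step 1: process on the GPU.
    scaleSamples<<<(n + 255) / 256, 256>>>(dIn, dOut, 0.5f, n);

    // Step 2: stage the result back through the CPU -- the hop a
    // GPU/DPU-centric architecture tries to eliminate.
    cudaMemcpy(host.data(), dOut, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Step 3: the CPU sends the processed block over the network (UDP here).
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5000);                    // illustrative port
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
    sendto(sock, host.data(), 1400, 0,             // one MTU-sized chunk
           reinterpret_cast<sockaddr*>(&dst), sizeof(dst));

    close(sock);
    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```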

Ryan said that NVIDIA is currently using the architecture for a variety of workloads, including several being developed in collaboration with Lockheed Martin.

Ryan said the first workload NVIDIA envisioned for this architecture is high-throughput, low size, weight, and power (SWaP) signal processing at the edge: a workload that demands sustained computing resources over long periods while fitting within a low-SWaP solution.
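As a rough illustration of that workload class, and not anything shown in the session, the sketch below runs a sustained batch of cuFFT transforms of the kind an edge signal processing pipeline might execute continuously; the FFT length, channel count, and pass count are assumptions chosen only to convey the shape of the workload.

```cuda
// Hypothetical high-throughput edge workload: repeatedly transform a batch
// of sensor channels with cuFFT. All sizes are illustrative assumptions.
#include <cstdio>
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    const int fftLen = 4096;     // samples per channel (assumed)
    const int batch  = 64;       // channels per pass (assumed)
    const int passes = 1000;     // sustained processing loop (assumed)
    const int total  = fftLen * batch;

    cufftComplex* data = nullptr;
    cudaMalloc(&data, total * sizeof(cufftComplex));
    cudaMemset(data, 0, total * sizeof(cufftComplex));

    cufftHandle plan;
    cufftPlan1d(&plan, fftLen, CUFFT_C2C, batch);

    // In a real pipeline each pass would consume a freshly captured block.
    for (int pass = 0; pass < passes; ++pass) {
        cufftExecC2C(plan, data, data, CUFFT_FORWARD);  // in-place transform
    }
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(data);
    printf("processed %d passes of %d x %d-point FFTs\n", passes, batch, fftLen);
    return 0;
}
```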

Ryan added that NVIDIA has two other workloads in mind in collaboration with Lockheed Martin: one that provides real-time geospatial image processing and another, on the machine learning side, that would detect and predict fire lines. She said workloads for general signal and radar processing are also envisioned.

Additionally, Ryan said NVIDIA is working on more ambitious and experimental workloads in the financial sector.

“To highlight kind of how we’re taking a step into different workloads that could be applicable, there’s ongoing work from the finance side to do market data processing and high-frequency trading applications,” Ryan said.

“That kind of highlights how these workloads are now stepping into new regions that maybe we hadn’t even imagined when we first came up with this design,” she concluded.
