Real-Time Processing With FPGAs: A Strategic Advantage in Defense

When it comes to imaging or perception applications in the defense industry, a few microseconds of latency can mean the difference between a strategic advantage and mission failure. Defense imaging applications such as situational awareness, missile guidance, and image processing for sensor-enabled weapons demand significantly faster processing and decision-making than, say, machine vision inspection.

Typical machine vision systems integrate multiple components, including cameras, software, and an industrial PC with PCIe slots for adding other devices, such as frame grabbers, GPUs, and NICs. By leveraging FPGAs, defense applications can limit both the number of components and the processing time needed, delivering significantly faster photon-to-decision time (the time from photons hitting the sensor to an action being triggered).

Shorter Processing Pipelines = Faster Feedback

While CPUs and GPUs offer a wide range of capabilities and benefits, FPGAs have distinct size, weight, power, and cost (SWaP-C) advantages, as well as performance advantages. Inefficient processing increases power consumption, which demands additional cooling and, in turn, heavy heat sinks that can be cumbersome in many applications. FPGAs largely avoid this chain of penalties.

FPGAs can execute custom image and signal processing tasks using tens of thousands of logic cells configured to perform computations in parallel. Think of an FPGA as a custom-designed factory: raw data enters, powerful processing takes place along an assembly line, and processed knowledge exits.

FPGAs offer the low latency and parallel processing capabilities needed for defense applications. Processing in a typical machine vision application takes 1–10 ms with the type of system shown below (top row). With an FPGA inside a smart camera (bottom row), processing time drops to 4–20 µs, 50–250x lower latency.

In many defense applications, sensors require real-time feedback to be effective. These applications include:

  • ISR (intelligence, surveillance, and reconnaissance) systems
  • Airborne and ground radar
  • Image processing for sensor-enabled weapons systems
  • Sensor fusion
  • Missile guidance
  • Warning/situational awareness systems (like DARPA’s Luke Binoculars)
  • Electronic warfare signal classification and countermeasures

If there is any delay (latency) in processing, failure can be catastrophic. FPGAs offer the edge processing, ultra-low latency, and deterministic, real-time compute needed for mission-critical applications. Concurrent EDA can provide these solutions.

Solving Data Processing Challenges in Defense

An AMD Elite–certified partner, Concurrent EDA specializes in FPGA design services, camera distribution and customization (high speed, high data rate, SWIR, etc.), FPGA module distribution and customization, and more. For 20 years, we’ve been transforming custom algorithms into FPGAs and systems on chips (SoCs) and helping companies solve data processing challenges in mission-critical applications. Customers include the U.S. military (including the U.S. Navy and U.S. Army), related agencies such as DARPA, major defense contractors, and a variety of prime contractors.

 

The U.S. Navy Afloat Forward Staging Base (Interim) USS Ponce (AFSB(I)-15) conducts an operational demonstration of the Office of Naval Research (ONR)–sponsored Laser Weapon System (LaWS) while deployed to the Arabian Gulf. (Source: U.S. Navy)

 

SWaP-C benefits of FPGA-based solutions:

  • Size
    • As small as 10 x 10 mm for a small FPGA
    • FPGA system on a module (SOM) from 4 x 5 cm
    • Open FPGA cameras
    • Direct RF processing with AMD RFSoCs
  • Weight
    • Typically 50 g for an SOM
    • Efficient processing → less heat → lighter heat sinks
  • Power
    • Hands-down winner over CPUs and GPUs
  • Cost
    • Reduced system integration and maintenance costs
    • Built-in security means no external security devices

Application examples

  • Real-time RF processing in an RFSoC
  • Real-time vision processing
  • Powerful edge AI capabilities

Image processing is a compute-intensive process, even for simple applications. Pairing a machine vision camera with an FPGA allows more computational tasks to occur in the hardware. This delivers the lower-latency processing required for quicker decisions, even at ultra-high speeds. For example, the GigaSens HS 2-2247, a 2.1 MP CoaXPress camera capable of frame rates above 2,200 fps at full frame, will overwhelm even a 10-core i9 CPU, but FPGAs can enable real-time processing at 2,000-plus fps. The ultra-low latency of FPGAs is therefore a major benefit for defense applications that need real-time feedback.
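A quick back-of-the-envelope calculation shows why a CPU struggles with a stream like this. The sketch below assumes 8-bit monochrome pixels; the camera's actual bit depth may differ, so treat the result as an order-of-magnitude estimate:

```python
# Estimate the raw data rate of a 2.1 MP camera at 2,200 fps.
# Bit depth (8 bits/pixel) is an assumption, not a published spec.
pixels_per_frame = 2.1e6
frames_per_second = 2200
bits_per_pixel = 8

bits_per_second = pixels_per_frame * frames_per_second * bits_per_pixel
gbps = bits_per_second / 1e9  # gigabits per second

print(f"Raw stream: {gbps:.1f} Gb/s")  # ≈ 37 Gb/s
```

At roughly 37 Gb/s, the stream sits near the 40 Gb/s ceiling of a four-lane CoaXPress link, leaving a general-purpose CPU essentially no headroom for per-pixel processing.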

Why Latency Cannot Be Tolerated

In any imaging application where real-time feedback is required — whether it’s high speed or high data rate, in defense or on the factory floor — latency cannot be tolerated. This is especially true in mission-critical applications, of course. So what exactly is latency, and why does it matter?

In an example highlighted in the video below, a moving laser is pointed toward a target, and a high-speed camera tracks the laser in the image. In a traditional PC setup, the image sensor captures photons and converts them into an analog voltage, which is then digitized to create an image. A camera-side processor (typically an FPGA) interfaces directly with the image sensor; it performs preprocessing to improve the image and transmits the data over the camera interface.

A frame grabber PCIe card (typically using an FPGA) receives the incoming pixel stream and creates a frame. Once the frame is captured, the frame grabber transfers it across the PCIe bus into system memory (DRAM). From there, a CPU or GPU performs the required processing. In this architecture, the entire frame must be fully buffered, moved into DRAM, and processed by the CPU or GPU. In addition, if control is required, the system sends commands back over the PCIe bus to an I/O card. In some cases, the card may be the frame grabber, but either way, data still traverses the PCIe bus twice.
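The stages above can be expressed as a simple latency budget. The stage times below are purely illustrative placeholders chosen for discussion, not measurements of any specific system; the point is that the stages add up, and every stage after sensor readout is overhead the in-camera approach avoids:

```python
# Illustrative latency budget for a PC-based vision pipeline.
# All stage times are hypothetical round numbers, not measured values.
pc_pipeline_us = {
    "sensor readout + camera preprocessing": 500,
    "camera interface transmission": 300,
    "frame grabber buffering (full frame)": 450,
    "PCIe transfer to DRAM": 200,
    "CPU/GPU processing": 2000,
    "PCIe transfer back to I/O card": 200,
}

total_us = sum(pc_pipeline_us.values())
print(f"End-to-end: {total_us} µs ({total_us / 1000:.2f} ms)")
```

Even with these optimistic placeholder figures, the total lands in the millisecond range, and any contention on the PCIe bus or in the OS scheduler adds jitter on top.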

In this process, end-to-end latency grows into the millisecond range — often tens or even hundreds of milliseconds. From the moment photons hit the image sensor, data must pass through the entire processing pipeline before reaching the I/O. That delay limits how fast the system can react and control a physical process.

Now consider an architecture in which processing is moved into the camera itself, with the image sensor paired with an onboard FPGA. As a pixel stream enters the FPGA, processing takes place in real time, before a full frame is even formed, allowing the system to extract the required information directly from the pixel stream. In the example in the video, the FPGA processes the incoming image data to track a laser beam in real time, significantly reducing latency compared to PC-based processing.
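A minimal sketch of the streaming idea follows (in Python for readability; on the FPGA this would be pipelined logic consuming pixels as they arrive). Instead of buffering a full frame, the tracker accumulates a brightness-weighted centroid row by row, so a position estimate is available as soon as the rows containing the spot have streamed past. The function name and threshold are illustrative, not part of any real product API:

```python
def stream_centroid(lines, threshold=200):
    """Track a bright spot from a row-by-row pixel stream.

    `lines` yields rows of pixel intensities (0-255). The centroid of
    all above-threshold pixels is accumulated incrementally, so no
    full-frame buffer is ever needed.
    """
    sum_x = sum_y = count = 0
    for y, row in enumerate(lines):
        for x, value in enumerate(row):
            if value >= threshold:
                sum_x += x
                sum_y += y
                count += 1
    if count == 0:
        return None  # no bright spot in this frame
    return (sum_x / count, sum_y / count)

# Tiny synthetic "frame": a bright 2x2 spot on a dark background.
frame = [[0] * 8 for _ in range(8)]
for y in (3, 4):
    for x in (5, 6):
        frame[y][x] = 255

print(stream_centroid(iter(frame)))  # (5.5, 3.5)
```

Because each pixel is touched exactly once and only three running sums are kept, the same logic maps naturally onto FPGA fabric, where one pixel can be processed per clock cycle as it leaves the sensor.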

Vision Design Services

Concurrent EDA can help companies using applications that absolutely must have real-time feedback, whether they need camera customization or complete machine vision design services. You can rely on Concurrent for:

  • Processing Within the Camera at 40 Gb/s: Our engineers can embed your algorithms into a GigaSens camera, creating a custom OEM solution tailored to your application. Additionally, we can design custom cameras using any commercial image sensor, many of which are already available on a board.
  • Processing in the Frame Grabber at 40 Gb/s: We can help you achieve processing speeds of 40 Gb/s using custom logic implemented in the frame grabber. This turns any camera into a customized device using one-, two-, or four-lane CoaXPress or Camera Link.
  • Custom Design Using FPGA + Camera Modules: Our team also provides algorithm porting services for your FPGA platform, a service regularly used by OEMs and defense organizations.

FPGA Design Services

Need help turning your algorithms into OEM electronics at speeds ranging from 1 to 200 Gb/s? We utilize the latest FPGA modules, Jetson GPUs, and multicore CPUs. Our AMD Elite–certified engineers can determine which system is best for your application, budget, and deadline while designing and customizing FPGAs tailored to your needs. You can turn to us for:

  • Processing in a System on a Chip FPGA: Modern FPGAs incorporate multiple ARM CPUs capable of running Linux, effectively replacing a PC in a compact form factor. Need a customized OEM FPGA module? We sell more than 500 FPGA modules, including cutting-edge Versal adaptive compute devices that combine CPU, FPGA, and AI capabilities.
  • Processing at the Edge: As an AMD Elite–certified partner, we can specify or customize a solution from AMD’s new Embedded+ platform, which combines a Ryzen quad-core CPU running Linux with a Versal FPGA featuring FPGA logic and AI-specific tiles for edge computing. This 7 x 7 x 3 inch system consumes under 50 W and functions as a mini-PC with a customizable I/O card. One variant supports up to eight GMSL cameras.

Common Image Signal Processing Tasks in Defense

Systems with FPGAs embedded in cameras can handle a wide range of high-performance image signal processing (ISP), delivering real-time processing across the varied lighting and sensor conditions likely to occur with defense applications. In defense applications that involve tracking, targeting, and control, why are ISP tasks important?

ISP tasks help ensure that incoming pixels are consistent and stable over time, which is crucial for real-time decision-making, tracking accuracy, and closed-loop control. Without high-performance ISP, systems might be noisy, biased, or unstable, which could lead to catastrophic consequences in the event of failure. Below is a list of common ISP tasks and why they are relevant in defense:

  • Flat field correction: Corrects pixel-to-pixel sensitivity variations, ensuring a uniform response across the field of view. This removes irregularities or artifacts that could be mistaken for targets.
  • Radiometric normalization: Reduces the differences between images taken at different times or by different sensors to ensure repeatable, consistent images. This maintains detection performance across different environments and enables reliable sensor fusion capabilities.
  • Temporal noise reduction: Reduces temporal and spatial noise by correlating data across image frames, helping improve detection in low-light or night operations and stabilizing the tracking of dim or distant targets while reducing false positives in EO/IR systems.
  • Lens distortion correction: Lens distortion can lead to poor images, which will negatively impact geolocation, sensor fusion, and targeting, for example. ISP corrects lens distortion for a range of use cases, including surveillance, weapons guidance, and missile seeking.
  • Image enhancement: Techniques such as sharpening, histogram equalization, and gamma correction help imaging systems obtain optimal images, maintain consistent interpretation across different operators, and reduce the risk of misinterpretation.
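To make flat field correction concrete, a common two-point scheme divides each raw pixel by the sensor's measured response to uniform illumination, after subtracting a dark frame from both. The sketch below is a simplified illustration using plain Python lists; the function name and calibration values are hypothetical, and a real FPGA implementation would apply the same arithmetic per pixel in fixed-point logic:

```python
def flat_field_correct(raw, flat, dark):
    """Apply two-point flat field correction to one image row.

    raw  - pixel values from the scene
    flat - response to uniform illumination (calibration frame)
    dark - response with the shutter closed (fixed-pattern offset)
    Corrected = (raw - dark) * gain / (flat - dark), where `gain` is
    the mean dark-subtracted flat signal, preserving overall brightness.
    """
    gain = sum(f - d for f, d in zip(flat, dark)) / len(flat)
    return [
        (r - d) * gain / (f - d)
        for r, f, d in zip(raw, flat, dark)
    ]

# A pixel that is half as sensitive as its neighbors under uniform
# light gets boosted back to a uniform response.
dark = [10, 10, 10]
flat = [110, 60, 110]   # middle pixel is half as sensitive
raw  = [60, 35, 60]     # uniform scene seen through that sensor
print(flat_field_correct(raw, flat, dark))
```

After correction, all three pixels report the same value, so a weak pixel can no longer masquerade as a dim target against its neighbors.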

Other ISP techniques relevant in defense applications include feature extraction, black-level subtraction, bad pixel correction, white balancing, color correction, sharpening, motion compensation/image stabilization, laser line/spot isolation, ROI extraction, automatic gain control, and tone mapping. If your business needs high-performance ISP in the FPGA, Concurrent EDA can help.

Faster Response Times in a Low-Power, Compact Form Factor

Imaging systems in the defense space operate under conditions that are drastically different from those on the factory floor. In these applications, imaging and vision systems directly drive action, so latency and jitter can compromise mission success and safety. Putting the processing as close to the camera as possible minimizes the distance data must travel and eliminates unnecessary buffering. FPGA-based solutions deliver the in-camera processing and ultra-fast response times that CPUs and GPUs alone cannot — in a more compact and lower-power solution.

Companies or integrators may default to CPU-GPU architectures for high-speed or high-data-rate applications for several reasons, including the fact that FPGA development requires specialized hardware description language programming in VHDL or Verilog. At Concurrent EDA, this programming is our specialty. We can integrate your high-speed imaging, FPGAs, and high-performance ISP tasks to ensure you are getting accurate, consistent, and stable data in microseconds — not milliseconds. If designing your system for real-time feedback in the defense industry is a requirement, Concurrent EDA is uniquely positioned to help you succeed. Contact us today and learn how we can enable your technologies of today and tomorrow.


Contact Us

Email us or contact us using the web form below!