FPGA Design Services for High-Speed Machine Vision Cameras
Ray Hoare from Concurrent EDA discusses how FPGA technology is transforming machine vision applications. Learn how algorithms can be embedded directly into cameras, frame grabbers, or embedded systems to process images in real time, which reduces latency from milliseconds to microseconds. The video covers applications such as image enhancement, classification, laser tracking, and 3D metrology, and showcases the high-speed GigaSens machine vision camera line.
Ray Hoare:
Hi and welcome. I'm Ray Hoare with Concurrent EDA. We are an FPGA design house that has been doing work in industry for 19 years now, and I would like to take this opportunity to give you a quick overview of some of the things that we do with FPGAs in machine vision.
Ray Hoare:
In this slide, we're showing several options for taking algorithms and moving them into the electronics. The idea is to do the processing on the video or images you're grabbing with your machine vision camera before anything gets written to disk. If you just capture, capture, capture and write it all to disk, you then have to pull it back off of disk to do the processing, and that takes a long time. A lot of times in machine vision, we want to take a picture, make a decision, and respond with control. In this picture we have three different places where we can put algorithms into your machine vision processing. One is actually in the camera, and I have a slide on that with our GigaSens camera. We take the algorithms, go through an FPGA design process, and move them into the camera itself, so that instead of just sending out raw video or raw data, the camera actually does the processing. That's very helpful when you're trying to perceive things. On the right we have a couple of examples.
Ray Hoare:
If I'm trying to enhance my image, there's an example here. We start with a grayscale image, do some enhancement, and turn it into a binary image, and it's actually even clearer when it's pure black and white. If you're trying to see text or edges, we've enhanced the image, and the image going out is not only clearer but ten times smaller, because we're using one bit per pixel instead of ten. In the tire example, it's actually the same picture, but on the left-hand side you can't see the writing on the tire. Then we do something called histogram equalization, which moves some of that content from the dark range into the light range, so we can perceive it and start to read some of the text on the tire.
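As a rough illustration of the two enhancement steps just described, here is a minimal Python/OpenCV sketch. In the camera itself this logic runs as streaming fixed-point pipeline stages in the FPGA, not as host-side Python; the specific OpenCV calls and Otsu threshold below are illustrative assumptions, not the camera's actual implementation.

```python
# Host-side sketch of the enhancement steps described above:
# histogram equalization followed by binarization to a 1-bit image.
import cv2
import numpy as np

def enhance(gray_10bit: np.ndarray):
    """gray_10bit: 2-D array of 10-bit pixel values (0..1023)."""
    # Histogram equalization redistributes dark content toward the visible
    # range so faint detail (like the lettering on the tire) becomes readable.
    # cv2.equalizeHist expects 8-bit input, so scale the 10-bit data down first.
    gray_8bit = (gray_10bit >> 2).astype(np.uint8)
    equalized = cv2.equalizeHist(gray_8bit)

    # Binarization: threshold to pure black and white. Text and edges stay
    # sharp, and each pixel now needs 1 bit instead of 10 (a ~10x reduction).
    _, binary = cv2.threshold(equalized, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return equalized, binary
```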
Ray Hoare:
Image classification is looking at what's in the image and identifying what it is. Laser tracking is finding a laser spot in the image, or tracking something as it moves around, whether that's cells or something in a real environment. In this case we took a laser and danced it around (I'll show you that) and we can pick it out. Or, if you're doing full 3D metrology, in this case we took a laser line, measured the height of the laser, and turned that into a colorized height map. You can see that some of the coins are a little bit higher than the others, and on the paperclips, color has been added to show height.
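To make the laser-line metrology idea concrete, here is a small Python/NumPy sketch of the basic math: find the brightest row in each column, convert that displacement to a height, and color-code the accumulated result. The calibration constant and colormap are placeholders I've assumed for illustration; a real system derives the scale from the laser and camera triangulation geometry.

```python
# Sketch of laser-line 3D metrology: laser position per column -> height -> false color.
import cv2
import numpy as np

MM_PER_PIXEL = 0.05  # hypothetical calibration constant (mm of height per pixel of shift)

def laser_profile(frame: np.ndarray, baseline_row: int) -> np.ndarray:
    """One height profile (mm per column) from a grayscale frame containing a laser line."""
    laser_rows = np.argmax(frame, axis=0)             # brightest row in every column
    return (baseline_row - laser_rows) * MM_PER_PIXEL # displacement from flat surface = height

def colorize(height_map_mm: np.ndarray) -> np.ndarray:
    """Stack of profiles (one row per scan position) -> colorized height image."""
    norm = cv2.normalize(height_map_mm.astype(np.float32), None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)  # taller points get "hotter" colors
```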
Ray Hoare:
We can take these algorithms and move them into the camera. Or, if you have your own camera and you want to do processing, we can put them inside a frame grabber, a card that connects to one or more cameras, grabs the frames, does the processing, and then stores them to disk. A number of frame grabbers let you put custom logic inside the card itself. Or, if you're in an embedded environment, we can put it all down into a chip and deliver that as an OEM product for you.
Ray Hoare:
Here is the GigaSens camera line. There are only two models right now, but we're expanding that: a 1.1 megapixel camera and a 2.1 megapixel camera. The first one has CoaXPress connectors on the back, and the other has 10 Gigabit Ethernet coming out the back. We can put the processing in the camera, in an open FPGA, with whatever algorithms you want; we can do it for you, or you can do it yourself. And you can see we get some crazy fast speeds, hundreds of thousands of frames per second, once we shrink the ROI down.
Ray Hoare:
In this example, we're looking at laser tracking. On the left-hand side (it's a little hard to see) that little device wiggling has a laser on it, and it's pointing over here, moving around this target, and we're showing it on this screen. This was at Photonics West. We're doing the tracking of the laser in the FPGA in real time, so that processing is done... actually, this one was done in the frame grabber, and the software is not doing a thing. We wanted to show that we can put a bounding box around the spot, but we could also just output the XY coordinate. So if you're trying to do measurements, you can send just the measurements; if you're trying to do tracking, you can send just the tracking data. The reason you want to do this as close to the sensor as possible is latency. Latency is the amount of time between when the photons hit the sensor and when you can actually make a decision based on that data, and that delay really matters. If I take the frame, send it to the frame grabber, process it on a CPU, or send it to a GPU, do the processing, and then spit out the data, we're into milliseconds. Whereas if we put it in the camera or in the frame grabber, we can get it down into microseconds, sub-millisecond. Even as the last bit of the frame comes in, we can complete the processing and spit out the result.
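Here is a minimal Python sketch of the "send only the XY coordinate" idea: locate the laser spot and report a couple of numbers instead of shipping the whole frame. In the FPGA version described in the talk, this reduction happens as the pixels stream in; the threshold and centroid method below are assumptions for illustration only.

```python
# Sketch: reduce a full frame to one (x, y) spot coordinate.
import numpy as np

def laser_spot_xy(frame: np.ndarray, threshold: int = 200):
    """Intensity-weighted centroid of pixels above threshold, or None if no spot is found."""
    ys, xs = np.nonzero(frame >= threshold)      # candidate laser pixels
    if xs.size == 0:
        return None
    weights = frame[ys, xs].astype(np.float64)
    x = float(np.average(xs, weights=weights))
    y = float(np.average(ys, weights=weights))
    return x, y                                  # two numbers instead of a frame of pixels
```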
Ray Hoare:
So this can be extremely fast, and that's really important for control loops. If you're trying to track that laser and move some mirrors, the response time, detecting where it is and then controlling the mirrors, is critical. Because the object is in motion, if you take too long, it's gone. You need to be able to respond very quickly, and we can get control loops down into the microseconds, sub-millisecond timing, which is really important for a lot of high-end applications. This is a cute little toy that fits on a demo tabletop, but I'm sure you have other examples, and we'd love to talk with you about them, whether it's additive manufacturing or laser power control: how much power should you put into the laser, whether for welding or additive manufacturing? Or you're trying to move the laser, control the weld, or a variety of other things. This is one example where we've put it into the frame grabber; the same thing can now be done in the cameras.
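As a very rough sketch of the control loop being described, here is one proportional update step in Python, reusing the laser_spot_xy idea from the earlier sketch. The target position, gain, and set_mirror interface are hypothetical placeholders; a real system would use the actual camera/FPGA output and galvo driver, and the total sense-to-actuate delay is the latency discussed above.

```python
# One step of a simple proportional control loop: spot position -> mirror command.
TARGET = (512.0, 512.0)   # desired spot position in pixels (illustrative)
GAIN = 0.01               # proportional gain (illustrative)

def control_step(spot_xy, mirror_xy, set_mirror):
    """spot_xy: (x, y) from the camera/FPGA, or None; mirror_xy: current mirror command."""
    if spot_xy is None:
        return mirror_xy                          # no spot found; hold position
    err_x = TARGET[0] - spot_xy[0]
    err_y = TARGET[1] - spot_xy[1]
    mirror_xy = (mirror_xy[0] + GAIN * err_x,
                 mirror_xy[1] + GAIN * err_y)
    set_mirror(*mirror_xy)                        # hypothetical galvo/mirror interface
    return mirror_xy
```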
Ray Hoare:
Thank you very much. I hope you enjoyed the video. Send us an email. You have my cell. Give me a call. Thanks. Bye.