Episode 38
Ray Hoare
Concurrent EDA

 

On this episode of Manufacturing Matters, Ray Hoare, CEO of Concurrent EDA, joined Jimmy Carroll and John Lewis at SPIE Photonics West 2024 to dive into the latest developments and trends in industrial computing and processing, with a particular focus on FPGAs and their role in edge AI. Topics included pushing AI capabilities directly onto cameras, DPU processing, and exciting developments from AMD that are set to drive AI capabilities forward. Additional topics included GPUs vs. FPGAs for edge AI, high-resolution cameras, custom camera design, and more.


Jimmy:
Hi, everybody. My name is Jimmy Carroll. I'm the vice president of operations at Tech B2B Marketing here at day two of SPIE Photonics West 2024, and I have the pleasure of being joined by Ray Hoare of Concurrent EDA. Ray, thanks so much for taking the time.

Ray:
I appreciate the opportunity.

Jimmy:
Yeah, of course. And also I'm joined by my colleague John Lewis. So Ray, for those who don't know, could you tell the audience a little bit about Concurrent EDA and what you guys do there?

Ray:
So for the past 17 years, we've been taking algorithms and moving them into FPGAs. So methods of compute and moving them to FPGAs, and FPGAs are field programmable gate arrays that enable you to do compute. And we've evolved with the chips and the technology. And now it's system-on-a-chip. And so it's hardware and software all in a nice package. And we embed that into cameras as well as devices.

Jimmy:
Okay. Cool. So I've got some follow-up questions both about FPGAs and processing in general. But I want to talk about AI because it's everywhere, literally and figuratively, with marketing hype in the industrial world and on the consumer side. We were talking about this yesterday, John. It was maybe sometime around Vision Stuttgart 2016 when the AI hype cycle, and specifically deep learning, really took off, and it was being touted as maybe something it's not: it's magic, it's going to solve everything. But these days, deep learning has sort of settled in as a very useful tool that helps augment existing computer vision and machine vision systems in a number of different ways. So I'm just curious, from your perspective, what are some ways that you've seen your customers and friends in the industry use AI in their applications?

Ray:
If we think about AI, what does that exactly mean? So I think of AI as putting in compute and extracting features and characteristics from the image in real time. So from our perspective, AI is artificial intelligence. It's something that we can embed in computers that helps us make decisions and filter down that data. So for example, if you have a high-speed camera and you want to capture all the data and you want to do processing on it, that is really a fire hose of data and you have to filter it; otherwise you're going to put it onto a disk, which means you've got lots of disks, and then you're going to wait a while and then you're going to process it. So with compute, we've got to move it toward the edge. And AI is a great way to do that. There's a bunch of methods people have used to classify images: Is this a defect? Is this not a defect? So we see that moving into the cameras.

Jimmy:
John, I don't want to monopolize.

John:
Yeah, I was just curious. I've heard of techniques where you're combining, say, deep learning with traditional machine vision tools, for example, finding a scratch or a dent in the surface finish of a piece of metal, a stamped component or polished component.

John:
And it's hard to quantify what that dent or scratch looks like with traditional tools. But with AI or deep learning, you can identify it and flag it, and then you can maybe go to a traditional gauging tool or measure it and see if it's too big or too small. If it's small enough, maybe it's not a fail or a flawed product. If it's too big, then it would be. Is there a way on these new edge chips where you can combine the traditional tools with the AI or are they handled on separate chips?

Ray:
Yeah. So this is where things are getting interesting, because right now we're at 28 nanometers. That's the size of the transistor. And now we're going four times smaller, to seven nanometers. And we've gone from FPGAs to what we're now calling adaptive compute devices. And so FPGAs are dead; long live the FPGAs. We just call them adaptive compute devices now because they're not just glue logic. They're not just a sensor interface to a camera interface. Now we have thousands of compute cores inside the chip that we can do processing with. And can we do everything inside the chip? Maybe. Maybe not. It depends on what you're trying to do. But as the resolutions get bigger, we have to get that firehose of data down to something that's reasonable, so that even a big honking GPU can handle it. So there's always going to be some edge filtering, processing, finding the scratch. We can do convolutions, which are a way of looking at regions of pixels, extracting features, and then saying, Aha, I found something, and filtering that data. So we're filtering the key features and then sending them to the more complex AI back on the PC. But if we don't do that, you can't handle it on the back end.
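
To make the convolution idea concrete, here is a minimal sketch, in plain C++, of the kind of 3x3 convolution pass that flags candidate features and throws everything else away. It is not Concurrent EDA's code; the kernel values and the threshold are illustrative assumptions.

#include <cstdint>
#include <cstdlib>
#include <utility>
#include <vector>

// Minimal 3x3 convolution pass: slide a kernel over the image and keep only
// the pixels whose response exceeds a threshold, i.e. "Aha, I found something."
// Kernel values and threshold are illustrative, not production settings.
std::vector<std::pair<int, int>> findFeatures(const std::vector<uint8_t>& img,
                                              int width, int height,
                                              int threshold) {
    // Simple Laplacian-style kernel that responds to edges and scratches.
    const int k[3][3] = { { -1, -1, -1 },
                          { -1,  8, -1 },
                          { -1, -1, -1 } };
    std::vector<std::pair<int, int>> hits;   // (x, y) of candidate features
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int acc = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += k[dy + 1][dx + 1] * img[(y + dy) * width + (x + dx)];
            if (std::abs(acc) > threshold)
                hits.push_back({x, y});      // keep only the key features
        }
    }
    return hits;   // send this list (or small regions around it), not the frame
}

Only the hit list, or small regions cut out around the hits, goes back to the PC; that is the filtering being described here.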

Jimmy:
Ray, you mentioned GPUs, and that's something I wanted to ask about. So there have been a lot of developments there, particularly from Nvidia, with Jetson becoming so popular, and GPUs might get a lot of the love when it comes to AI processing at the edge. But what are some other options out there?

Ray:
Yeah, that's a good question. GPUs are great when they're in a computer, and they have lots of cores and they consume lots of power. And Jetson is a method to get them to the edge. But from an ops-per-watt perspective, FPGAs and these adaptive compute devices are far superior. There's no comparison. So if you look at compute per watt, FPGAs are dominant. And when you get a Jetson, you've got a few cores in there, and that's good. And they're easy to use, but they're going to be hotter. They're not going to give you as much. It has its niche, but if you really want the compute power, you've got to go to an adaptive compute device. And AMD is pushing that this next year. That's what you're going to hear this year from AMD: it's AI, AI, AI. And inside of these adaptive compute devices, they're called Versal, there are AI engines. And so they go from 50 AI engines up to 400 AI engines in the FPGA, the adaptive compute, plus two cores, plus the transceivers. So everything that you really need to take from the PC and move to the edge, it's getting there.

Jimmy:
So maybe this is just my perception. And if I'm wrong, please do correct me. Why are GPUs seemingly so popular? And why aren't these FPGAs being deployed more? Is it because of the difficulty of VHDL programming, and if so, what can people do?

Ray:
Yeah, so those are accurate. GPUs are really widely available. There's an actual programming language. It's a variation of C++. And so you can program it and you can get used to it. And they've extracted parallelism. So they have all this parallel processing. They can essentially run a bunch of threads on the data, and it's programming, and we've done GPU programming, and we do that as a service. So primarily we're a services organization. We have a few product lines, but we really like taking those algorithms and moving them to the edge. So it's a pain. Let me just be straight up. The tools are getting better. And we have something called high-level synthesis, which means we can write in a variation of C code, C code with some pragmas, low level. But we can extract the parallelism. We can write it in this high-level language with C, but with parallelism in it, and we can actually get down to the chip. So there's lots of things we can do there. But there are also layers coming with FPGAs now that are making it easier as well.
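
As a rough illustration of "C code with some pragmas," here is a sketch in the style of AMD's Vitis HLS. The function and loop are made up for illustration, and pragma syntax varies by tool; this is not any particular production kernel.

#include <cstdint>

// HLS-style sketch: a per-pixel threshold over one image line. The pragma asks
// the synthesis tool to pipeline the loop so a new pixel enters every clock
// cycle -- that is the parallelism expressed directly in the C code. Pragma
// syntax here follows AMD Vitis HLS; other tools use different directives.
void threshold_line(const uint8_t in[1024], uint8_t out[1024], uint8_t level) {
    for (int i = 0; i < 1024; ++i) {
#pragma HLS PIPELINE II=1
        out[i] = (in[i] > level) ? 255 : 0;
    }
}

As ordinary C++ this compiles and runs; under a synthesis tool the same source becomes a pipelined hardware loop, which is the point of writing it this way.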

Jimmy:
I don't want to ask too many questions in a row, John, but if you don't mind, I just want to ask: have chips like the ones from AMD, or the ones that are out there now, enabled people to push AI capabilities more directly onto the camera?

Ray:
Yes, yes. So right now we have a camera that's a high-speed camera that can do 2,000 frames per second, and we can do more processing in the camera than we can on a high-end Intel. Now, that requires us as engineers to do that. And that's fine. We take people's algorithms, and that's what we do. So that's our specialty, right? So if somebody has got some crazy algorithm, call me, because this is what I love. It can't be too hairy or nasty, because I'm like, Ooh, what's that? You know, I like that. Ooh, this is fun. But we can then say, ah, okay, maybe trim this, trim this, and we can do that part in the camera and we can do it at this frame rate. That way we can do the processing and respond to what we see in the image really quickly. So if you think of a classical control loop, I have a stimulus coming in, I do some processing, I have some control going out, so that loop of processing and control has to be really, really fast. So if I'm trying to guide something mechanical, I'm trying to change some power . . .

Ray:
I can do that in an FPGA really, really quickly between frames, even at 2,000 frames per second. So we can handle 40 gig of processing, do that processing, do that response, control power, control actuators really quickly before the next frame comes in. Now, if you take that all the way back to a computer, well, you've got to go from the camera to the transceivers to the frame grabber to the processor, over the PCIe bus, back to the frame grabber, and then back to what you're controlling. So if you really want that control loop to be really, really quick, you've got to push it to the edge, and that's really where the value add is, because I can't wait that long. I have to respond. And in reality, if you can do it in software, I tell my customers, do it in software. Don't put it in an FPGA. But if you can't, if you're running over a gigabit per second and over a giga-operation per second, you have to go to a GPU or an FPGA or an adaptive compute module.
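
A quick back-of-the-envelope on that latency budget, as a sketch: at 2,000 frames per second the next frame arrives every 500 microseconds, so the whole sense-process-actuate loop has to fit inside that window. The processing and round-trip numbers below are illustrative assumptions, not measurements.

#include <cstdio>

// Frame budget at 2,000 fps versus an in-camera loop and a PC round trip.
// All timing values except the frame rate are illustrative assumptions.
int main() {
    const double frame_rate_hz    = 2000.0;
    const double frame_period_us  = 1e6 / frame_rate_hz;  // 500 us per frame
    const double edge_process_us  = 80.0;    // assumed in-FPGA processing
    const double edge_control_us  = 20.0;    // assumed actuator/power update
    const double pc_round_trip_us = 2000.0;  // assumed camera->PCIe->PC->back

    std::printf("budget per frame: %.0f us\n", frame_period_us);
    std::printf("edge loop:        %.0f us (fits in one frame)\n",
                edge_process_us + edge_control_us);
    std::printf("PC round trip:    %.0f us (spans %d frame periods)\n",
                pc_round_trip_us, (int)(pc_round_trip_us / frame_period_us));
    return 0;
}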

John:
So the benefit is response time and speed. Can you talk a little bit about some of the applications where that kind of speed and response times are required?

Ray:
Sure. So some of the applications, if you're controlling a laser and you want to change . . . classically, melt pool monitoring is a good application, where I'm trying to control the power of the laser. And the more power I put into the laser, that pool gets bigger or smaller. Maybe I didn't put enough power in. So I can detect that, and I can push it back. That's an example. Or if I see an event and I want to trigger based off of that event, maybe I'm going to trigger some other system, and we're doing the processing; we've seen that. We've seen it with things that move very quickly that we want to capture and react to, and there are some things in the defense space that we care about there.

John:
That makes sense.

Jimmy:
Yeah, that's what I thought you might have been talking about. Beyond AI, what are some of the applications? You mentioned that frame rates are getting higher and resolutions are getting higher. And a lot of these applications . . . we spoke to somebody yesterday in the AR/VR space and some folks in hyperspectral, and you're dealing with a lot of data. What are some other applications and folks that you've worked with where large amounts of data are required and you've been able to help them?

Ray:
So one of our demonstrations is a 3D extraction. So 3D metrology. So you're passing a laser over an object, and what you're doing is you're actually detecting the height of the laser and calculating that, because you've got the camera facing down and you've got the laser coming in at an angle, or the other way around. It doesn't matter, but they're at different angles. And then we can compute, based off the pixel height: Oh, a profile at that one point, and then the object moves. And so what we did was we put all of that in the camera. So we're not actually sending back the image at all. We're just sending back the height. So all of a sudden you've got a 2D image that we've processed down into a one-dimensional set of values that is part of the image. So now you send back the height map rather than sending back the image. And so we've taken the data, done the calculations, and you can tune the calculations, like, Oh, I want this algorithm to be parameterizable. Sure. Can you parameterize C code? Of course you can. Well, that's a register. You put that in there, I can tune the register. And so we can do different things. We can do different algorithms. Center of gravity. We can do other ways of detecting height and getting rid of noise.
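
Here is a minimal sketch of the kind of per-column center-of-gravity extraction being described, collapsing a 2D laser-line frame into a 1D height profile. The names and the tunable threshold (standing in for the register mentioned above) are illustrative assumptions, not Concurrent EDA's implementation.

#include <cstdint>
#include <vector>

// Laser-line profiler sketch: for each column, find the brightness-weighted
// centroid (center of gravity) of the laser line. The 2D frame collapses to
// a 1D profile, so only the profile is sent back instead of the image.
// "threshold" plays the role of the tunable register in the text above.
std::vector<float> extractProfile(const std::vector<uint8_t>& frame,
                                  int width, int height, uint8_t threshold) {
    std::vector<float> profile(width, -1.0f);        // -1 means no laser found
    for (int x = 0; x < width; ++x) {
        float weightedSum = 0.0f, weight = 0.0f;
        for (int y = 0; y < height; ++y) {
            uint8_t p = frame[y * width + x];
            if (p > threshold) {                      // ignore background noise
                weightedSum += static_cast<float>(p) * y;
                weight      += p;
            }
        }
        if (weight > 0.0f)
            profile[x] = weightedSum / weight;        // sub-pixel row of the line
    }
    return profile;  // row position maps to physical height via the known angle
}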

Jimmy:
When you say things like, "Can you parameterize C code? Sure!" like it's obvious. But what else? John, any follow-up there?

John:
So you're taking, like, you could output a volume. Profile. Calculate whatever volume it is based off the plane.

Ray:
Right. So if you think about where sensors are going, we're seeing 150-megapixel images. They're running five, 10 frames a second. Well, that's a lot of pixels. We're into the gigapixel per second, right? So it's like, wait a second. Who's going to process that? I mean, do you really need all of that? Who's going to look at that? Well, you need it for a certain snapshot in time. You need certain key pieces of it. And so if we're talking about AI and how I grab that piece of information and put it to the AI engine, either we put it at the edge or we're extracting the things of interest, we're filtering it down, because we're not going to . . . our AI engine can't handle it.

John:
I wouldn't send all the data. I just want to send the important data.

Ray:
So AI, just so everybody knows: we essentially are extracting every feature we can think of that we know how to compute. This is very simplified, so don't shoot me later with some nasty comment: That's not AI, he doesn't know what he's talking about. It's simplified, just at a high level. If we look at AlexNet or some of the other inference networks, we can extract features from the image. So let's say I'm looking for an edge. I'm looking for a corner. Well, I may be looking for an edge here or here or here or here or here. Okay. So those are edge extraction passes on the image. So then I take this one frame and I'm making another frame and another frame and another frame, another frame, another frame. And I'm making 60 different frames with 60 different features. And then I'm saying, okay, well, maybe I want features of those features combined together, and then I want to finally put weights on them. You've taken one frame of an image and you're turning it into hundreds of frames that you're processing. So the compute load on AI is enormous. So they always use the smaller images so you can actually do it. You can't take a 150-megapixel image and say, Oh, I'm going to put this in my AI engine. There's no way. I mean, you could, but it's going to take forever, literally. So you've got to break it down into different regions. So if we push to the edge, we can push some of that compute, find the gold, and then find out what kind of gold it is.
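
A rough sense of that compute load, as a sketch with illustrative numbers (a 150-megapixel frame, 3x3 convolutions, 60 feature maps), showing why full-resolution frames get tiled or filtered before any AI engine sees them.

#include <cstdio>

// Back-of-the-envelope for the feature explosion described above: each 3x3
// convolution pass costs roughly 9 multiply-accumulates per pixel, and one
// input frame fans out into dozens of feature maps. All numbers are
// illustrative assumptions.
int main() {
    const double pixels       = 150e6;  // one 150-megapixel frame
    const double macsPerPixel = 9.0;    // one 3x3 convolution pass
    const double featureMaps  = 60.0;   // "60 different frames with 60 features"
    const double opsPerFrame  = pixels * macsPerPixel * featureMaps;

    std::printf("ops for one layer of feature maps: %.1e\n", opsPerFrame);
    // Roughly 8e10 operations, and later layers combine features of features,
    // so the total per frame is far higher.
    return 0;
}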

John:
Are there any applications, say, in barcode reading for this type of edge computing? Because you're talking about edge detection, and with a data matrix code you'd have to find your pattern, the L-shape. And in logistics and warehousing, these things are flying by at hundreds of meters a minute, and they have to be very quick. And they're taking a whole high-res image, let's say, because they have a big, wide belt that they need to cover. But there's only a barcode here and maybe one here, so they can find the barcode, extract that data, and then do the decoding.

Ray:
Exactly, exactly. So if I'm doing edge detection, I'm looking for lines, or maybe actually I'm looking for two lines that are close to each other within a certain number of pixels, which is typically how they're going to do it, because that gives you a local window operator. You can push that over the whole image. You're like, Aha, I found something. And then you say, well, I found something in this region. And then you can extract that image and send it to the back-end processor. Or in these adaptive compute devices, I now have two processors on the chip. Maybe I want to do it there. Absolutely, barcode is actually one of the easier ones because it's nice and regular. But if we're looking at cell deformation, or looking into bio, like, okay, well, I'm trying to find the circularity of a cell. So they do this: they shoot a cell into this pressure chamber, essentially, on a chip. And they see how the cell deforms. And then they know, and I don't know this, but they know that if you deform it under this certain pressure, this kind of cell stays more round. This one goes flatter. And they can characterize the circularity of that cell going down. So it's really cool stuff. All at really crazy high frame rates, right?
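
As a sketch of that local window operator for barcode localization: scan a line of pixels for two strong, opposite-sign edges within a small window of each other, the signature of a bar against its background. The thresholds and window size are illustrative assumptions, not any vendor's algorithm.

#include <cstdint>
#include <cstdlib>
#include <vector>

// Window-operator sketch: along one scanline, look for a dark-to-light edge
// followed by a light-to-dark edge (or vice versa) within "window" pixels of
// each other. Two close, opposite edges hint that a bar is present, so only
// those regions need to be extracted and sent on for decoding.
std::vector<int> findBarCandidates(const std::vector<uint8_t>& row,
                                   int window, int edgeThreshold) {
    std::vector<int> candidates;                     // x positions of likely bars
    for (size_t x = 1; x + window < row.size(); ++x) {
        int firstEdge = (int)row[x] - (int)row[x - 1];
        if (std::abs(firstEdge) < edgeThreshold) continue;
        for (int d = 1; d <= window; ++d) {          // look for the paired edge
            int secondEdge = (int)row[x + d] - (int)row[x + d - 1];
            if (std::abs(secondEdge) >= edgeThreshold &&
                (firstEdge > 0) != (secondEdge > 0)) {
                candidates.push_back((int)x);        // only this region goes on
                break;
            }
        }
    }
    return candidates;
}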

John:
Yeah. So you're saying maybe AI isn't required for barcodes because it's too regular.

Ray:
It's too regular. Yeah. It's well established, and there are algorithms to do that. So AI is really exciting when you want to do a classifier. And a classifier means, Hey, I want to know, is it one of these five things? And this one may be none of the above, right? And that's okay. And so we're going to say, okay, well, does it look like a car or a person or an animal or a tree? I mean, this is just simplified, right? So I can classify that image and I can say, well, it's most like this, and I can have a certainty that it's, Oh, that's definitely it. And so we're looking at certainty, and that's where all these AI models and training come in. So we can say, Aha, I can definitely classify with high certainty: this is what it is. And do we know how we did it? No. That's the cool part, right? It's like statistics, right? Without actually knowing how we did it, we're doing training, and it's looking at all these features and adding weights to these features. And the system, through training, is doing that, and it's really cool. So you can classify all sorts of things. So if you need to do a classifier, AI is perfect.

Jimmy:
You mentioned AMD, and that this year is going to be a lot about AI. But I'm curious, in terms of AI, about any other predictions you might have. Like, is there anything you see as being a market that's kind of ripe for adoption, or increased adoption, when it comes to AI?

Ray:
That's a good question. It's quite a huge field.

Jimmy:
In industrial.

Ray:
Yeah. Well, we work with certain customers, so I can't speak to all of those things. That's usually where I get my insight. So I'll take a pass on that one. No crystal ball. We work with a lot of customers, and AI is definitely . . . and it's a mix. It's not just AI by itself. It's front-end processing. So the thing about AI is, if I give you a noisy image or I give you a bad image, maybe bad lighting, or at Photonics West, if you don't have good lighting and you don't have good lenses, you've got a crappy image, and then the processing has to deal with the crappy image. Okay, well, we need to then do everything we can to improve that image. And there's some front-end processing we can do. Maybe we're smoothing things out. Maybe we know in the environment that I want to get rid of something. I want to even out my light. And we can do that deterministically, without AI. But then we're getting it all ready. So now my AI has a cleaner image and it can do a better job. So: garbage in, garbage out. So we definitely see more image processing, traditional image processing, going into the camera.

Ray:
While that may not be AI, it's absolutely necessary for AI. And then with AMD, with their adaptive compute, there's a plethora of stuff out there. So if you take PyTorch algorithms, PyTorch being a framework you can define AI or deep neural nets in, you can actually compile that into what's called a DPU, which is a DNN processing unit inside the FPGA. So we can actually be right in there. We've done this for a number of algorithms. We looked at some vibration analysis, and, Oh wow, we can actually predict when something's going to fail based off the vibration that we're seeing now. And that's kind of obvious. But then you're like, well, how do I know? Well, I don't know. We trained the system. This is the cool part. It's not like we're going in blind, but we know that there's something going on here. We're not really sure. Maybe we can train it to classify it. Is it this, is it that? And you can. And that's the cool part. That's why I think everybody's like, Oh my, it's going to solve everything. We're going to just be sitting on our couches.

Jimmy:
And these models are only as good as the data they're trained with. We talked about this yesterday, but there's so much talk, mainly about robots, of "Automation technologies are going to take our jobs." Well, no, not really. It still requires human input. Human in the loop. People to operate these systems, troubleshoot them.

Ray:
We're going to just do more. That's what we're going to end up doing. Because we have the workforce. We're going to apply the workforce. So maybe we're going to be doing more intelligent things. And then there are some things that robots are just not good at yet. You look at food service. There are all sorts of things . . . that human interaction, we need that. And it's not that it's only food service, but that's a great example of something where you can't get a robot in there, and maybe it's going to deliver your food on a little robot. We've seen that. I'm like, Oh my gosh, really?

John:
I want a robot that can fold my clothes.

Jimmy:
Well, and if it was slow it wouldn't matter as long as I don't have to do it.

Ray:
It'd be great. Just make it magic. We want some magic.

Jimmy:
Yeah. How about a robot that can plow my driveway? Snow blow?

Ray:
Oh, yeah.

Jimmy:
Right. That'd be nice. Right?

Ray:
While it's snowing and I'm sleeping. Yeah, I come up and my driveway is all done. That'd be great.

Jimmy:
Uh, Ray, anything else that we didn't ask about? We've covered a good amount here.

Ray:
So there really are some cool products coming out with high-resolution images. SVS has a 151-megapixel camera. They also have a 600-frame-per-second camera that's off a 25GigE interface, so they're pushing the envelope, higher speeds. We have what we call a Gigasense camera, where we can actually put compute inside the camera, and we're just, Hey, what would you like to do? So this is the exciting part for us: I don't know how fast you want to go. If we ROI down, we can do 80,000 frames a second and do compute at the same time. I don't care. So this is the fun stuff. So if you have gigabit-per-second processing to do, this is what we do. And this is the fun stuff. So I'd love to talk with people, you know, just to give advice or feedback. And then if they want to move forward with high speed, that's what we do. We love it.

John:
Yeah. How can people get in touch with you?

Ray:
They can send an email to info@concurrenteda.com, and myself and someone else both get it, so that way I don't drop the ball.

Jimmy:
And obviously he said it, but you can visit concurrenteda.com to learn more. And yeah, if you have any questions for us or for Ray, we'd be happy to pass them along. It's manufacturing-matters.com. Reach out anytime. Questions, comments, concerns, whatever. So thanks for listening or watching.
