The ONNX Runtime inference engine can execute ML models in a variety of hardware environments, taking advantage of their neural network acceleration capabilities. Microsoft and Xilinx worked together to integrate ONNX Runtime with the Vitis AI software libraries so that ONNX models can run on Xilinx U250 FPGAs. We are happy to announce the preview release of this capability today. A minimal code sketch follows the links below.

Jump To:
[06:15] Demo by PeakSpeed: orthorectification of satellite imagery

Learn More:
ONNX Runtime https://aka.ms/AIShow/OnnxruntimeGithub
ONNX Runtime + Vitis AI https://aka.ms/AIShow/VitisAi

Follow: https://twitter.com/onnxruntime
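From the application's point of view, targeting the FPGA follows the standard ONNX Runtime Python flow: you request the Vitis AI execution provider when creating the session and fall back to CPU if it is unavailable. The sketch below illustrates that flow; the model file name and input shape are placeholder assumptions for illustration, not details from the episode.

import numpy as np
import onnxruntime as ort

# Request the Vitis AI execution provider, with CPU as a fallback.
# "model.onnx" is a placeholder path, not a model from the demo.
session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)

# Build a dummy input matching the model's first input
# (a hypothetical 1x3x224x224 image tensor is assumed here).
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference; passing None returns all model outputs.
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)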