The ONNX Runtime inference engine can execute ML models across a range of hardware environments, taking advantage of each platform's neural network acceleration capabilities. Microsoft and Xilinx worked together to integrate ONNX Runtime with the Vitis AI software libraries so that ONNX models can run on Xilinx U250 FPGAs. We are happy to introduce the preview release of this capability today.

Jump To:
[06:15] Demo by PeakSpeed of satellite imaging orthorectification

Learn More:
ONNX Runtime: https://aka.ms/AIShow/OnnxruntimeGithub
ONNX Runtime + Vitis AI: https://aka.ms/AIShow/VitisAi

Follow: https://twitter.com/onnxruntime
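To give a sense of how this looks from application code, here is a minimal sketch of running an ONNX model through ONNX Runtime while requesting the Vitis AI execution provider. It assumes an ONNX Runtime build with Vitis AI support enabled and the Vitis AI runtime installed on the host; "model.onnx" is a placeholder path, and the dummy input is only for illustration.

```python
import numpy as np
import onnxruntime as ort

# Request the Vitis AI execution provider first; ONNX Runtime falls
# back to the CPU provider for any operators the FPGA cannot handle.
session = ort.InferenceSession(
    "model.onnx",  # placeholder: path to your ONNX model
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)

# Build a dummy input matching the model's first input, substituting 1
# for any symbolic (dynamic) dimensions.
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.random.rand(*shape).astype(np.float32)

# Run inference; passing None returns all model outputs.
outputs = session.run(None, {input_meta.name: dummy})
print([o.shape for o in outputs])
```

Because the provider list is just a preference order, the same script runs unchanged on a machine without the FPGA, which makes it easy to develop on CPU and deploy to the U250.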