With HALCON 18.05, you are able to perform deep-learning inference on a CPU.
This CPU inference has been highly optimized for Intel®-compatible x86 CPUs. In tests, a standard Intel CPU using 8 threads achieved inference times comparable to a midrange GPU.
Removing the need for a dedicated GPU greatly increases operational flexibility. For example, industrial PCs, which usually cannot house large and powerful GPUs, can now easily be used for deep-learning-powered classification (inference).
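Based on the deep learning classification operators available in this HALCON release, selecting the CPU for inference is a single parameter change. The following HDevelop sketch assumes one of the pretrained classifiers shipped with HALCON and a placeholder image name:

```hdevelop
* Load a pretrained deep learning classifier shipped with HALCON
read_dl_classifier ('pretrained_dl_classifier_compact.hdl', DLClassifierHandle)
* Select the CPU runtime instead of a GPU
set_dl_classifier_param (DLClassifierHandle, 'runtime', 'cpu')
* Trigger runtime initialization immediately so the first
* classification call is not slowed down by setup work
set_dl_classifier_param (DLClassifierHandle, 'runtime_init', 'immediately')
* Classify an image (preprocessing to the network's expected
* input size and gray value range is omitted here)
read_image (Image, 'example_image')
apply_dl_classifier (Image, DLClassifierHandle, DLClassifierResultHandle)
get_dl_classifier_result (DLClassifierResultHandle, 'all', 'predicted_classes', PredictedClasses)
```

In a real application, the image would first be zoomed and converted to match the network's input dimensions (queryable via `get_dl_classifier_param`) before calling `apply_dl_classifier`.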
Other main improvements:
- Improved Bar Code Reader
- Enhanced Deflectometry
- 3D Improvements
- Support for hypercentric lenses
- HDevEngine Improvements