Develop and improve computer vision algorithms for:
- Object detection and classification
- Multi-object tracking
- Target re-identification
- Vision-based guidance and localization
Contribute production-quality code in Python and C++ within an existing perception framework.
Optimize and extend current perception pipelines to improve accuracy, latency, and robustness.
Deploy and fine-tune models for real-time performance on embedded/edge hardware platforms.
Collaborate with the Perception Lead and cross-functional teams (controls, navigation, hardware) to integrate and validate perception modules.
Support field testing and hardware integration activities, troubleshooting real-world performance issues.
Perform performance profiling, debugging, and root-cause analysis under real-time and resource constraints.
Contribute technical input to feature refinement and continuous improvement of the perception stack.
Stay current with relevant computer vision advancements and apply practical improvements where appropriate.
Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Robotics, or a related field.
7+ years of experience developing computer vision systems in production environments.
Strong hands-on experience with object detection and tracking in real-time systems.
Advanced proficiency in Python and strong experience with modern C++.
Experience deploying and optimizing CV/ML models on embedded or edge compute platforms.
Solid understanding of performance optimization, system integration, and debugging in complex software systems.
Experience in robotics, autonomous systems, aerospace, or defense environments is highly preferred.
Proven ability to troubleshoot and improve deployed systems under operational constraints.
Strong collaboration and communication skills within multidisciplinary engineering teams.
Comfortable working in fast-paced, execution-focused environments.
Willingness to travel as required for integration and testing activities.