Job Description

  • Develop perception and detection algorithms for cameras, lidar, and other sensors.
  • Debug and test these algorithms; build deep-learning models for object detection, semantic segmentation, and related tasks.

Requirements

  • Master’s degree or above in Computer Engineering, Software Engineering, Robotics, Automation, or related fields.
  • Strong foundation in mathematics and algorithms, with software development experience; proficient in C++.
  • Familiar with Linux systems, with ROS experience preferred.
  • Familiar with point cloud processing algorithms, such as point cloud filtering, segmentation, clustering, fitting, tracking, etc.
  • Familiar with basic vision algorithms, such as object detection and tracking, lane line detection and tracking, camera calibration, optical flow, distance estimation, etc.
  • Familiar with basic machine learning algorithms, such as optimization methods, traditional classifiers, CNNs, RNNs, GANs, NAS, etc.
  • Practical project experience is preferred.
  • Familiar with lidar 3D point cloud processing and registration, 3D computer vision, SLAM, VIO, and state estimation theory; real-time multi-sensor data processing and fusion (GNSS, IMU, wheel speed sensors, etc.); and filtering techniques such as the EKF, UKF, and particle filter.