Addition To Our RRL
A laser-based machine vision system for instant image recognition has been proposed in which 3D position is detected using an infrared laser ranging sensor. Combined with the proposed image analysis algorithm, the method can be applied in the detection field. The system is an active triangulation-based ranging arrangement composed of several independent laser units, each of which generates a light-scattering sheet that is projected onto the detected object [4]. Another study by Nirmalya et al. proposed a development and path-planning scheme for mobile robots that uses image processing and Q-learning for indoor environment navigation. It plans the shortest path from the current state to the goal state using images captured from the ceiling of the indoor environment. Template matching is used to locate the mobile robot in the captured image, which is then processed in MATLAB. MATLAB detects the robot's position and any obstacles present within the map; the robot's goal position is fed into the MATLAB environment, and the software then generates a Q-learning path based on the processed image [4].
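The Q-learning planner summarized above can be illustrated with a minimal tabular sketch. The grid map, obstacle cells, rewards, and hyperparameters below are illustrative assumptions, not the values used by Nirmalya et al.; the obstacle set stands in for what their template-matching step would extract from the ceiling image:

```python
import random

# Minimal tabular Q-learning on a small grid map (a sketch; the grid,
# rewards, and hyperparameters are illustrative assumptions).
GRID_W, GRID_H = 5, 5
OBSTACLES = {(1, 1), (2, 3), (3, 1)}          # cells flagged as obstacles
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # four grid moves

def step(state, action):
    """Apply an action; moves into walls or obstacles leave the state unchanged."""
    nx, ny = state[0] + action[0], state[1] + action[1]
    if not (0 <= nx < GRID_W and 0 <= ny < GRID_H) or (nx, ny) in OBSTACLES:
        return state, -1.0        # penalty for an invalid move
    if (nx, ny) == GOAL:
        return (nx, ny), 10.0     # reward for reaching the goal
    return (nx, ny), -0.1         # small step cost favors short paths

def train(episodes=3000, alpha=0.5, gamma=0.9, epsilon=0.2):
    q = {}                        # Q-table: (state, action) -> value
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(100):
            if random.random() < epsilon:
                action = random.choice(ACTIONS)   # explore
            else:                                  # exploit best known action
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            # Standard Q-learning update rule
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
            if state == GOAL:
                break
    return q

def greedy_path(q, start=(0, 0), max_steps=50):
    """Follow the learned Q-values greedily from start toward the goal."""
    path, state = [start], start
    for _ in range(max_steps):
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        state, _ = step(state, action)
        path.append(state)
        if state == GOAL:
            break
    return path
```

After training, `greedy_path` recovers the shortest collision-free route, playing the role of the path that the MATLAB software generates from the processed ceiling image.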
A cost-effective and robust surveillance robot was constructed with a microcontroller, a motor shield, and a mobile platform running the Android operating system. The robot is equipped with a video camera and a Wi-Fi link, and the operator controls the robot's movement through the mobile control platform. At the same time, the smartphone camera streams video feedback to the remote operator over the internet, allowing the operator to navigate the robot from a distance [5].
Kroeger et al. presented a method for automated and precise calibration to enable vision-based robot control with a multi-camera setup. The method has three components: intrinsic calibration of each individual camera, extrinsic calibration of each individual camera, and determination of the camera-to-robot relationship. In general, camera calibration entails determining the camera's intrinsic parameters and distortion coefficients; unless the camera lens is changed, these parameters remain constant. Extrinsic parameters in multi-camera systems include the relative rotation and translation between the cameras, which are required in depth-estimation applications [6].
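The roles of the intrinsic and extrinsic parameters can be shown with a minimal pinhole-projection sketch. The matrix values below are illustrative assumptions, not parameters produced by the calibration procedure of [6]:

```python
import numpy as np

# Pinhole camera model: a world point is mapped to pixel coordinates by
# the extrinsics (R, t: world -> camera frame) and the intrinsic matrix K.
# All numeric values here are illustrative assumptions.
K = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 800.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera axes aligned with world axes
t = np.array([0.0, 0.0, 2.0])          # camera offset 2 m along the optical axis

def project(point_world):
    """Project a 3D world point into pixel coordinates (u, v)."""
    p_cam = R @ point_world + t        # extrinsic transform: world -> camera
    uvw = K @ p_cam                    # intrinsic projection
    return uvw[:2] / uvw[2]            # perspective divide
```

A point on the optical axis projects to the principal point `(cx, cy)`; the intrinsics stay fixed when the camera is moved, while `R` and `t` change with each relative camera pose, which is what makes the separate intrinsic and extrinsic calibration stages in the method worthwhile.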
[1] W. Xiong, “Research on Fire Detection and Image Information Processing System Based on Image
Processing,” Proceedings - 2020 International Conference on Advance in Ambient Computing and
Intelligence, ICAACI 2020, pp. 106–109, Sep. 2020, doi: 10.1109/ICAACI50733.2020.00027.
[2] C.-Y. Lu, C.-C. Kao, Y.-H. Lu, and J.-G. Juang, “Application of Path Planning and Image Processing for Rescue
Robots,” Sensors and Materials, vol. 34, no. 1, pp. 65–80, 2022, doi: 10.18494/SAM.2022.3546.
[3] E. Liu, “Research on video smoke recognition based on dynamic image segmentation detection
technology,” Proceedings - 2019 12th International Conference on Intelligent Computation Technology
and Automation, ICICTA 2019, pp. 240–243, Oct. 2019, doi: 10.1109/ICICTA49267.2019.00058.
[5] J. Azeta et al., “An Android Based Mobile Robot for Monitoring and Surveillance,” Procedia Manuf, vol.
35, pp. 1129–1134, Jan. 2019, doi: 10.1016/J.PROMFG.2019.06.066.
[6] O. Kroeger, J. Huegle, and C. A. Niebuhr, “An automatic calibration approach for a multi-camera-robot
system,” IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, vol.
2019-September, pp. 1515–1518, Sep. 2019, doi: 10.1109/ETFA.2019.8869522.