Subject Areas: Journal of Radar and Optical Remote Sensing and GIS
Ali Faragi 1, Abbas Bashiri 2, Mehdi Nasiri 3
1 - Master's student in Electrical Engineering, Electronic Warfare, Imam Hossein University
2 - Instructor of Electronics, Department of Information and Communication Technology, Imam Hossein University, Tehran, Iran
3 - Senior electrician
Designing and developing a laser imaging system with depth measurement and image defogging capabilities
Abstract
Most smart and unmanned aerial vehicles rely on optical imagers for imaging and distance measurement. In fog, however, the quality of the images captured by these systems degrades severely, and the images may even be destroyed, because light scatters on contact with water vapor and fog droplets and corrupts the image recorded by the imager. Image processing is therefore very important in these systems, but in heavy fog distance measurement still faces a serious problem, and the usual alternative methods are generally neither economical nor efficient. This article introduces a new method for distance measurement and imaging on the sea surface. The method scans the environment using two stereo imagers with parallel optical axes and a linear laser mounted on one of the cameras; using trigonometric relations, the displacement between the laser lines recorded by the two imagers is computed, and an image and a 3D model of the environment are constructed. Analysis of the results shows that the system can measure distances in the environment with an error of less than one centimeter and, owing to the arrangement of the imagers and the laser, overcomes the effects of fog in the images at a much lower cost than other hardware.
Keywords: distance measurement in fog, stereo imaging in fog, removal of fog effects, 3D imaging
Introduction
Foggy weather often makes it difficult to capture clear images because water droplets of significant size fill the air. These particles not only scatter and absorb the scene light, but also scatter some atmospheric light toward the imager. The images captured by the imager are therefore degraded and usually have low contrast and poor visibility (Xu et al., 2016).
Fog arises when water droplets are suspended in the air in large numbers under adverse weather conditions. These droplets scatter sunlight and the light reflected from other objects. Because of this scattering, the contrast of an outdoor scene fades and a whitish veil appears between the scene and the viewer or photographer, producing a poor-quality image. The amount of fog in an image depends mainly on the distance between the scene and the photographer, so estimating the depth map of a foggy image is important for recovering the fog-free image (Anwar & Khosla, 2017).
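This depth dependence is commonly formalized by the atmospheric scattering model that underlies most of the dehazing literature (see, e.g., the review by Xu et al., 2016):

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)},$$

where $I(x)$ is the observed intensity at pixel $x$, $J(x)$ the fog-free scene radiance, $A$ the atmospheric light, $\beta$ the scattering coefficient, and $d(x)$ the scene depth: the farther the scene point, the lower the transmission $t(x)$ and the heavier the fog.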
Defogging an image is a challenging task because fog depends on scene depth, which is generally unknown. When the input is a single image, depth-map prediction is itself a difficult problem, which is why many defogging approaches use multiple images or additional information to recover details. Without any prior information, however, real-time single-image fog removal is in demand for many real-world applications such as automated driving systems and surveillance systems (Kokul & Anparasy, 2020).
Pizer and colleagues proposed contrast-limited adaptive histogram equalization (CLAHE), which limits noise amplification by clipping the histogram at a maximum value; however, only a few works apply CLAHE to color images (Pizer et al., 1987). John and Wilscy proposed a defogging algorithm based on separating the video background from the foreground: a physical-model-based single-image dehazing algorithm first improves the background image and simultaneously yields the global illumination parameter; the estimated brightness parameter then improves the foreground of each frame; finally, the background and the enhanced foreground are merged to obtain the enhanced video (John & Wilscy, 2008). Yoon et al. proposed an improved dark channel prior dehazing algorithm that replaces soft matting with a multi-phase level-set formulation to recover each video frame, together with a color-correction method that solves the color-jump problem (Yoon et al., 2012). Yang et al. treated the transmission of the background image as the global component for dehazing: they first extracted the background by frame differencing, estimated its transmission with the MSR algorithm, optimized it with a bilateral filter, and finally applied the global transmission to enhance subsequent frames (Yang et al., 2013). Li et al. presented a method that estimates scene depth by stereo and recovers the fog-degraded image; they jointly formulated the stereo-vision and defogging problems and designed an algorithm that simultaneously estimates scene depth and defogs the input images. Their approach builds on the fact that depth cues from stereo and from fog thickness are complementary, and it works best in scenes with dense fog, where both near and far depth cues are strong (Li et al., 2015).
Salazar-Colores et al. proposed a method based on depth approximation via the dark channel prior, local Shannon entropy, and a fast guided filter to reduce artifacts and improve image recovery in sky regions with low computation time. Their focus is on reducing processing time without sacrificing recovery quality or introducing artifacts during defogging. Analyzing images at different resolutions, their method shows the lowest processing time under comparable software and hardware conditions (Salazar-Colores et al., 2020).
Liu et al. proposed multi-band polarization imaging to overcome the defects of single-band imaging. In their study, polarization-imaging experiments with and without sea fog were conducted in an indoor simulated environment, using a synthetic simulation system to produce sea fog at different concentrations for comparison and analysis. The polarization information of each waveband, transformed through the Stokes components, is then fed to a two-dimensional discrete wavelet algorithm for image fusion. Compared with imaging by a conventional camera, polarized images of foggy scenes have a purer background, less noise, less distortion, and more distinct edge features (Liu et al., 2023).
A foggy environment poses a significant challenge to the regular and effective operation of many imaging systems. Existing imaging equipment is subject to interference from the external environment: the captured images are often severely degraded, scene features are blurred, and the images suffer from low contrast and color distortion. This degradation affects analysis, perception, recognition, and other downstream processing, significantly reducing the performance of the vision system and limiting the practical value of the imagery. Moreover, the main feature that distinguishes fog from rain and snow is its persistence, which makes its degradation of images especially damaging. The problem is particularly acute in sea-surface imaging, where the low contrast among the sea surface, the sky, and vessels means that few systems work properly. Fundamentally, the goal of overcoming the effects of water vapor and fog is to remove the weather-induced interference from the degraded image and to restore its clarity and color saturation, so that the valuable features of the image can be recovered as far as possible and the image can better serve vision systems such as remote-sensing observation and automated driving. It is therefore of great practical importance to study how to effectively reconstruct a clear image from one captured in a foggy environment and to improve the robustness of the vision system.
Materials and Methods
The first aim of this article is to design an active 3D scanner that achieves acceptable accuracy at relatively long ranges using the simplest possible tools: ordinary digital imagers, a cheap linear laser, and appropriate algorithms. Since correspondence matching is the step that most determines accuracy in two-camera imaging, this article uses a simple and accurate matching algorithm based on comparing the displacement of the laser line between the two imagers. The accuracy obtained by this method is independent of the shape and color of the scene and reaches the pixel level.
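A minimal Python sketch of this matching idea (the authors' implementation is in LabVIEW; the function names and the centroid-based line detector below are illustrative assumptions): in each image row, the laser line is localized to sub-pixel accuracy in both rectified images, and the per-row displacement is simply the difference of the two positions.

```python
import numpy as np

def laser_column(row_green: np.ndarray) -> float:
    """Sub-pixel column of the laser line in one image row, taken as
    the intensity-weighted centroid of the green channel."""
    w = row_green.astype(np.float64)
    w -= w.min()                      # suppress the ambient floor
    if w.sum() == 0:                  # no laser light in this row
        return float("nan")
    cols = np.arange(w.size)
    return float((cols * w).sum() / w.sum())

def line_disparity(green_left: np.ndarray,
                   green_right: np.ndarray) -> np.ndarray:
    """Per-row displacement of the laser line between the two
    rectified images; rows without a detectable line give NaN."""
    return np.array([laser_column(a) - laser_column(b)
                     for a, b in zip(green_left, green_right)])
```

Because only the bright laser line is matched, the result does not depend on scene texture or color, which is what makes the pixel-level accuracy claim plausible even on featureless surfaces.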
The biggest advantage of this method over stereoscopic scanners is its narrow-band imaging capability, which allows the scanner to capture images and measure depth with minimal processing, and without image-processing algorithms, in conditions of fog and water vapor.
In this article, a simple yet accurate 3D laser scanner is built from a computer, a linear laser, imagers, and a turntable; the final system is shown in the figure below.
[Figure: The laser scanner system built in this article]
The stages of construction and analysis of this system are shown in the figure below.
[Figure: Diagram of the stages of construction and commissioning of the proposed system]
In this system, two identical CCD imagers with a resolution of 2 megapixels and a frame rate of 24 frames per second, lenses with a focal length of 16 mm, and a green 532 nm laser with a linear lens are used. A laser at a wavelength that scatters less was not used because of the high cost of such lasers. A simple DC motor with a gearbox drives the rotating axis. LabVIEW is used for processing and computation because of its high processing speed and easy integration with imaging systems. For imaging with the 3D scanner, the motor rotates at constant speed in a completely dark environment; as the motor rotates, the system sweeps the scene.
The imagers are aligned using the same methods as stereo imagers. To obtain the maximum depth-measurement range, we set the imagers with parallel optical axes. At this stage, the two imagers must be adjusted on a subject at a distance of one kilometer or more, until both display a single image. Next, the laser is mounted on one of the imagers and its light line is aligned perpendicular to the image. Then, by observing the laser line in the second imager and applying trigonometric relations, the distance measurement is performed. Finally, the system is tested in an environment with very thick fog and the results are examined.
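For parallel optical axes, the trigonometric relation reduces to the standard triangulation formula Z = f·B/d, where f is the focal length, B the baseline between the cameras, and d the laser-line disparity. A sketch with illustrative calibration constants (the paper gives the 16 mm focal length, but the baseline and sensor pixel pitch below are assumptions):

```python
FOCAL_MM = 16.0      # lens focal length, given in the text
PIXEL_MM = 0.003     # assumed sensor pixel pitch, mm per pixel
BASELINE_MM = 200.0  # assumed distance between the two cameras

def depth_mm(disparity_px: float) -> float:
    """Depth along the optical axis, Z = f * B / d (parallel axes)."""
    d_mm = disparity_px * PIXEL_MM   # disparity in sensor units
    return float("inf") if d_mm == 0 else FOCAL_MM * BASELINE_MM / d_mm
```

With these assumed values, a one-pixel disparity error at a range of a few meters corresponds to a depth error on the order of centimeters, which is consistent with the sub-centimeter accuracy reported once sub-pixel line localization is used.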
Results and Discussion
The first camera, on which the laser is mounted, always observes the laser line at a fixed position and records one line per frame. Stacking these lines side by side yields a relatively clear image of the scene. The figure below shows the image created by the first camera, formed by placing the line recorded in each frame next to the previous ones.
[Figure: Monospectral image of the environment made by the first camera]
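One way such a line-by-line image can be assembled is sketched below (an assumption, not the authors' LabVIEW code; the fixed laser column index and strip width would come from calibration):

```python
import numpy as np

def assemble_scan_image(frames: list, laser_col: int,
                        strip_width: int = 1) -> np.ndarray:
    """Stack the narrow strip around the (fixed) laser column of the
    first camera, one strip per motor step, into one 2D scan image."""
    strips = [f[:, laser_col:laser_col + strip_width] for f in frames]
    return np.hstack(strips)
```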
The second camera, located away from the laser, performs depth measurement and creates the 3D image. Since both cameras shoot simultaneously, every point recorded in the first camera is recorded in the second camera at the same moment. The figure below shows one frame from the second camera. The laser line appears broken where the light hits nearer or farther objects, and this displacement is what enables the depth measurement.
[Figure: The image recorded in one frame by the second camera]
Using the camera parameters and the relations governing the geometry of the images, all three coordinates of the desired points can be computed and the point-cloud matrix is formed.
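A sketch of how the full 3D coordinates can be recovered under the standard pinhole model (parameter names are illustrative; the per-frame scan angle would come from the motor speed and frame rate, which the paper does not state numerically):

```python
import numpy as np

def frame_points(disparity_px, rows, cols, f_px, baseline_mm, cx, cy):
    """All three coordinates of the laser points in one frame:
    Z from disparity, X and Y by pinhole back-projection.
    (cx, cy) is the principal point, f_px the focal length in pixels."""
    Z = f_px * baseline_mm / disparity_px   # zero disparity -> inf
    X = (cols - cx) * Z / f_px
    Y = (rows - cy) * Z / f_px
    return np.column_stack([X, Y, Z])

def to_world(pts, theta_rad):
    """Undo the scanner's rotation about the vertical axis so that
    points from all frames share one coordinate frame."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return pts @ R.T
```

Stacking the rotated per-frame point sets yields the point-cloud matrix referred to below.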
[Figure: The 3D image made by the second camera; the scale is in centimeters]
As is clear in the figure above, the closer objects are to the system, the more their color tends toward pink, and the farther away they are, the darker they appear. In some parts of the scene, reflections from transparent and shiny objects cause errors, and the system perceives those areas as being close to it.
Using the obtained point-cloud matrix, the environment can be converted into a 3D model. The points floating in front of the wall and the table are system errors, caused by laser light striking objects with shiny surfaces. Corners and holes that prevent the returning laser light from being recorded in the second camera also cause errors. However, the error is not large enough to prevent the dimensions of the target from being recognized. The resulting 3D model is shown in the figure below.
[Figure: 3D model built from the matrix of obtained points]
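Before building such a model, the spurious near points produced by specular reflections can be suppressed with a simple robustness filter; a sketch under assumed parameters (the window size and jump threshold are illustrative, and this is not the authors' stated procedure):

```python
import numpy as np

def drop_specular_outliers(depth_line: np.ndarray, win: int = 9,
                           jump_mm: float = 50.0) -> np.ndarray:
    """Mark depth samples along one scan line as invalid (NaN) when
    they jump far in front of their local median -- the signature of
    the spurious near points that shiny surfaces produce."""
    out = depth_line.astype(np.float64).copy()
    half = win // 2
    for i in range(depth_line.size):
        lo, hi = max(0, i - half), min(depth_line.size, i + half + 1)
        local = np.median(depth_line[lo:hi])
        if local - out[i] > jump_mm:   # much nearer than its neighbors
            out[i] = np.nan
    return out
```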
After building the 3D scanner and designing the processing algorithm, fog was produced in the laboratory and the imaging process was carried out with the system. In the first camera, where the laser is mounted, the image is almost completely destroyed by the very thick fog, and no details can be seen in the foggy areas. As shown in the figure below, the sampling line of the first camera saturates in the fog and that line is completely lost.
[Figure: The image of the first camera with fog (left) and without fog (right)]
But in the second camera, the fog hardly affected the image at all. The main difficulty in imaging through fog is scattering from water droplets, which sends a large fraction of the light back along the illumination direction. Because the second camera does not lie on the same line as the laser light, its image is not destroyed out to the distance the system can measure. There is also no need for complex image-processing algorithms or a heavy processing load, which makes this method highly practical and quite economical compared with other approaches. The figure below shows the image made by the first imager: the thick, heavy fog destroys the image, and it becomes completely saturated in areas where the fog is most intense.
[Figure: Image made by the first camera in heavy fog; thick fog has destroyed many parts of the image, which cannot be reconstructed with image-processing algorithms]
In the figure below, which shows the image made by the second camera, the details of the scene are clearly recorded, and no halo effects or fog saturation are visible.
[Figure: The image recorded by the second camera in heavy fog, clearly capturing the details of the scene with minimal distortion and noise]
As the test results show, this method has a great advantage over passive stereo scanners: it uses cheap equipment and light processing algorithms, it provides a good image and good depth measurement in heavy fog and water vapor, and, through the distance matrix constructed in this article, it yields a 3D model of the environment.
Conclusion
In this article, a system with two cameras and a linear laser was built for monospectral imaging through fog and water vapor. The system was then tested in an environment with very thick fog, and examination of the results showed that the objects in the recorded images are clearly defined in size and shape and that the fog has minimal effect on the image. As observed, water vapor and fog saturated the laser light in the first camera, but in the second camera, because of its angle to the light line, the scattering effects were much weaker. This system is fast compared with defogging image-processing algorithms, which carry a heavy processing load and are slow. Compared with other 3D laser-scanning systems it is also faster, because it samples a line rather than a point; most importantly, its much lower price compared with other laser systems increases its usability.
References
Anwar, M. I., & Khosla, A. (2017). Vision enhancement through single image fog removal. Engineering Science and Technology, an International Journal, 20(3), 1075–1083. https://doi.org/10.1016/j.jestch.2016.11.015
John, J., & Wilscy, M. (2008). Enhancement of weather degraded video sequences using wavelet fusion. 2008 7th IEEE International Conference on Cybernetic Intelligent Systems, 1–6. https://doi.org/10.1109/UKRICIS.2008.4798926
Kokul, T., & Anparasy, S. (2020). Single Image Defogging using Depth Estimation and Scene-Specific Dark Channel Prior. 2020 20th International Conference on Advances in ICT for Emerging Regions (ICTer), 190–195. https://doi.org/10.1109/ICTer51097.2020.9325450
Li, Z., Tan, P., Tan, R. T., Zou, D., Zhou, S. Z., & Cheong, L.-F. (2015). Simultaneous video defogging and stereo reconstruction. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4988–4997. https://doi.org/10.1109/CVPR.2015.7299133
Liu, N., Fu, Q., Guo, H., Wang, L., Tai, Y., Liu, Y., Liu, Z., Shi, H., Zhan, J., Zhang, S., & Liu, J. (2023). Multi-band polarization imaging and image processing in sea fog environment. Frontiers in Physics, 11. https://doi.org/10.3389/fphy.2023.1221472
Pizer, S. M., Amburn, E. P., Austin, J. D., Cromartie, R., Geselowitz, A., Greer, T., ter Haar Romeny, B., Zimmerman, J. B., & Zuiderveld, K. (1987). Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 39(3), 355–368. https://doi.org/10.1016/S0734-189X(87)80186-X
Salazar-Colores, S., Moya-Sanchez, E. U., Ramos-Arreguin, J.-M., Cabal-Yepez, E., Flores, G., & Cortes, U. (2020). Fast Single Image Defogging With Robust Sky Detection. IEEE Access, 8, 149176–149189. https://doi.org/10.1109/ACCESS.2020.3015724
Xu, Y., Wen, J., Fei, L., & Zhang, Z. (2016). Review of Video and Image Defogging Algorithms and Related Studies on Image Restoration and Enhancement. IEEE Access, 4, 165–188. https://doi.org/10.1109/ACCESS.2015.2511558
Yang, Y., Bai, S., Guo, Y., & Tang, J.-B. (2013). Video Fogging Hiding Algorithm Based on Fog Theory. 2013 Ninth International Conference on Computational Intelligence and Security, 503–507. https://doi.org/10.1109/CIS.2013.112
Yoon, I., Kim, S., Kim, D., Hayes, M., & Paik, J. (2012). Adaptive defogging with color correction in the HSV color space for consumer surveillance system. IEEE Transactions on Consumer Electronics, 58(1), 111–116. https://doi.org/10.1109/TCE.2012.6170062