Use of Two Inpainting Techniques To Restore Partially Detected Cartographic Features
1Dept. of Mathematics and Computing, UNESP, Presidente Prudente, São Paulo, Brazil
2Dept. of Cartography, UNESP, Presidente Prudente, São Paulo, Brazil
Abstract— The continuous use of methodologies to extract cartographic features from digital images has been of great importance in the area of cartography. Many techniques can be used in feature extraction processes; however, the results obtained by these techniques usually contain partially detected features, culminating in a loss of quality in the extraction process. In the search for better results, it is possible to use techniques based on inpainting, whose main purposes are image restoration and the removal of occlusions. Therefore, the main objective of this article is to present a methodology for the reconstruction of partially detected features using the two inpainting techniques proposed by [1] and [2], aiming to improve the quality of the results of the process of extracting cartographic features from digital images. Observing the final analysis of the results obtained with the techniques on three input images, the technique of [1] showed an improvement of 0.61% compared to the extracted feature, while the technique of [2] showed an improvement of 6.82%. The good results obtained regarding the improvement of the quality of the process of extracting partially detected cartographic features will be of great use in the area of cartography.
Keywords— Remote Sensing, Inpainting, Digital Image Processing, Cartography, Partially Detected
Features.
input image and I_t^n(i, j) is the improved version of the input image.

Equation (1) shows that I^(n+1)(i, j), which originates from I_t^n(i, j), will be an improved version of the input image. As n grows, the algorithm tends to produce better results.

To ensure the correct definition of the direction field, the diffusion process is interleaved with the inpainting process described above; that is, the next step is the application of a few iterations of image diffusion. This diffusion prevents the lines from crossing each other, resulting in a smoothing effect. [1] uses anisotropic diffusion, determined by the following:

∂I/∂t (x, y, t) = g_ε(x, y) κ(x, y, t) |∇I(x, y, t)|,  ∀(x, y) ∈ A^ε    (2)

where A^ε is the dilation of A with a ball of radius ε, κ is the Euclidean curvature of the isophotes of I, and g_ε(x, y) is a smooth function in A^ε.

The only input parameters of the algorithm are the image to be restored and the mask that delimits the portion of the input image to be inpainted. The algorithm performs a pre-processing step in which the entire original image goes through the smoothing process of anisotropic diffusion. After that, the image enters an inpainting loop, where only some values within A are modified. At each iteration, an anisotropic diffusion step is applied. This process is repeated until a stable state is reached.

In the restoration loop, X inpainting steps occur using equation (1), then Y diffusion steps with equation (2), then again X steps of equation (1), and so on. The total number of steps is T. This number may be pre-determined, or the algorithm may stop when the changes in the image fall below a given limit. The value of T depends on the size of A.

B. Deng et al. [2] Inpainting Algorithm

The algorithm proposed by Liang-Jian Deng, Ting-Zhu Huang and Xi-Le Zhao is not based on partial differential equations (PDEs). It fills regions of interest by copying and pasting portions of the source regions, so that the texture of the image remains the same. The type of technique exploited by this algorithm is called exemplar-based.

Originally, exemplar-based algorithms are based on two attributes: a confidence term and a data term. The data term propagates the target region geometrically, and the confidence term describes the dependence of the area of the patch to be copied and pasted on the neighbouring pixels of the source region, that is, the texture propagation of the original image. If there are more pixels of the source region around a pixel p, the confidence term of p will get a higher value.

Equations (3) and (4) define the priority of a patch, so we select the one with the highest priority and fill the target region with the patch from the source region that is most similar to it.

C(p) = { 0, ∀p ∈ Ω;  1, ∀p ∈ ω }    (3)

D(p) = −0.1,  ∀p ∈ Ω ∪ ω    (4)

where C(p) and D(p) are the confidence term and the data term of a pixel, respectively, Ω is the area of interest and ω is the region that does not belong to the area of interest.

The similarity between two patches is measured by the following equation:

γ_p = argmin_{γ_q ∈ θ} d(γ_p, γ_q)    (5)

Each pixel p′ is filled with the corresponding pixel in γ_q, by using equation (6):

p′ ∈ γ_p ∩ Ω    (6)

Then, the confidence term is updated to:

C(q) = C(p),  ∀q ∈ γ_p ∩ Ω    (7)

All of these steps are repeated iteratively until the target region is completely filled. What differentiates the technique proposed by [2] from common exemplar-based algorithms is a new definition of the priority of the patches taken and a new similarity equation. The new priority definition is described in equation (8).

P(p) = { D(p), for the first phase;  C(p), for the second phase }    (8)

The first phase concentrates on the geometric propagation of the target region, and the second on the propagation of the texture. The algorithm automatically estimates the number of iterations required for the execution of the first phase.

As for the similarity equation, it was changed to equation (9):

γ_p = argmin_{γ_q ∈ γ′_q} d(γ_q, γ_p)    (9)

where γ_p and γ_q are the patches being compared, γ′_q is the largest patch whose center is γ_q's center, and d(γ_q, γ_p) is the sum of the squared differences of the pixels already filled in the two patches.

C. Quantitative Metrics

In [3], [4], [5] the lack of quantitative metrics to evaluate the results of an inpainting process is addressed. The reason why this happens is that there is usually no reference image, and because the content of the area to be
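To make the exemplar-based filling procedure of equations (3)–(9) concrete, the greedy loop can be sketched as follows for a grayscale image. This is a simplified illustrative sketch, not the implementation of [1] or [2]: the data term is left at the constant value of equation (4), so the patch priority reduces to the mean confidence, and the most similar source patch is found by a brute-force sum-of-squared-differences search over the already-known pixels.

```python
import numpy as np

def exemplar_inpaint(image, mask, r=1):
    """Greedy exemplar-based filling sketch (simplified, illustrative).

    image : 2-D float array (grayscale)
    mask  : boolean array, True inside the target region Omega
    r     : patches are (2r+1) x (2r+1); the target region is assumed
            to lie at least r pixels away from the image border
    """
    img = image.astype(float).copy()
    known = ~mask                    # omega: pixels whose value is known
    C = known.astype(float)         # eq. (3): C = 0 in Omega, 1 in omega
    h, w = img.shape
    while not known.all():
        # Pick the target patch on the fill front with the highest
        # priority.  With the constant data term of eq. (4), the
        # priority used here is just the mean confidence of the patch.
        best_pr, tp = -np.inf, None
        for y in range(r, h - r):
            for x in range(r, w - r):
                if known[y, x] or not known[y-r:y+r+1, x-r:x+r+1].any():
                    continue         # not an unfilled front pixel
                pr = C[y-r:y+r+1, x-r:x+r+1].mean()
                if pr > best_pr:
                    best_pr, tp = pr, (y, x)
        ty, tx = tp
        tgt = img[ty-r:ty+r+1, tx-r:tx+r+1]
        tknown = known[ty-r:ty+r+1, tx-r:tx+r+1]
        # Find the best fully-known source patch by SSD over the
        # already-known pixels of the target patch (the role of
        # d(., .) in eq. (5)).
        best_d, sp = np.inf, None
        for y in range(r, h - r):
            for x in range(r, w - r):
                if not known[y-r:y+r+1, x-r:x+r+1].all():
                    continue
                d = ((img[y-r:y+r+1, x-r:x+r+1] - tgt)[tknown] ** 2).sum()
                if d < best_d:
                    best_d, sp = d, (y, x)
        sy, sx = sp
        src = img[sy-r:sy+r+1, sx-r:sx+r+1]
        fill = ~tknown
        tgt[fill] = src[fill]                        # copy pixels, eq. (6)
        C[ty-r:ty+r+1, tx-r:tx+r+1][fill] = best_pr  # update C, eq. (7)
        known[ty-r:ty+r+1, tx-r:tx+r+1] = True
    return img
```

On a constant test image with a small interior hole, this loop fills the hole with the surrounding value in one pass; on textured images the SSD search copies the closest repeating patch from the source region, which is what preserves texture in exemplar-based methods.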
Fig. 9. Reference image.
Fig. 14. Result of the inpainting technique of Bertalmio et al. [1].

III. CONCLUSIONS

Observing the final analysis of the results obtained with the techniques proposed by [1] and [2] on the input images, we can conclude that the results were highly satisfactory. The technique of [1] showed an improvement of 0.87% compared to the extracted feature, while the technique of [2] obtained an improvement of 11.03%. This difference may be due to the first technique not fully allowing the removal of occlusions and incorrectly detected features. This type of removal is frequent in most study cases, as can be seen in the test images presented, which can be considered a weak point of the [1] technique.
ACKNOWLEDGEMENTS
The authors thank the Foundation for Research Support of the State of São Paulo - FAPESP (Process 2017/13029-8) and the National Council for Scientific and Technological Development - CNPq.
REFERENCES
[1] M. Bertalmio, G. Sapiro, V. Caselles, C. Ballester, Image Inpainting. Proc. SIGGRAPH Computer Graphics, 2000.
[2] L.-J. Deng, T.-Z. Huang, X.-L. Zhao, Exemplar-Based Image Inpainting Using a Modified Priority Definition. PLoS ONE, 2015.
[3] A. C. Kokaram, Detection of Missing Data in Image Sequences. IEEE Transactions on Image Processing, 1995.
[4] O. Feier, A. Ioana, Digital Inpainting for Artwork Restoration: Algorithms and Evaluation, 97f, 2012.
[11] W. Zhou, Objective Image Quality Assessment: Facing The Real-World Challenges. Electronic Imaging, 2016.