Signature Elevation Using Parametric Fusion for Large Convolutional Network for Image Extraction
DOI: https://doi.org/10.21015/vtse.v12i2.1810

Abstract
The image acquisition process involves finding regions of interest and defining feature vectors as visual features of the image. This encompasses local and global delineations for specific areas of interest, enabling the classification of images through the extraction of high-level and low-level information. The proposed approach computes the Harris determinants and the Hessian matrix after converting the input image to grayscale. Blob structuring is then performed to identify potential regions of interest that can adequately describe texture, color, and shape at different representation levels, and the Harris corner detector is used to identify keypoints within these regions. Moreover, a scale adaptation method is applied to the determinants of the Harris matrix and to the Laplacian operator to extract scale-invariant features. Meanwhile, the input image is processed through the VGG-19, DenseNet, and AlexNet architectures to extract features representing diverse levels of abstraction. Furthermore, the RGB channels of the input image are separated and their color values are computed. All extracted features (local, global, and color) are then integrated into a single feature set and encoded through a bag-of-words model to rank and retrieve images based on their shared visual characteristics. The proposed technique is tested on challenging datasets including Caltech-256, CIFAR-10, and Corel-1000, and shows remarkable precision, recall, and F-score rates in most image categories. The proposed approach leverages the complementary strengths of multiple feature extraction techniques to achieve high accuracy.
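To illustrate the keypoint-detection step described in the abstract, the sketch below computes a Harris corner response from image gradients. It is a minimal NumPy approximation for exposition only, not the authors' implementation: the finite-difference gradients, the 3x3 box window standing in for Gaussian weighting, and the constant k = 0.04 are all assumptions.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)^2."""
    # Image gradients: np.gradient returns derivatives along axis 0 (rows)
    # and axis 1 (columns), i.e. Iy first, then Ix.
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):
        # 3x3 box average of the gradient products (a crude stand-in
        # for the Gaussian window of the structure tensor M).
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy      # determinant of the structure tensor
    trace = Sxx + Syy                # trace of the structure tensor
    return det - k * trace ** 2

# A synthetic step corner: the response peaks where two edges meet.
img = np.zeros((20, 20))
img[10:, 10:] = 1.0
R = harris_response(img)
# R[10, 10] (the corner) is large and positive; R in flat regions is ~0.
```

Thresholding R and taking local maxima would then yield the candidate keypoints that the scale adaptation step operates on.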
License
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License (CC-By) that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
This work is licensed under a Creative Commons Attribution License CC BY