Treffer: Efficient Object-Related Scene Text Grouping Pipeline for Visual Scene Analysis in Large-Scale Investigative Data.
Law Enforcement Agencies (LEAs) typically analyse vast collections of media files, extracting visual information that helps them advance investigations. While recent advances in deep learning-based computer vision algorithms have revolutionised the ability to detect multi-class objects and text instances (characters, words, numbers) in in-the-wild scenes, the association between the two remains relatively unexplored. Previous studies focus on clustering text by its semantic relationships or layout, rather than by its relationship with objects. In this paper, we present an efficient, modular pipeline for contextual scene text grouping with three complementary strategies: 2D planar segmentation, multi-class instance segmentation and promptable segmentation. These strategies address common scenes in which related text instances frequently share the same 2D planar surface and object (vehicle, banner, etc.). Evaluated on a custom dataset of 1100 images, the overall grouping performance remained consistently high across all three strategies (B-Cubed F1 92–95%; Pairwise F1 80–82%), with adjusted Rand indices between 0.08 and 0.23. Our results demonstrate clear trade-offs between computational efficiency and contextual generalisation: geometric methods offer reliability, semantic approaches provide scalability, and class-agnostic strategies offer the most robust generalisation. The dataset used for testing will be made available upon request. [ABSTRACT FROM AUTHOR]
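The B-Cubed F1 score cited above evaluates a grouping by computing, for every text instance, the overlap between its predicted group and its ground-truth group, then averaging per-item precision and recall before combining them harmonically. A minimal sketch of that metric (the function name and dict-based input format are illustrative assumptions, not the authors' implementation):

```python
from collections import defaultdict

def bcubed_f1(pred, gold):
    """B-Cubed F1 for a grouping.

    pred, gold: dicts mapping each item id to a cluster label.
    Per-item precision/recall are averaged over all items,
    then combined into a harmonic mean.
    """
    # Invert the label maps: cluster label -> set of member items.
    pred_groups, gold_groups = defaultdict(set), defaultdict(set)
    for item, label in pred.items():
        pred_groups[label].add(item)
    for item, label in gold.items():
        gold_groups[label].add(item)

    precisions, recalls = [], []
    for item in gold:
        c = pred_groups[pred[item]]   # predicted cluster containing item
        g = gold_groups[gold[item]]   # gold cluster containing item
        overlap = len(c & g)
        precisions.append(overlap / len(c))
        recalls.append(overlap / len(g))

    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    return 2 * p * r / (p + r)
```

For example, merging two text instances that belong to different objects (`pred = {'a': 0, 'b': 0}`, `gold = {'a': 0, 'b': 1}`) yields per-item precision 0.5 and recall 1.0, so B-Cubed F1 ≈ 0.667. Pairwise F1 differs in that it scores all item pairs as same-group/different-group decisions instead of averaging per item.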
Copyright of Electronics (2079-9292) is the property of MDPI and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)