Sherlock in OSS: A Novel Approach of Content-Based Searching in Object Storage System

Object Storage Systems (OSS) inside a cloud promise scalability, durability, availability, and concurrency. However, open-source OSS lacks a specific approach for letting users and administrators search based on the data contained inside the object storage without involving the entire cloud infrastructure. Therefore, in this paper, we propose Sherlock, a novel Content-Based Searching (CoBS) architecture to extract additional information from images and documents. We store the additional information in an Elasticsearch-enabled database, which helps us search for our desired data based on its contents. This approach works in two sequential stages. First, the data is uploaded to a classifier that determines the data type and sends it to the model specific to that type: uploaded images are sent to our trained model for object detection, and documents are sent for keyword extraction. Next, the extracted information is sent to Elasticsearch, which enables searching based on the contents. Because the precision of the models is fundamental to the search's correctness, we train our models with comprehensive datasets (the Microsoft COCO Dataset for multimedia data and the SemEval 2017 Dataset for document data). Furthermore, we put our designed architecture to the test with a real-world implementation of an open-source OSS called OpenStack Swift. We upload images in various segments to evaluate the efficacy of our proposed model on real-life Swift object storage.


INTRODUCTION
Tremendous amounts of data are produced every day and stored in and retrieved from various cloud servers. A recent estimation [1] shows that people create about 328.77 million terabytes of data on a daily basis. For storing huge amounts of data, object storage is well known to have the upper hand because of its flexibility and consistency. One distributed and consistent open-source cloud system is OpenStack. OpenStack provides cloud object storage, allowing us to preserve and retrieve large amounts of data through an API, via OpenStack Swift. OpenStack Swift is scalable and has been designed to be durable and available for the whole data set. Swift is a well-suited storage system for unstructured data that can grow immensely [2]. Swift stores every single piece of data as an object, unlike storage systems such as file-based or block storage, which store data as files. This storage system is built to house massive amounts of data at a time because of its flexibility. The retrieval of relevant data has become a significant issue as the amount of data increases significantly [3]. Storing consumer and business data in either public or private clouds has made it difficult to efficiently and effectively retrieve meaningful data [3].
As a result, cloud-based storage is being developed using object storage. Various well-known cloud service providers, such as Amazon S3, OpenStack Swift, and Caringo Swarm, provide object storage. Although the problem of storing massive amounts of structured and unstructured data is solved by the complex architecture of object-based storage systems, retrieving or searching for a certain object/file has become a major challenge [4]. Object storage such as OpenStack Swift uses the HEAD or GET method to fetch an object from storage, which is not very efficient when it comes to content-based searching. Moreover, the exact path of the object is needed to retrieve data from this storage system, which is impractical when there are massive amounts of data.
Compared to block and file storage, object storage may produce higher delay and require more processing time, but it also has a number of advantages, including scalability, cost-effectiveness, robustness, and easier management. It offers great redundancy and data durability, making it particularly useful for managing massive amounts of unstructured data. Another benefit of using OpenStack Swift as object storage is that Swift allows simultaneous access from several servers, so server binding is not a problem. It provides fault tolerance, scalability, and adaptability without impairing system performance.
Furthermore, the linear searching method inside this storage is very time-consuming, as the different replica copies are located in different regions. As it stands, searching in object storage is neither expressive nor efficient. Content-Based Search (CoBS) primarily denotes a search that investigates the contents of the inputted data rather than the metadata connected with the data, such as keywords, tags, or descriptions. In this usage, "content" may indicate colors, forms, materialistic details, or any other information obtained from the data itself. Manually annotating photos by inserting keywords or information into a huge database takes time and may not capture the keywords intended to identify the data.
Interest in CoBS is growing as data usage increases and metadata-based systems struggle with large amounts of data. Existing technology can rapidly search for information about any data, but this requires humans to manually characterize each image in the database. This can be difficult for extremely big databases or photos created automatically, such as those from surveillance cameras. Images that utilize various synonyms in their descriptions may also be overlooked. Systems based on classifying photos into semantic classes, such as "cat" as a subclass of "animal," may avoid the miscategorization problem, but require more labor from a user to locate images that may be "cat" but are only classed as "animal." Many standards for categorizing photos have been proposed, but all encounter scaling and miscategorization difficulties. Besides, to our knowledge, there has been no work based on content searching across different types of information in one architecture.
In this paper, we propose Sherlock, a CoBS architecture for an object storage system that enables us to extract additional metadata from images and keywords from documents and store them in a metadata database, which helps us search for our desired data based on its contents. In this paper, we refer to content as the objects present in images and documents. To do so, we first identify the type of the file. If the file is an image, we extract the information using object detection Convolutional Neural Network (CNN) models, namely the DarkNet-based YOLOv4 [5] and the YOLOv8 [6] architecture. For document files, on the other hand, we extract the information using one of the Natural Language Processing (NLP) architectures, BERT [7]. Afterwards, we retrieve additional data such as the object path in the form of an HTTP link. The data is passed to an Elasticsearch Cluster (ESC) [8], and the object is uploaded to an object storage system such as OpenStack Swift. When the user searches for an object, our proposed interface takes input from the user, performs a search in the ESC, and returns a list of objects. The user can then access the objects from Swift. In this way, there is only one GET request to the object storage system. Besides, the enriched content metadata are created using BERT and DarkNet and stored in the ESC, ensuring more relevant content searching for the user.
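The first stage described above (identify the file type, then dispatch to the appropriate model) can be sketched as follows. This is a minimal illustration, not the exact implementation: the MIME-type sets and routing labels are assumptions for the example.

```python
import mimetypes

# Hypothetical first stage of the Sherlock pipeline: decide whether an
# uploaded file goes to the object detector (images) or the keyword
# extractor (documents).  The MIME-type sets are illustrative.
IMAGE_TYPES = {"image/jpeg", "image/png"}
DOCUMENT_TYPES = {
    "application/pdf",
    "application/msword",
}

def classify(filename):
    mime, _ = mimetypes.guess_type(filename)
    if mime in IMAGE_TYPES:
        return "image"      # route to YOLO object detection
    if mime in DOCUMENT_TYPES:
        return "document"   # route to BERT keyword extraction
    return "unknown"        # stored without extracted content metadata
```

In the real system, the "image" branch feeds the YOLO detector and the "document" branch feeds BERT, after which both write their extracted metadata to the ESC.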
Based on our findings, we make the following contributions in this paper:

• Our work is the first to come up with an architecture that can jointly perform CoBS inside images and documents.

• We create the OpenStack Swift JOSS client User Interface (UI) in order to access Swift and the Elasticsearch cluster at the same time using user-level authentication tokens.

• We rigorously test our BERT, YOLOv4 (Darknet), and YOLOv8 models with different custom-weighted files to get maximum accuracy. We use three different datasets and calculate the response time of the YOLOv4 and YOLOv8 object detectors, as well as their precision in detecting multiple objects in a single image.

• We use a pre-trained BERT model to extract keywords from documents. Besides, in the Elasticsearch cluster, we perform multiple query requests from our User Interface to measure the elastic cluster's response time and the average query time. Lastly, we add different filters to the search engine using the Elastic Cloud API.

RELATED WORK
The concept of searching in object storage is not entirely new. Platforms like Amazon S3 and OpenStack Swift all have their own kinds of searching approaches. Although there has been some research related to searching in object storage, the proposals have not been implemented on platforms like OpenStack Swift and other object storage systems. As a result, we study these research papers in order to understand their work and the complications they faced while working with object storage. We segment the search into two categories. The first part reviews previous relevant works in the field of searching in object storage, specifically different types of metadata-related searches. The second part emphasizes query-related searches in object storage.

Metadata Searching
Leung et al. [9] suggest Spyglass, a scalable index-based file-metadata search system that outperforms competing solutions in terms of query performance while using less disk space. According to Singh et al. [10], applications that operate on millions of files or objects generate petabytes of data that need to be analyzed. They propose a Metadata Catalog Service (MCS) which can store and access different types of metadata, and users can query for any type of metadata they want.

Query Searching
Searching in object storage is now common in cloud systems. Through our studies and findings, we try to identify the drawbacks and issues of searching and how they can be solved. A study [26] describes Swift as a proxy-server-based design that has cluster scalability. They propose a change in Swift's architecture that provides much higher bandwidth with minimal latency by leveraging technologies like RoCE, InfiniBand, and Remote Direct Memory Access (RDMA) [20].
Imran et al. [27] present some probable metadata-related issues that we may face. In cloud storage, a lot of metadata is created, which hampers the performance of the system. They propose an optimized solution for storing massive metadata with improved load-balancing modules and a merged storage facility. Xue et al. [28] use HAProxy and UCARP to handle huge amounts of metadata, which also reduces buffering and accelerates read and write performance and overall throughput. Metadata is basically stored in a system as small files, and with the increasing use of automated technology and remote sensing technology, lots of metadata is produced every day.
Biswas et al. [23] show how an Access Control List (ACL) maintains accessibility and data security for all users. With an ACL, it can be specified who is given access and who is not. As for storage policies, they make two types of policies for two types of data: LaBAC for user-label data and object-label values, and content-level policies for JSON paths and labels. They note a drawback of their work: it can only operate on objects with applications or in JSON. Objects without a JSON file create issues, sending the full content of the files without being requested.

Content-based Image Retrieval System
Ren et al. [11] introduce an Approaching-and-Centralizing Network, which can jointly optimize sketch-to-photo synthesis and image retrieval, in which the retrieval module aids the synthesis module in producing large amounts of different photo-like images that gradually approach the photo domain.
Choe et al. [14] propose a CNN-based CBIR approach to diagnosing Interstitial Lung Disease with chest CT. Monowar et al. [15] introduce a deep CNN-based self-supervised image retrieval system. Keisham et al. [16] present a Deep Search and Rescue (SAR) Algorithm-based CBIR approach. The steps involved in the proposed Deep Neural Network-SAR (DNN-SAR) are pre-processing, multiple feature extraction, feature fusion, clustering, and classification.
Schall et al. [29] come up with a protocol for testing deep-learning-based models for their general-purpose retrieval qualities. After analyzing the currently existing and commonly used evaluation datasets, they conclude that none of the available test sets are suitable for the desired purpose and present the GPR1200 (General Purpose Retrieval) test set.
Wang et al. [13] propose a secure and efficient ciphertext image retrieval scheme based on content-based image retrieval (CBIR) and approximate homomorphic encryption (HE).
Noor et al. [17] propose a novel approach to retrieve images faster by customizing the attributes in bit pixels of distinct luma and chroma components (Y, Cb, and Cr) of progressive JPEG images.

Keyword Extraction from Document
Researchers [30] present a multimodal key-phrase extraction approach, namely Phraseformer, using transformer and graph-embedding techniques. Xiong et al. [31] propose Semantic Clustering TextRank (SCTR), a semantic-clustering news keyword extraction algorithm based on TextRank, which uses BERT to perform k-means clustering to represent semantic clusters. Then, the clustering results are used to construct a TextRank weight-transfer probability matrix. Finally, word graphs are iteratively calculated and keywords are extracted.
The recent solutions for searching in object storage and their findings are presented in Table 1. Their drawbacks inspire us to come up with a new, robust solution with the help of object detection and natural language processing. To the best of our knowledge, our proposed methodology is the first to focus on these aspects.

BACKGROUND
This section goes over the fundamental architectural framework of Swift, YOLO, BERT, and Elasticsearch.

Architectural Overview of Swift
OpenStack Swift is a highly scalable object storage designed with the phrase "failure is a common occurrence" in mind. Accordingly, Swift is divided into four subsections: Proxy, Account, Container, and Object nodes. A proxy server is located in the first layer. Data that goes in and out of the storage passes over HTTP. Requests for data are made through API requests. The task of the proxy server is to capture the requests and act accordingly. The proxy server determines the location of the data or its storage node from the URL. There are Rings, which keep the addresses of the information, like names and entries, that are stored on the cluster; a Ring also keeps track of the path of the data and maintains its mapping by introducing zones, devices, partitions, and replicas. Zones can be any storage device, from a hard drive to a full server. After that, there are containers and accounts. The list of containers in a particular account is stored in that account's database. Swift has multiple object nodes which are independent of each other. These object nodes are easily replaceable in the event of any failure. Moreover, Swift has an internal replication system that replicates each stored object into a minimum of three different nodes, so when one node is replaced the objects are not lost [18]. Figure 1a presents the architectural overview of OpenStack Swift, and Figure 1b presents the different consistency processes and layers in the proxy and storage nodes of OpenStack Swift.
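As a concrete illustration of the addressing scheme above, a Swift object is reached through a URL of the form `/v1/{account}/{container}/{object}`, which the proxy server resolves to a storage node via the Ring. The endpoint and names below are placeholders, not values from our deployment.

```python
# Sketch of how a Swift object URL is formed; the proxy uses this path
# (not a filesystem path) to locate the object through the Ring.
def object_url(endpoint, account, container, obj):
    return f"{endpoint}/v1/{account}/{container}/{obj}"

# Placeholder endpoint and names for illustration only.
url = object_url("http://proxy.example.com:8080",
                 "AUTH_demo", "images", "cat.jpg")
```

A GET on this URL returns the object, which is why content-based lookup requires knowing the exact path in advance, as discussed in the introduction.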

YOLOv4
YOLOv4 [32], You Only Look Once version 4, is a sophisticated single-stage object detection technique that utilizes regression to attain a good precision score and can run concurrently. It is part of the YOLO line of algorithms, introduced in 2020 by Bochkovskiy et al. [32], and builds upon the capabilities of previous versions such as YOLOv1, YOLOv2, and YOLOv3, achieving the finest potential balance between detection efficiency and precision at the time. Its architecture, which includes the backbone, the neck, and the prediction head, is shown in Figure 2.
The authors suggested the following backbones for the YOLOv4 object detector: CSPResNeXt50, CSPDarknet53, and EfficientNet-B3 [32]. For our purpose, we use CSPDarknet53, the architecture introduced in YOLOv4. This architecture includes a residual module where the feature layer is re-entered, resulting in increased feature information. This enables the model to learn the distinction between output and input.

YOLOv8
YOLOv8 is the successor of all the previous YOLO models. It is introduced by Jocher et al. [6]. YOLOv8 has an architecture similar to one of its ancestors, YOLOv5. It is based on PyTorch and has a Python backbone rather than Darknet, the C-based framework used in YOLOv4. This makes it convenient for users to customize and improve the model.
Fig. 1: OpenStack Swift. (a) Overview of the storage architecture [21]. (b) Different consistency processes and layers in proxy and storage nodes of OpenStack Swift.
On the MS COCO dataset, YOLOv8m achieves an AP of 53.9% with a 640-pixel image size (compared to 50.7% for YOLOv5 on the same input size) at a speed of 280 FPS on an NVIDIA A100 with TensorRT [33].

BERT
Bidirectional Encoder Representations from Transformers (BERT) is based on a multi-layer Transformer encoder, developed by Vaswani et al. [34]. Devlin et al. [7] present the BERT Transformer, which is based on bidirectional self-attention. This bidirectional process eliminates the limitation that self-attention may only integrate context from one side, the left or the right. Unlike previous embedding-generation architectures, such as Word2Vec [35], BERT does not take input vectors that represent words. Instead, it takes segment, token, and position embeddings as input. The token embedding is a WordPiece embedding with 30,000 tokens [36].
In this study, we employ the fundamental BERT model, which is available on TensorFlow Hub. It includes 12 transformer blocks, 12 self-attention heads, and a hidden size of 768.

Elasticsearch Overview
Elasticsearch is a full-text search library based on the open-source search engine Apache Lucene. It is capable of performing a full-text search, and it can conduct a structured search, analytics, or a combination of all three, as it is built for real-time, distributed search and data analysis [37]. The highly adaptable query API of Elasticsearch allows for the simultaneous use of filtering, sorting, pagination, and aggregation in a single query [8]. Elasticsearch easily handles unstructured data, allowing the indexing of JSON documents without a prior schema. It automatically attempts to identify class mappings and adjusts for new or removed fields. It also offers built-in functionality for clustering, data replication, and instantaneous fail-over, all of which are transparent to the user [8].
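As an illustration of combining filtering, sorting, and pagination in a single request, the query body below follows the Elasticsearch query DSL. The field names (`objects`, `uploaded`) are hypothetical names for our metadata, not fields mandated by Elasticsearch.

```python
# One Elasticsearch request body that mixes full-text matching, a range
# filter, sorting, and pagination.  Field names are illustrative.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"objects": "cat"}}],                    # full-text part
            "filter": [{"range": {"uploaded": {"gte": "now-30d/d"}}}],  # non-scoring filter
        }
    },
    "sort": [{"uploaded": {"order": "desc"}}],
    "from": 0,   # pagination offset
    "size": 10,  # page size
}
```

Such a body would be POSTed to an index's `_search` endpoint; because filters do not affect scoring, Elasticsearch can cache them across queries.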

SYSTEM DESIGN AND IMPLEMENTATION
Figure 3 (left) presents the methodology of our proposed system. First, it checks whether the file is an image or a document. Based on the file type, the content is sent for extraction of the crucial information: for images, the content is sent for metadata extraction, and for documents, the content is input for keyword extraction. YOLO (for images) and BERT (for documents) produce the metadata and send the data to the Elasticsearch cluster. The object detection/keyword extraction is done in the client app. Once the metadata is uploaded to the Elasticsearch cluster, the content is uploaded to the Swift server.

Developing Client-side
We use the Java client for OpenStack Swift (JOSS) [38], shown on the right side of Figure 3, to build the client app. We use Elasticsearch as it offers multi-language support for handling request and response data, language-detection libraries, and plugins and integrations that provide additional language-specific functionality, and our JOSS client connects well with Elasticsearch. The location path of the content in our storage server is saved in the Elasticsearch cluster. When the user searches for content, the client app performs a search in the Elasticsearch cluster and returns content suggestions to the user with the Swift location path. This ensures minimum load on the Swift server and accurate searching based on the metadata. Because of this extraction, the proposed system has sound knowledge of each piece of content.
OpenStack has a few libraries for interacting with the Swift object storage system [26]. We use the Java library for OpenStack Swift (JOSS) [38] to develop our client app. It is a desktop-based application that also hosts our object detection model; object detection is done using the client device's computational power. We then upload the metadata to the Elasticsearch cluster and the content to the Swift server. JOSS provides many features for interacting with Swift servers, including authentication, object uploading, content location path generation, and so forth [39].

Developing Keyword Extraction
BERT is one of the state-of-the-art models for solving problems related to Natural Language Processing (NLP), using attention-based mechanisms. In our case, we take a document (docx/pdf), extract all of its text, feed it to the BERT model, and get the five best three-word keywords out of the input document.
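The selection step can be sketched as below: rank candidate phrases by cosine similarity between each phrase embedding and the document embedding, then keep the highest-scoring ones. In the real system both embeddings come from BERT; the toy vectors in the usage example merely stand in for them.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_keywords(doc_vec, candidate_vecs, k=3):
    """Rank candidate phrases by similarity to the document embedding.

    candidate_vecs maps each candidate phrase to its embedding; in the
    real pipeline these vectors would be produced by BERT.
    """
    ranked = sorted(candidate_vecs,
                    key=lambda phrase: cosine(doc_vec, candidate_vecs[phrase]),
                    reverse=True)
    return ranked[:k]
```

For example, with a document embedded at [1, 0], a candidate embedded at [1, 0] ranks above one at [0.7, 0.7], which in turn ranks above one at [0, 1].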

Developing Object Detection
Figure 4 shows how our proposed YOLOv4 pipeline works after it receives an image. YOLOv4 provides fast and accurate object detection with the help of bounding boxes and non-maximum suppression. However, in our case, we do not want an edited image with bounding boxes drawn on it. As a result, we propose a different workflow for YOLOv4 in which, after receiving an image, it makes a copy of that image and performs the necessary detections on the copy. We follow the same approach for the latest YOLOv8, using a pre-trained YOLOv8m model just as we use a pre-trained YOLOv4 model; both are trained on the MS COCO 2017 dataset, which has 80 classes.
Meanwhile, the actual image is sent directly to the storage server. After the detection is done, the copied image is discarded, and the detected objects, along with other metadata related to the image, including the object URL path, are written into a JSON document. Lastly, the JSON document is pushed to the Elasticsearch server.
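A plausible shape for that JSON document is sketched below. The field names, file name, and URL are illustrative assumptions, not the exact schema of our implementation.

```python
import json

# Raw detector output for one image (illustrative values).
detections = [
    {"label": "dog",    "confidence": 0.91, "box": [34, 50, 120, 200]},
    {"label": "person", "confidence": 0.88, "box": [0, 10, 60, 180]},
]

# Document pushed to Elasticsearch: the detected class labels plus the
# Swift object URL path; bounding-box details are dropped downstream.
doc = {
    "file": "IMG_0042.jpg",
    "objects": sorted({d["label"] for d in detections}),
    "url": "http://proxy.example.com:8080/v1/AUTH_demo/images/IMG_0042.jpg",
}
payload = json.dumps(doc)
```

Indexing the `objects` field is what lets a later search for "dog" surface this image without touching the Swift server.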

Developing the Storage System
Figure 1a shows the overall architecture of the storage system. After detection is done and the object is pushed to Swift, it enters Swift through the Swift proxy pipeline. In our model, we use a multi-node Swift setup with 3 proxy servers and multiple object servers. The load balancer chooses the suitable proxy server for the object. The proxy server sends the object to the Ring, from where the object is sent to the appropriate object server. In our model, we do not change how Swift handles these requests, in order to maintain its scalability and compactness.

ElasticSearch Cluster
Figure 5 shows the workflow for a JSON document in the Elastic cluster. We set up an Elasticsearch server in a separate Virtual Machine with a Logstash pipeline to which the JSON file generated by the object detector gets pushed. Our Logstash pipeline filters out unnecessary data from the JSON file, such as the coordinates of the bounding boxes and the class id. It also formats the JSON file in a way that is easier for Elasticsearch to index properly, based on the image file name and the contents of the image, which in our case are the detected objects.
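A minimal sketch of such a Logstash filter stage is shown below, assuming the detector emits fields named `boxes` and `class_id`; these field names are illustrative, not the exact ones in our pipeline.

```
filter {
  # Drop detection details that are not useful for content search;
  # only the file name, detected labels, and object URL remain.
  mutate {
    remove_field => ["boxes", "class_id"]
  }
}
```

The output stage of the same pipeline would then ship the trimmed document to the Elasticsearch index.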

PERFORMANCE EVALUATION
We measure the performance of our suggested architecture using a real-world implementation. Before that, we first elaborate on our experimental setup.

Experimental Setup
We set up multiple virtual machines using the Google Cloud Platform (GCP). One machine works as a proxy server and the other as the object server node. Account and container servers are included in the object server machines, as shown in Figure 6. We use our local machine to detect and upload images for this testing. The configuration of our local machine is as follows: an Intel(R) Core i5-7300HQ CPU with a 2.50 GHz base clock speed, 8 GB of RAM, and an Nvidia GTX 1050 graphics unit. We later use another graphics unit, an Nvidia RTX 2070, on which we run our testbed with YOLOv8.
Furthermore, we use another virtual machine as the Elasticsearch server, which is configured to receive data dumps from our local machine. All the virtual machines use the same setup: 4 GB of RAM, 60 GB of storage, and Ubuntu 18.04 as the operating system.

Dataset
For our object detection module, we use the Microsoft COCO Dataset [40]. We take a part of the dataset consisting of 26,000 images and 80 classes in total. We then divide the dataset into three segments for testing purposes, with 1,000, 5,000, and 20,000 images each. We use pre-trained YOLOv4 and YOLOv8m models to test these images.
For our keyword extraction, we use the SemEval 2017 Dataset [41], which consists of paragraphs selected from 500 ScienceDirect journal papers from the Computer Science, Material Sciences, and Physics domains.

Experimental Results
In this section, we report the results after using the datasets in our system, starting with the image dataset, and afterward the document dataset.
Image Dataset Test. Table 2 presents the different metrics from our testing sets. The precision level is important for our model because it indicates how well our system is able to give the user the appropriate image they want. In our case, we get an mAP (Mean Average Precision) of 0.71 for 5,000 images and 0.73 for 20,000 images, with a total precision of 68.5%. In Figure 7, we can see that both single and multiple objects are detected by the model.
Detection time test. Figures 8a and 8d present the different times it takes to detect 1,000, 5,000, and 20,000 images. We can see a very low upward-sloping curve, which tells us the detection time is very small compared to the number of images. We are able to achieve this because of the YOLO algorithm, which, as its name suggests, only looks at an image once. However, removing the bounding-box drawing step increases the speed only slightly, which can be overlooked.
Upload time test. In the uploading part, we limit our upload speed to 2 Mbps (megabits) and calculate the upload time of the images. In Figures 8b and 8e, we get a relatively higher curve because of the low upload speed compared to the file sizes of the images.
Total time for proposed model. After calculating the detection and upload times separately, we initialize our system to find the combined time it needs for uploading and detecting the different image sets. In Figures 8c and 8f, we can see the curve rising ever so slightly.
Uploading and detection time comparison. Figure 9 compares the upload time with the combined detection-and-upload time, indicating that the additional object detection our system requires takes a little more time to deliver the image to Swift. But the difference is relatively insignificant compared to the work done behind the scenes. One important thing to note is that we use a comparatively older graphics card, the Nvidia GTX 1050, without cuDNN functionality; as a result, our GPU usage was 15-20% at most. So using a newer GPU, or even an older one with cuDNN enabled, will substantially decrease the detection time, which in turn will decrease the overall upload and detection time of the system, bringing the two curves in Figure 9 much closer to each other. Table 3 and Table 4 show the average time our models took to detect and upload the different data segments.
Result evaluation for documents. Table 5 presents a tentative document and the extracted keywords. We find the candidate phrases that are most similar to the document; these are the keywords that best represent it. We use cosine similarity between the candidate vectors and the document vector and select only the top three keywords.

Search Analysis
Here, we discuss the functionalities of searching in our system.
Completion Suggester. The completion suggester provides auto-complete/search-as-you-type functionality. This navigational feature helps users get relevant results as they type, improving search precision. We achieve completion suggestions with the help of the Elasticsearch API, which we integrate with our client API.
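For illustration, a completion-suggester request in the Elasticsearch query DSL looks like the dictionary below. The suggester name and the `suggest` field are assumed names; the index mapping must declare that field with type `completion`.

```python
# Search-as-you-type request body: Elasticsearch returns up to five
# completions for the prefix the user has typed so far.
suggest_request = {
    "suggest": {
        "content-suggest": {
            "prefix": "ca",  # e.g. the user has typed "ca" (cat, car, ...)
            "completion": {"field": "suggest", "size": 5},
        }
    }
}
```

The client app sends such a body on every keystroke and renders the returned options as suggestions, which is how the UI narrows the user toward an indexed object label.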
Search based on Image Content and Metadata.Elasticsearch API uses the Elasticsearch cluster as an endpoint base where all the documents are stored and indexed.Leveraging the API, we successfully implement search based on both image content and metadata.
After performing a search using 80 keywords (as the dataset has 80 classes), we calculate the average query time and average request time, given in Table 6. We observe that although the average request time fluctuates a little, the average query time remains very low and almost constant.

DISCUSSION & COMPARATIVE ANALYSIS
The model we propose is a novel approach to searching for images in OpenStack Swift; as a result, direct comparison with other Swift models is not possible. It adopts a middleware model to improve searching efficiency and make the entire storage system more object-aware. The middleware increases the speed and effectiveness of searching within the object storage, making it simpler to find and retrieve particular objects. It also increases the storage system's object awareness, giving it more functionality and flexibility for managing and accessing objects. Overall, the proposed solution offers OpenStack Swift users a beneficial improvement, increasing the effectiveness and functionality of their object storage system. We divide our comparison into two sub-sections, examining how different Swift models and content-based image search models perform searching relative to our model based on several parameters.
Different Swift Models. After using each of the platforms, we set up some parameters for effective image search in Swift storage and compare them with our proposed model. Table 9 shows the comparison of the various implementations of Swift using different techniques for searching.
In Table 7, we compare the user availability level of the different implementations based on the model's availability and scalability to work in different environments.
Different CBIR Engines.There are multiple CBIR engines that extract different features from images to conduct a search.In Table 8, we compare our proposed model with different models from other related works based on what features get extracted and how the search is conducted.Moreover, we compare the precision of these various models.
When we upload any picture to the server, passing the image through YOLOv4 or YOLOv8 does not degrade it: the images show no change in quality (SSIM 100% and VQMT 100%) after passing through the detection algorithm.

CONCLUSION AND FUTURE WORK
In our work, we integrate machine learning features and OpenStack Swift to come up with a better solution to the problem of efficient searching. With the help of Elasticsearch, we are able to complete the entire design. Although our main objective is to find a solution to Swift's searching method, we also pursue a secondary objective: a user-centered content-based image searching [44] system using a text-based database, where a user can adjust the YOLOv4 and YOLOv8 algorithms based on their preferences without hindering the performance of the Swift storage or the Elasticsearch cluster, as they are independent of each other. As YOLOv4 and YOLOv8 can do object detection for both images and live video feeds, this adds a variety of choices for different kinds of users.
Different search techniques based on content-level metadata are not thoroughly covered in the OpenStack Swift literature. Hence, in our paper, we externally integrate an object detection framework and an Elasticsearch cluster into our overall Swift storage and perform multiple tests to find out the viability and responsiveness of our model. Although we get results with very little delay, there is still room for improvement. As a result, we select three future goals to make our system more robust and easy to use. First, we plan to integrate the whole system more compactly into a desktop-based application. Then, we aim to add an authentication token system to our Elasticsearch server to keep the documents safe from unauthorized access. Afterward, our target is to use our system to store live video feeds in order to find out its viability as a state-of-the-art video surveillance application.

TABLE 1 :
Noteworthy findings from literature review

TABLE 2 :
Dataset testing metrics

TABLE 3 :
Average time to do different tasks for different data sizes with YOLOv4 (Time = second)

TABLE 4 :
Average time to do different tasks for different data sizes with YOLOv8 (Time = second)

TABLE 5 :
Extracted keywords from BERT

TABLE 6 :
Average query time and request time for different data sizes