Identification technology based on geometric features of tooth print images

Identity recognition technology verifies identity from biological characteristics. Since the advent of the Internet era, it has become a popular research direction in computer science. In this paper, tooth print images are used as the biological feature for research on identification algorithms. We adopt a neural-network-based target detection algorithm to detect individual tooth imprint regions and build a target detection network. Experimental results show that the method segments the target area well, with an accuracy rate of 91.66%. Based on the contour features of the collected tooth print images, a set of tooth-pore area-ratio feature extraction methods is designed. To evaluate the recognition and classification method objectively, a support vector machine is used as the final classifier. The recognition accuracy rate is 94.09%, the verification accuracy rate is 94.09%, and the test accuracy rate is 91.46%; the classification effect is excellent. This work makes clear progress over previous research on the tooth impression model.


Introduction
Mass casualties in large-scale disasters require the management of significant resources. The sudden, violent nature of such incidents often leaves many victims unidentified, and there is an urgent need for reliable and economical methods of identification. Traditional identification methods are inefficient in many situations, such as plane crashes and fires, that destroy macro-biometrics such as fingerprints or faces. Scans and radiographs are an irreplaceable part of forensic dentistry, as bones and teeth are the most persistent parts of a body destroyed in a sudden mass disaster (Yazdanian et al., 2022). Forensic dentistry, as a scientific method for the identification of human remains, is widely regarded as effective during disasters. In natural disasters such as tsunamis, earthquakes, fires, and air crashes, biological characteristics such as fingerprints and faces are difficult to preserve intact, and in some criminal cases human features are so badly damaged that they lose their research and application value, while conventional DNA identification (Ziętkiewicz et al., 2012) is not widely used because of its high verification cost and long identification period. In view of the above, identification technology based on dental image features came into being. Teeth are the hardest organs in the human body and are divided into three parts: crown, neck, and root. They are resistant to high temperature (Amin et al., 2017) and corrosion and have high hardness (Pretty & Sweet, 2001; Rothwell, 2001); in particular, the Mohs hardness of tooth-surface enamel reaches 7-8 (on a scale whose maximum is 10), second only to diamond, and its chemical and physical structure is relatively stable.
CONTACT Ning Wang 614856415@qq.com
The commonly used methods for individual identification fall into five main categories: autopsy examination, fingerprint comparison, DNA comparison, anthropological examination, and dental examination (Ramanathan et al., 2018). In severely damaged corpses, fingerprints and DNA may be difficult to extract, whereas teeth are among the sturdiest tissues, are the last to decompose, and can withstand temperatures of 1600°C. Forensic dental identification is therefore often the preferred method in such cases, with the added advantages of being economical and fast.
In personal identification and recognition applications based on dental images (Lee et al., 2018; Miki et al., 2017; Schwendicke et al., 2019; Tuzoff et al., 2019), two-dimensional dental X-ray films are widely used because they contain rich information on tooth characteristics. However, dental images are prone to problems such as heavy noise, low contrast, and indistinct contour edges, which make traditional tooth-contour extraction methods unsatisfactory and limited. More and more scholars now use deep learning for this research and have made breakthrough progress on the more challenging problems.
Higher tooth-segmentation accuracy clearly aids subsequent tooth identification and classification, and plays an important role in forensic identification of personal identity based on teeth, by providing effective dental imaging data and features for forensic individual identification. In view of the current prospects and status of identification, this paper adopts a fast and practical identification technology that differs from traditional methods, which has scientific research value and practical application significance. Figure 1 shows three dental images. The contributions of this paper are as follows:
• A scheme for object segmentation in tooth print images based on deep learning. We describe the design and verification of a traditional medical-image scheme in the early stage, compare it with the deep learning method, and obtain stable and effective target segmentation results, laying the groundwork for the subsequent feature extraction and recognition system.
• A post-segmentation adaptive threshold strategy is proposed to correct and measure the results of the target detection network after segmentation.
• Based on low-dimensional feature information screening, an identification and verification scheme built on the tooth-hole area-ratio feature of the tooth print image is proposed. From the feature information of the tooth-hole areas in the tooth imprint image, an area-ratio feature is constructed that adds multiplicity and robustness to the geometric feature information; experiments with three classifiers show that SVM gives the best classification effect.
The paper is organized as follows: Section 2 introduces the background, significance, and current status of the research, that is, the related work. Section 3 introduces the target detection algorithm for tooth print images. Section 4 proposes the tooth-hole area-ratio features of tooth print images on the basis of low-dimensional feature information screening. Section 5 presents the design of the classifier, Section 6 conducts experimental analysis and comparison, and Section 7 concludes. Figure 2 shows an overview of the strategy flow. Given a dental impression image, we first detect each tooth mark and obtain its bounding box. The bounding box then delimits the region of interest for each tooth mark. Finally, geometric features are extracted from the contour images and used for recognition.

Related work
Research on identification based on dental features has continued since it was first proposed, and it has become a hot topic in practical applications such as forensic identification. In traditional tooth-based identification technology (Nomir & Abdel-Mottaleb, 2007), the input dental X-ray (Banday & Mir, 2019; Lin et al., 2012; Rajput & Mahajan, 2016) or non-X-ray (Kumar, 2016; Minaee et al., 2019; Miranda et al., 2016) dental images are preprocessed with traditional methods, followed by contour segmentation, edge detection, coding, feature extraction, and classification to identify the individual. Dental images are prone to heavy noise, low contrast, and indistinct contour edges, which make traditional tooth-contour extraction unsatisfactory and limited, so more and more scholars now use deep learning and have made breakthrough progress. Shah et al. (2006) used active contours without edges to extract tooth contours; the method is based on the intensity of the whole tooth region and does not require sharp boundaries between teeth, so it can extract region contours in the presence of additive noise and without well-defined image gradients. Kondo et al. (2004) proposed a method to automatically segment teeth from 3D digitized images captured by a laser scanner; they avoid the complexity of directly handling 3D mesh data by detecting features on two range images computed from the 3D images.

Tooth image segmentation
With the rapid development of neural networks, convolutional networks have come to the fore in image processing, greatly aiding scientific research and improving work efficiency. Xu et al. (2019) proposed a new method for 3D dental model segmentation via deep convolutional neural networks (CNNs), learning a general and robust segmentation model. Rao et al. (2020) proposed a symmetric fully convolutional network with residual blocks and dense conditional random fields (DCRF) to automatically achieve accurate segmentation of tooth images. Sun et al. (2020) proposed a method to segment and identify individual teeth from a digital dental model via a deep graph convolutional neural network, capable of simultaneously segmenting and identifying both gums and teeth automatically and accurately. Their network performs vertex feature learning through a feature-steered graph convolutional neural network (FeaStNet), dynamically updating the mapping between convolutional filters and local patches of the digital dental model, enabling efficient and accurate tooth segmentation.

Tooth recognition technology
The main idea of tooth recognition technology is to match antemortem and postmortem dental images of an individual and determine identity from the matching result. International research on dental images for forensic individual identification can be traced back to around 1950: dental images taken before death are matched against images obtained during autopsy. Fahmy et al. (2004) proposed the architecture of a new Automated Dental Identification System (ADIS), defined the functions of its components, and addressed the development of an automated system for postmortem identification from dental records, giving law enforcement agencies new technical support for locating missing persons. Biggs and Marsden (2019) showed that postmortem CT scans introduced into the Disaster Victim Identification (DVI) environment can provide much additional detail: a 3D-printed model of the victim's dentition allows confident dental identification of a charred body without disfiguring incisions; identification is not only fast, but the model costs less than £1 to make, making it suitable for wide distribution. Jensen et al. (2019, 2020) presented recommended guidelines for the use of postmortem computed tomography (PMCT) in the forensic dental identification process. Currently, whole-body PMCT is widely used for pre-autopsy diagnosis of fractures, organ changes, hemorrhage, and foreign-body localization, but it may also aid dental identification in single and multiple fatality cases. Li et al. (2019) proposed a 7-layer deep convolutional neural network with global average pooling to identify tooth categories. Muresan et al. (2020) proposed a new method for automatic tooth detection and classification of dental problems, highlighting 14 different dental problems that may arise.
They use labelled data to train a CNN to obtain semantic segmentation information; multiple image-processing operations then segment and refine the bounding box of each detected tooth; finally, each tooth instance is labelled and a histogram-based majority vote identifies the problems affecting it. Vedavathi (2021) proposed an ANN tooth validation model to distinguish individuals and recover individual details. In 2019, Cui et al. (2019), from Professor Wang Wenping's team at the University of Hong Kong, first proposed a convolutional-neural-network method (ToothNet) to segment and classify teeth in 3D dental CBCT images: enhancing image contrast along tooth-shape boundaries yields a segmented image, which is then fed into a classification network for tooth classification.
In 2020, Ke et al. (2020) used deep learning algorithms for individual identification from two-dimensional dental panoramic X-ray images and verified the feasibility of the approach on a small data set. On this basis they proposed DentNet for individual identification from panoramic X-ray images, which uses a convolutional neural network to extract features for matching and obtains the final identification result through a similarity-matching algorithm; the Rank-1 accuracy on a test set of 173 people was 85.16%. In 2021, Lai et al. (2020) proposed a new method to assist human identification by automatically and accurately matching 2D panoramic dental X-ray images with a deep convolutional neural network, LCANet.

Object segmentation of tooth mark image
Recently, many image segmentation algorithms have used neural networks. They have achieved impressive results in semantic segmentation for single-label tasks (distinguishing categories only) and are often used to segment everyday scenes such as streets. Chen et al. (2017) proposed DeepLab, which reuses networks already trained for image classification for semantic segmentation and further extends spatial pyramid pooling, combining deep convolutional neural networks with fully connected conditional random fields to significantly improve semantic segmentation. Jader et al. (2018) proposed a segmentation system based on a mask region-based convolutional neural network to detect and segment each tooth in a panoramic radiograph. Because the tooth print models we used have complex texture, blurred and indistinct tooth print outlines, and many interfering potholes, as shown in Figure 3, segmentation methods such as thresholding, watershed, and the neural networks mentioned above could not reliably achieve the expected effect. Inspired by the high accuracy neural networks achieve on classification tasks with labelled data, we instead treat the tooth print as a detection category, locate its boundary coordinates, and crop regions of interest (ROIs), which indirectly completes the segmentation of tooth marks.

Target detection network
YOLO is a target detection method proposed by Joseph Redmon and Ali Farhadi at the University of Washington in 2015, and the series has since been developed through several versions, up to YOLO v5. As a new type of deep neural network, this series of methods takes the image directly as input and returns target positions at the output layer, transforming target detection into a regression problem. End-to-end target detection is achieved, with satisfactory accuracy and speed.
The advantages of the YOLO family can be summarized as follows: (1) Since the YOLO v1 pipeline is relatively simple, detection does not require extracting candidate regions from the input. Compared with other target detection algorithms, it is therefore very fast, reaching roughly 40-50 fps, and Fast YOLO, a lightweight variant of YOLO v1, can reach about three times the detection speed of the standard YOLO v1.
(2) Besides speed, YOLO v1 has two other advantages. First, it has a low background false-detection rate: when the target to be detected is close to the background, the background is rarely falsely detected as a target. Second, its generalization ability is relatively strong: when detecting relatively abstract pictures such as oil paintings, it learns generalized features of the target well and keeps the detection error rate low. Figure 4 is a schematic diagram of the grid division. The backbone network structure is shown in Figure 5. Similar to GoogLeNet, the YOLO v1 network consists of 24 convolutional layers, 4 pooling layers, and 2 fully connected layers. The input image is first resized to 448 × 448, then passed through the convolutional and pooling layers to obtain a 7 × 7 × 1024 tensor; finally, two fully connected layers output a 7 × 7 × 30 tensor. The convolutional layers extract features, and the fully connected layers produce the predicted class probabilities and location information.
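For illustration, the 7 × 7 × 30 output can be decoded cell by cell under the standard YOLO v1 layout (2 boxes × 5 values plus 20 class probabilities per cell). The function name below is ours, not from the paper; it is a minimal sketch of this decoding step.

```python
# Sketch: decoding one grid cell of the 7x7x30 YOLO v1 output tensor.
# Assumes the standard layout: 2 boxes x (x, y, w, h, confidence) + 20 class scores.
S, B, C = 7, 2, 20  # grid size, boxes per cell, number of classes

def decode_cell(cell, row, col):
    """cell: list of 30 floats for one grid cell -> list of (box, score, class_id)."""
    detections = []
    class_scores = cell[B * 5:]                  # last 20 values: class probabilities
    best_class = max(range(C), key=lambda c: class_scores[c])
    for b in range(B):
        x, y, w, h, conf = cell[b * 5: b * 5 + 5]
        # (x, y) are offsets within the cell; convert to image-relative coordinates
        cx = (col + x) / S
        cy = (row + y) / S
        score = conf * class_scores[best_class]  # class-specific confidence
        detections.append(((cx, cy, w, h), score, best_class))
    return detections

dets = decode_cell([0.5, 0.5, 0.2, 0.3, 0.9] + [0.1, 0.4, 0.1, 0.8, 0.7]
                   + [0.0] * 19 + [1.0], row=3, col=3)
```

Full decoding would repeat this over all 49 cells and then apply non-maximum suppression.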
The first two terms of the loss function represent the position prediction of the bounding box, where the indicator function 1^obj_ij equals 1 when the j-th box of the i-th grid cell is responsible for a target and 0 otherwise. Terms 3 and 4 represent the confidence prediction of the box, and the last term represents the class prediction. The indicator 1^obj_ij thus reflects whether the centre of a target falls in grid cell i.
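For reference, the loss described above, in the notation of the original YOLO v1 paper (our transcription, since the equation itself is not reproduced here), is:

```latex
\begin{aligned}
\mathcal{L} ={}& \lambda_{\text{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}}
      \left[ (x_i-\hat{x}_i)^2 + (y_i-\hat{y}_i)^2 \right] \\
{}+{}& \lambda_{\text{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}}
      \left[ \left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2
           + \left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2 \right] \\
{}+{}& \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{obj}} \left( C_i-\hat{C}_i \right)^2
   + \lambda_{\text{noobj}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\text{noobj}}
     \left( C_i-\hat{C}_i \right)^2 \\
{}+{}& \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\text{obj}}
     \sum_{c \in \text{classes}} \left( p_i(c)-\hat{p}_i(c) \right)^2
\end{aligned}
```

The first two lines are the position terms, the third line the two confidence terms, and the last line the class term, matching the description above.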
This paper finally adopts the third version of the YOLO series as our tooth mark region detection network. Compared with the first version introduced above, it borrows the prior-box (anchor) concept from Faster R-CNN: a clustering algorithm roughly determines the bounding-box sizes of the target objects, reducing the difficulty of regressing predicted bounding boxes to their actual positions during training. At the same time, the improved darknet-53 (Redmon & Farhadi, 2018) replaces the previous backbone as the basic classification network, introducing a residual structure similar to ResNet. The network improves training and prediction for previously unseen target categories, giving more accurate results for the novel tooth mark targets used in this paper. The sample data preparation and training process, as well as the results and their analysis, are presented in the following sections.

Sample data production
The original data in this paper are the dental model images shown in Figures 1 and 2, which are also the input images of YOLO v3. The objects we want to detect and locate are not the dental molds themselves but the tooth imprints inside them; that is, the target detection part of this paper has only one category: tooth imprints. When making sample data, we therefore mark bounding boxes around the tooth prints in the dental model images as labels of the tooth print category. Each dental model image contains 10-14 tooth print targets, all of which are labelled. The resulting sample data are shown in Figure 6.
In the resulting label file, the information is saved as shown in Figure 7: the image name and path come first, followed by the size information of the image (including the number of channels, 3 for a colour image), and then the label information of each target, namely the target category, the coordinates of the upper-left corner of the bounding box, and the coordinates of the lower-right corner.
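A record with these fields can be read with a few lines of code. The exact file layout is not shown in the paper, so the whitespace-separated format below (path, then size, then one "class x1 y1 x2 y2" line per target) is an assumption for illustration only.

```python
# Sketch of reading one label record with the fields described above.
# ASSUMED layout (not specified in the paper): path line, size line
# "width height channels", then one "class x1 y1 x2 y2" line per target.
def parse_label(lines):
    path = lines[0].strip()
    width, height, channels = map(int, lines[1].split())
    targets = []
    for line in lines[2:]:
        cls, x1, y1, x2, y2 = line.split()
        targets.append({"class": cls, "box": tuple(map(int, (x1, y1, x2, y2)))})
    return {"path": path, "size": (width, height, channels), "targets": targets}

record = parse_label([
    "models/mold_001.jpg",
    "4608 3456 3",
    "toothprint 120 340 260 480",
    "toothprint 300 350 430 500",
])
```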
In this paper, 50 dental imprint models were selected, and 6 dental imprint images from different perspectives of each were used for sample label production, yielding 300 labelled images and approximately 3600 dental imprint samples in total, a small part of which is shown in Figure 8. This variety enhances the anti-interference ability and tolerance of the trained model, leaving some elastic space for the quality of dental mold making in practical applications.

Adaptive threshold strategy and result analysis
Since the network is trained on individual tooth marks, and a dental model image contains 10 or more tooth marks, unpredictable results can occur even on the training set. The target detection of tooth marks in this section ultimately serves to segment out the tooth print areas for subsequent operations and can be regarded as preprocessing, so dental model images from both the training and test sets are evaluated (the training set of the target detection network is also the input of the subsequent algorithm). The results are shown in Table 1. Inspecting the test results of all dental model images shows that whenever tooth prints are detected, their bounding boxes almost always locate them well, with only slight deviations for individual tooth prints within the allowable range. Therefore, this paper judges an image to be correctly segmented when all the tooth imprints in the dental model image are completely detected. A total of 923 dental mold images were tested, with the confidence threshold set to 0.5 and the NMS algorithm excluding secondary bounding boxes with an IOU greater than 0.35. In this test, all tooth imprints were detected in 665 images, an accuracy rate of 72.05%. Most inaccurately detected images miss individual tooth marks (the confidence of the missing bounding box is below 0.5), and a small number detect extra tooth marks (the confidence of the excess bounding box is above 0.5). The algorithm therefore adaptively raises or lowers the threshold by comparing the number of detected tooth marks with the manually entered number, until the specified number of tooth marks is detected. The adaptive strategy is shown in Figure 9.
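The adaptive strategy described above can be sketched as a simple threshold-adjustment loop. The function and parameter names are ours; the real detections would come from the trained YOLO v3 network after NMS, as (bounding box, confidence) pairs.

```python
# Sketch of the adaptive confidence-threshold correction described above
# (hypothetical names; `detections` is a list of (bounding_box, confidence)).
def adapt_threshold(detections, expected_count, start=0.5, step=0.05,
                    lo=0.05, hi=0.95, max_iters=50):
    """Raise/lower the confidence threshold until exactly `expected_count`
    tooth marks survive; give up if the bounds or iteration cap are hit."""
    thresh = start
    for _ in range(max_iters):
        if not (lo <= thresh <= hi):
            break
        kept = [d for d in detections if d[1] >= thresh]
        if len(kept) == expected_count:
            return kept, thresh
        # too many boxes -> raise the threshold; too few -> lower it
        thresh += step if len(kept) > expected_count else -step
    return None, None  # this image cannot be corrected by thresholding alone

boxes = [((0, 0, 10, 10), c) for c in (0.9, 0.8, 0.6, 0.4, 0.3)]
kept, t = adapt_threshold(boxes, expected_count=4)
```

Images for which the loop returns no threshold are the residual failures reported below.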
After applying the adaptive threshold strategy, 846 images are correctly segmented, an accuracy rate of 91.66%, so the task of segmenting single tooth print areas is completed well. Of the 258 images that could not be accurately recognized at the fixed threshold of 0.5, 181 were corrected by the adaptive threshold, a correction rate of 70.16%. The boundary localization of tooth print targets is relatively good; although the confidence of boxes near the edge fluctuates, the overall recognition effect of the model is good.

Feature extraction
This section proposes tooth-hole area-ratio features: on the basis of geometric feature proportions, a low-dimensional space of area-ratio features is introduced to enrich the features and increase the stability of the spatial information, giving the identification system more stable performance and improved accuracy. Among the geometric features, stable pore area-ratio features are introduced, such as the maximum/minimum area ratio and the area ratios between incisor, canine, and molar position types, to extract effective geometric structure features.
We made 50 pieces of training data, each containing 10-12 tooth mark targets, and trained for 500,000 iterations to obtain a rough model. The test results are shown in Figure 10. As can be seen from the figure, the bounding boxes of the tooth cavities are relatively accurate, which demonstrates the feasibility of our approach, but some results do not detect all the tooth marks, and some boxes overlap heavily.

Calculation method of tooth print image area
From the tooth-hole regions of a single dental model image, each tooth-hole image is segmented by the target detection described above, giving the tooth imprint segmentation area, in which the tooth imprint is distinct. We extract the coordinates of all inner and outer contours and calculate each contour's area. Repeated tests show that in this type of image the middle contour should be selected as the tooth imprint contour; using its area as a benchmark, a threshold is set to exclude redundant contours, and the inner contour is obtained through the relationship between inner and outer contours, finally giving the result shown in Figure 11. The tooth impression contour area is calculated as follows.
Here m1_x and m2_x are the ordinates of the two contour points whose abscissa equals x, so the area accumulates the vertical span (m2_x − m1_x) over all columns of the contour. Table 2 shows the areas of some of the tooth marks in Figure 12.
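This column-scan area computation can be sketched directly from the definitions above. The function name is ours; the sketch assumes the contour is sampled densely enough that every column between its extremes is covered.

```python
# Column-scan area of a closed tooth imprint contour: for every abscissa x,
# accumulate the span between the lowest ordinate m1_x and highest ordinate
# m2_x of the contour at that column (a sketch under the stated assumption).
def contour_area(points):
    columns = {}
    for x, y in points:
        lo, hi = columns.get(x, (y, y))
        columns[x] = (min(lo, y), max(hi, y))
    return sum(hi - lo for lo, hi in columns.values())

# An axis-aligned rectangular outline sampled at integer abscissas:
rect = [(x, 0) for x in range(11)] + [(x, 5) for x in range(11)]
```

For real contours, a library routine such as OpenCV's contour area would typically replace this sketch.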

Feature selection of pore area ratio
From the contour segmentation results of the image, that is, from the bounding boxes and coordinates of single tooth imprints obtained by the target detection stage, the numbered sequence information of each single tooth imprint is stored. The four coordinates of a single tooth print are obtained from the network's detection results, as shown in formula (3).
x = (e1, e2, e3, e4) (3)
In each tooth imprint image, the area ratio between the largest and smallest segmented tooth imprint areas is calculated, as shown in formula (4).
Area ratios between different types of teeth are also computed, such as the ratio between canines and molars, between incisors and canines, and between molars and incisors; these ratios between different types of tooth marks are added as area-ratio components of the geometric features.
In formula (5), i denotes the i-th dimension of the feature vector, and m and n each denote incisors, canines, or molars. Since equation (5) only considers local features of the model, to improve its multi-scale character we also use the ratio of the incisor-region area to the molar-region area.
In formulas (6) and (7), S_c is the sum of the areas of the two central incisors, S_l is the sum of the areas of the left first premolar and left first molar, and S_r is the sum of the areas of the right first premolar and right first molar.
In formula (8), k represents the number of tooth marks. The constructed feature vector is given by equation (9):
v = (r0, r1, · · · , r13) (9)
Table 3 describes the meaning of the area-ratio features constructed from the tooth print image; the entries recoverable here are:
r3: ratio of left central incisor to left first premolar
r4: ratio of left lateral incisor to left second premolar
r5: ratio of left canine to left first molar
r6: ratio of left central incisor to left first molar
r7: ratio of left canine to left premolar
r8: ratio of right central incisor to right first premolar
r9: ratio of right lateral incisor to right second premolar
r10: ratio of right canine to right first molar
r11: ratio of right central incisor to right first molar
r12: ratio of right canine to right premolar
r13: ratio of left central and lateral incisors to left canines and first molars
(final entry, label not recoverable here): ratio of right central and lateral incisors to right canines and first molars
Figure 13 shows tooth print images taken from the dental molds of two different individuals, and Table 4 shows the tooth mark feature values of these two individuals.
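Assembling such an area-ratio vector from per-tooth areas can be sketched as follows. The tooth-name keys and the subset of ratios shown are illustrative only; the paper's full 14-dimensional vector follows the layout of Table 3.

```python
# Sketch: building an area-ratio feature vector from per-tooth contour areas.
# Tooth keys and the ratio subset are ILLUSTRATIVE, not the paper's exact layout.
def area_ratio_features(areas):
    """areas: dict mapping tooth name -> contour area (in pixels)."""
    vals = list(areas.values())
    r = [max(vals) / min(vals)]            # global max/min area ratio (formula (4))
    pairs = [                              # per-type ratios (formula (5) style)
        ("L_central_incisor", "L_first_premolar"),
        ("L_canine", "L_first_molar"),
        ("R_central_incisor", "R_first_premolar"),
        ("R_canine", "R_first_molar"),
    ]
    for a, b in pairs:
        r.append(areas[a] / areas[b])
    # combined incisor vs. canine + first-molar ratio for the left side
    r.append((areas["L_central_incisor"] + areas["L_lateral_incisor"])
             / (areas["L_canine"] + areas["L_first_molar"]))
    return r

areas = {"L_central_incisor": 200, "L_lateral_incisor": 150,
         "L_first_premolar": 100, "L_canine": 120, "L_first_molar": 300,
         "R_central_incisor": 210, "R_first_premolar": 105,
         "R_canine": 125, "R_first_molar": 290}
v = area_ratio_features(areas)
```

Because every component is a ratio of areas, the vector is invariant to the overall scale of the photograph, which is what makes it usable across images taken at slightly different distances.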

Classifier
This paper uses the support vector machine (SVM) (Meyer and Wien, 2015) as the classifier model in the matching algorithm. SVM is a supervised learning model that divides the sample data by finding a hyperplane, computed on the principle of maximizing the margin.
As shown in Figure 14, the blue triangles and green circles represent two classes of data. The classes can be separated by straight lines or curves; in the linear case the hyperplane is a straight line, and the dividing plane is usually expressed as in formula (10), where w is the normal vector of the hyperplane and b is the intercept.
For data that are almost completely linearly inseparable, the kernel function of the SVM maps the data to a high-dimensional space in which a separating hyperplane can be found, so choosing an appropriate kernel is important. When the sample size is large and the number of features is comparable to it, a linear kernel is usually chosen; when the sample data have few features and the sample size is small, the Gaussian kernel is chosen.
After the dividing hyperplane is obtained, its error against the ground truth is computed: the larger the error, the less accurate the model, and the smaller the error, the more accurate the model. The error is therefore fed back to the model through repeated iterations to obtain the optimal model, until the error falls below a set threshold or a certain empirical error value; the error formula is (12).
The solution process reduces to a convex quadratic programming problem, and the parameters and decision function of the hyperplane are obtained by solving its dual problem. The classifier suits not only linear data but also, through a kernel function, nonlinear data. Given the low feature dimensionality and small amount of data in this experiment, and SVM's suitability for small samples, the Gaussian kernel function is selected, implemented by calling the scikit-learn (sklearn) package in Python.
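The Gaussian kernel at the heart of this choice is easy to state explicitly. The plain-Python sketch below shows the kernel value and the Gram matrix the dual problem operates on; in practice this corresponds to what scikit-learn's `SVC(kernel="rbf")` computes internally.

```python
import math

# The Gaussian (RBF) kernel: k(x, z) = exp(-gamma * ||x - z||^2).
# Plain-Python sketch of what SVC(kernel="rbf") evaluates internally.
def rbf_kernel(x, z, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def gram_matrix(samples, gamma=0.5):
    """Kernel (Gram) matrix over the training samples, as used in the SVM dual."""
    return [[rbf_kernel(a, b, gamma) for b in samples] for a in samples]

K = gram_matrix([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
```

Each sample has kernel value 1 with itself and a value decaying toward 0 with distance, which is why the RBF kernel handles small, low-dimensional data sets like the area-ratio vectors well.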

Experimental platform and experimental data preparation
The materials and tools used in this paper to make dental molds are alginate impression powder, a measuring spoon, a measuring cup, knives, mixing bowls, and dental trays. The impression material is harmless to the human body. The collection steps are as follows:
Step 1: Measure 2 spoons of the impression powder with the measuring spoon and pour it into the mixing bowl.
Step 2: Pour 2 level measures of water into the impression powder and start stirring.
Step 3: Put the stirred alginate material evenly on the tray.
Step 4: Place the dental tray containing the alginate material into the mouth; the person being sampled bites down on the tray, and the impression is removed about 1 minute later.
Step 5: Capture the dental model image with a digital camera, keeping the lighting, brightness, shooting distance, and so on strictly consistent throughout. Figure 15 shows example tooth imprint samples of 5 different individuals collected by this method.
The experiments in this paper run on a PC with an i5-10400F CPU at 2.90 GHz, 8 GB of memory, and 64-bit Windows 10. The development platform is PyCharm 2020 with Python 3.6. The experimental data set consists of 856 dental model images accurately segmented by the method of the preceding sections, covering 50 individuals (50 categories) with 11-24 images per person; the images are 4608 × 3456 pixels, all raw camera images. The dental model images of 10 people were randomly selected as the out-of-database validation set, and the images of the remaining 40 people constitute the database. For each of the 40 people, two images are randomly selected as the test set, one image as the application set, and the rest as the training set. Figure 16 shows examples from the experimental data set.

Experimental results
For classification tasks, common performance evaluation indicators can be used as the evaluation criteria for this experiment on the test set. We evaluated three indicators: model accuracy, precision, and recall, calculated by Formulas (13), (14), and (15). In these formulas, TP denotes a person in the database correctly identified as that person, FN a person in the database wrongly identified as a different person, FP a person not in the database wrongly identified as a person in the database, and TN a person not in the database correctly identified as not being in the database. From the formulas it can be seen that accuracy is the proportion of correctly predicted positive and negative examples among all samples, so on its own it cannot fully reflect the quality of the model.
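The three indicators of Formulas (13)-(15) can be computed directly from the confusion-matrix counts; the counts below are placeholders, not the paper's results:

```python
# Standard confusion-matrix metrics; illustrative counts only.
def accuracy(tp, fn, fp, tn):
    return (tp + tn) / (tp + fn + fp + tn)  # correct predictions over all samples

def precision(tp, fp):
    return tp / (tp + fp)  # correct positives over all predicted positives

def recall(tp, fn):
    return tp / (tp + fn)  # correct positives over all actual positives

tp, fn, fp, tn = 55, 5, 5, 55  # hypothetical counts
print(accuracy(tp, fn, fp, tn), precision(tp, fp), recall(tp, fn))
```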
After the area-ratio feature vector of a single tooth imprint is computed, the classifier first compares the number of contours and the ratios of the number and area of teeth: if they are consistent, it proceeds to the next screening; if not, it skips directly to the next candidate. It then substitutes the maximum-area-ratio feature of the tooth print image into the fitted Gaussian function and computes the function value. If the value is greater than or equal to one-tenth of the peak value, the feature is judged consistent; otherwise the classifier skips to the next candidate. The type-area-ratio feature of the tooth marks is screened in the same way. If a screening fails, the classifier outputs which screening failed; if all screenings pass, it outputs and returns the corresponding target id, indicating that the tooth-print image to be matched belongs to that person. The threshold of one-tenth of the peak value was chosen through repeated experiments: raising the threshold reduces the number of matching results but also increases the probability of excluding the real target. Figure 17 shows the flow of the personal identity authentication and identification system based on tooth area ratios and geometric features. Table 5 compares the cross-validation results of different models, and Table 6 shows the experimental results on the three performance indicators: Accuracy, Precision, and Recall.
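The cascade screening just described can be sketched as follows. The feature names, Gaussian parameters, and candidate records are all hypothetical; only the control flow (skip to the next candidate when any screening fails, return the id once every screening passes) follows the text:

```python
# Hedged sketch of the cascade screening; data and field names are invented.
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))  # peak value is 1.0

THRESHOLD = 0.1  # one-tenth of the Gaussian peak, chosen empirically in the paper

def identify(query, candidates):
    for cand in candidates:
        # Screening 1: discrete contour counts must match exactly.
        if query["n_contours"] != cand["n_contours"]:
            continue
        # Screening 2: maximum-area-ratio feature under the candidate's Gaussian.
        if gaussian(query["max_ratio"], cand["max_mu"], cand["max_sigma"]) < THRESHOLD:
            continue
        # Screening 3: type-area-ratio feature, screened the same way.
        if gaussian(query["type_ratio"], cand["type_mu"], cand["type_sigma"]) < THRESHOLD:
            continue
        return cand["id"]   # all screenings passed: report the match
    return None             # no candidate survived the screening

candidates = [{"id": 7, "n_contours": 14, "max_mu": 0.30, "max_sigma": 0.02,
               "type_mu": 0.55, "type_sigma": 0.03}]
query = {"n_contours": 14, "max_ratio": 0.31, "type_ratio": 0.54}
print(identify(query, candidates))
```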

Experimental comparison and analysis
Features were extracted from 90 test dental model images, yielding 60 positive examples and 60 negative examples, i.e. 120 input data, which were randomly shuffled and fed into the trained models for testing.
The labels corresponding to the input data are shown in Figure 18, and the output results of the three models after classification and recognition are shown in Figures 19-21, respectively. This paper compares the output results with the ground truth, marks the results that differ from the ground truth in red, and computes the test accuracy. The comparison between the test accuracy and the verification accuracy is shown in Table 7. The test accuracy is slightly lower than the verification accuracy, but the difference is not significant. Numerically, the situation is similar to verification, and the accuracy of SVM remains the highest. This result is quite satisfactory: it shows that the features designed in this paper are effective, there is no serious over-fitting, and the recognition performance of the model is good. Table 8 presents the identification accuracy of the 40 images in the application set. An identification is counted as accurate only when the true identity's probability ranks in the top N (TOP-N) of the result sequence. The proposed geometric feature model achieves 85% detection in TOP-1 and 100% in TOP-3. Among the 6 images that TOP-1 fails on, in 2 of them the detection probability of the real identity is P_i = 1.00, so the effective TOP-1 accuracy exceeds 90%; however, in those two images another predicted identity also has probability 1.00, so for objectivity we count them as recognition errors. Compared with the classic dental X-ray method (Zhou & Abdel-Mottaleb, 2005), the model in this paper is more accurate in the recognition of single-jaw teeth; compared with the deep learning-based X-ray method (Lin et al., 2012), it is slightly inferior in TOP-1 but reaches 100% accuracy at a smaller N.
Compared with the literature (Jain & Chen, 2004), which also uses tooth print images, our accuracy is greatly improved. In fact, the model reaches 39/40 = 97.5% in TOP-2 detection. The algorithm model in this paper thus successfully completes the identification task.
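The TOP-N accuracy used in Table 8 can be sketched as below; the probability lists are made-up examples, not the paper's data:

```python
# Minimal sketch of TOP-N accuracy: a query counts as correct if its true
# identity appears among the N highest-probability predictions.
def top_n_accuracy(results, n):
    """results: list of (true_id, [(candidate_id, probability), ...])."""
    hits = 0
    for true_id, scored in results:
        ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
        if true_id in [cid for cid, _ in ranked[:n]]:
            hits += 1
    return hits / len(results)

results = [
    (1, [(1, 0.90), (2, 0.05), (3, 0.05)]),   # hit at rank 1
    (2, [(3, 0.50), (2, 0.45), (1, 0.05)]),   # hit only at rank 2
]
print(top_n_accuracy(results, 1), top_n_accuracy(results, 2))
```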

Conclusions
This paper studies geometric feature extraction and recognition algorithms based on tooth print images. Data and the literature show that tooth recognition technology is very helpful for cadaver identification, which motivates the geometric feature extraction and recognition of tooth print images carried out in this work.
For image segmentation, a deep learning-based scheme for automatic detection and segmentation of tooth imprint image targets is proposed. Compared with traditional segmentation methods, the deep neural network scheme automatically tracks and detects the target area of interest. After training and verification of the improved network, a reliable model is obtained that detects the target and realizes adaptive segmentation of the tooth print image. The final experimental results show that the detection accuracy on the target area of the tooth print image reaches 91.66%.
Building on low-dimensional feature screening, this paper proposes an identity verification scheme that combines the geometric structure features of the tooth-hole area ratio. From the feature information of the tooth hole area in the tooth print image, geometric feature vectors ranging from the maximum area ratio to the area ratios of the different tooth-hole types are constructed; the extracted feature information increases the diversity and robustness of the features. The classification recognition accuracy with the appropriate model reaches 94.09%. Through training tests and cross-validation experiments, the classification and verification model of this algorithm was designed, tested, and comprehensively analyzed: the verification accuracy is 92.53% and the test accuracy is 91.46%. These satisfactory experimental results realize the research goal of personal identification.
Identification technology based on tooth mark characteristics can play an important role in determining the identity of a corpse. This paper uses tooth print images with low-dimensional geometric features and tooth-hole area-ratio features for identification. Although the method achieves relatively good results, many aspects still need improvement. With the continued development of technology and the deepening of research, the search for a more suitable and comprehensive algorithm design, with more accurate and stable experimental results, will be one of the guiding focuses of this research.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This work was supported by the National Natural Science Foundation of China [grant numbers 62176237, 61873082] and the Zhejiang Province Natural Science Foundation of China [grant number LY20F020022].