VoxRec: Hybrid Convolutional Neural Network for Active 3D Object Recognition

Deep neural network methods have been applied to a variety of challenges in automatic 3D recognition. Although these techniques offer many advantages over conventional methods, they still suffer from drawbacks such as a large number of pre-processing stages and time-consuming training. In this paper, we propose an innovative approach for recognizing 3D models. It encodes 3D point clouds, surface normals, and surface curvature, merges them into a more effective input representation, and trains a deep convolutional neural network on the ShapeNetCore dataset. We also propose a similar method for 3D segmentation based on octree encoding. Finally, an accuracy comparison with several state-of-the-art methods demonstrates the effectiveness of our proposed method.


I. INTRODUCTION
With the rapid development of 3D scanning and modelling devices, repositories of 3D models have grown enormously. These repositories contain a mixture of different 3D models that need to be categorized. Moreover, the use of virtual reality (VR) in some academic environments has attracted much attention, as it improves student performance and, in some cases, reduces the cost and risk of other teaching tools. Because of the large number of models, organizing them manually is an arduous task, so 3D recognition methods are needed in each environment: they help organize a dataset and classify newly added 3D models. Although many researchers have recently concentrated on this research area, many challenges remain open. These challenges are not only about the accuracy of the proposed methods; many other factors must be satisfied, according to the individual environment, to reach an acceptable method. For example, published research on 3D models of human body organs is severely limited. Distinguishing two pieces of human bone with similar shapes requires a model that focuses on finer details rather than overall shape alone. Therefore, to ease the classification procedure, we need an automatic recognition approach that avoids tedious manual work and can be specialized to the target environment. This motivated us to focus on this research area, and this paper shows that our straightforward method can outperform many other popular methods in 3D recognition.

(The associate editor coordinating the review of this manuscript and approving it for publication was Xiaogang Jin.)
In this paper, we propose a novel approach that trains on a large-scale dataset for recognition; it focuses not only on the overall shape of the object but also on surface features to be more accurate. The method takes advantage of different representations of 3D objects, such as surface features and volumetric data. Each of these representations has proved helpful in some respect, and combining them lets us benefit from all of them. Consequently, the proposed method is a combination of five convolutional neural networks (CNNs). In the first stage, a 3D convolutional network processes the volumetric data and extracts sufficient features of the input shapes. One of the main criticisms of volumetric data is its vast memory consumption; in this research, low-resolution volumes are used instead of high-resolution ones. To recover the details lost at low resolution and train them precisely, the other provided features are evaluated.
We divide the suggested neural network model into five networks: one focuses on 3D volumes, and the other four train on surface features, namely surface normals and surface curvature. These features, provided in some well-known datasets, improve the accuracy obtained from 3D volumes alone and make the method competitive for 3D recognition purposes. They carry less information in size but more for identification, which helps the learning method reach a reasonable result in a proper time. The method also optimizes mesh recognition by encoding the information, extracting essential features, and learning from the encoded data. Besides, it is beneficial in different environments because it accesses local features as well as the global shape. Surface normals and surface curvature generally describe a point or a local area of an object. To take advantage of such features, we design a procedure that converts them into histograms representing the distribution of the features across the whole object. A convolutional neural network (CNN) can then use these histograms, together with the volumes, to distinguish objects in a dataset.
Moreover, we added a part-annotation stage to our method to improve recognition accuracy. In the first step, octree encoding converts the point cloud into a series of binary data before training. The encoding level contains a modified version of an octree-like method inspired by OctNet [1]. The encoded object and the surface features are the inputs to the training procedure. After the recognition stage, the result can be verified through a simple evaluation of misclassified objects. Our main contributions include:
• A new machine learning approach for recognizing objects in a large 3D dataset. The approach preprocesses the 3D models to extract normal and curvature features; merging these features with voxel data significantly improves the efficiency of the 3D recognition method.
• An extension of the recognition method through 3D mesh part annotation. This variant replaces the voxel data with encoded octree data, which significantly improves the efficiency of our approach. Beyond providing segmentation, it exploits the segmented result to improve recognition, raising the accuracy to be competitive with other state-of-the-art methods.
• A method that merges various features into one neural network. This not only improves recognition accuracy but also makes the method more generalizable, since it can use datasets of different objects.

II. RELATED WORK
Much research in computer vision and graphics has been dedicated to establishing ways of recognizing 3D objects. Several representations are employed to describe 3D models, such as shape descriptors, voxels, and projected-view representations. Besides, a variety of methods are used to assess this information and produce the desired results, such as informative region selection, feature extraction, and classification [2]. In this section, we describe some of the work that takes advantage of classification methods. In 2017, Czajewski and Kołomyjec [3] published a remarkable paper on 3D mesh recognition based on color and depth (RGB-D) images. Their method used the Viewpoint Feature Histogram and the Camera Roll Histogram as descriptors, with ICP (iterative closest point) as the main matcher. According to their report, its recognition performance is better than the convolutional neural network-recurrent neural network (CNN-RNN) method of Socher et al. [4], which merges convolutional and recursive neural networks to extract features and analyze RGB-D images. Reference [5] by Beserra Gomes et al. suggested the moving fovea method to down-sample 3D data and decrease the processing time of a point-cloud object classifier. They stated that their object recognizer runs seven times faster than non-foveated approaches. The central idea is that point density should be highest close to the fovea and decline with distance from it, reducing the number of points and the computation at the same time. VoxNet [6] by Maturana and Scherer concentrated on LIDAR (light detection and ranging) and RGB-D cameras to enhance robot perception of real environments; their method combines a volumetric occupancy-grid representation with a supervised 3D CNN. It should be mentioned that VoxNet is the groundwork for many later methods. Another related work is [7] by Zhirong et al., which obtains a volumetric representation of a 3D model from 2.5D range data; this approach achieved exciting results on depth sensors such as the Kinect. Meanwhile, Su et al. [8] proposed rendering 12 views of a 3D mesh and categorizing the rendered images rather than working on the mesh itself, applying VGG [9] pre-trained on ImageNet. MVCNN-MultiRes [10] enhances MVCNN by using images rendered at different resolutions. Moreover, FusionNet [11] addresses 3D object recognition on the ModelNet dataset using two data representations, pixel and volumetric: it merges two different voxel CNNs with a single multi-view network, reaching 92% accuracy on ModelNet10 and 90% on ModelNet40, the highest recognition accuracy in 2016.
CNNs have achieved the best performance in various computer-vision tasks, including action recognition [12] and object recognition such as large-scale classification [13]. By jointly encoding convolutional information during training, 2D convolutional networks lead in object detection and classification. Other investigations use 3D CNN structures for recognition and detection in video by tuning the networks on video frames [14]. Gkioxari and Malik [15] proposed an action-detection system that detects bounding boxes of actions frame by frame in a video. On the other hand, a series of supervised methods achieve great performance on mesh part annotation and part labelling; however, methods such as Yi et al. [16] rely on databases of segmented objects, whose construction is a highly labor-intensive process, and catching the right part scale is a very demanding task for a non-expert manual annotator. To handle these challenges, SyncSpecCNN [17] by Yi et al. investigated a spectral CNN on a graph of triangulated vertices. VoxelNet [18] by Zhou and Tuzel investigated convolutional methods on volumetric data to deliver an accurate 3D recognizer in 2017; their results serve not only classification purposes but also object localization on the KITTI dataset.
Zhi et al., with LightNet [19], proposed a real-time method for predicting class labels and orientation information without extra annotation data. They introduced a shallow network for 3D object recognition that beats some state-of-the-art methods in the number of training parameters. Another work, [20], leverages a GAN (generative adversarial network) to generate object structure implicitly. This network can also build a 3D model from a low-dimensional probabilistic model, and it provides robust shape descriptors suitable for recognition purposes. In 2014, Liang et al. published [21], which focused on 3D object recognition and pose estimation from multiple projected views of a 3D mesh. Their model adopted two DBNs (deep belief networks) to obtain the image features and used skip connections to the last layer to match the features and analyze the input data. Besides, they applied a new DBN that merged the two traditional DBNs and estimated the camera position in the same way as the classification. They also applied K-means clustering to overcome weaknesses in object detection, which yields accurate results.
PointNet [22] cannot capture local features in the metric space of points, restricting its ability to learn fine-grained patterns, and it has difficulty with complicated scenes. Its improved version, PointNet++ [23], presented a hierarchical neural network that applies PointNet recursively on a nested partitioning of the point set. By utilizing metric-space distances, the network can learn local features at increasing contextual scales. Liu et al. [24] suggested using a volumetric representation and an unsupervised deep network to obtain features of point-cloud data. They then applied the Hough Forest method to the gathered features, achieving object detection and pose estimation concurrently. They compared their results on the 2.5D dataset of Tejani et al. [25] and obtained an almost acceptable score.
Several 2D recognition methods have also been studied as part of our survey of recent recognition research. For image recognition, for example, the representative vector machine (RVM) [26] stands out; character recognition with PC-2DLSTM (principal component 2-D long short-term memory) [27] and metric-learning-based recognition [28] achieved accurate results with deep neural networks for face recognition and facial expressions. Also, the SSP (superimposed sparse parameter) classifier [29] and AFERS (facial expression recognition system) [30] have been proposed as top approaches for classification purposes. In 2018, [31] (from MIT) described the EdgeConv layer for deep networks to obtain local geometric features of point clouds. The architecture follows PointNet except for the EdgeConv blocks, and it produced impressive experimental results, reaching 92% classification accuracy on ModelNet40, better than state-of-the-art methods such as PointNet++, VoxNet, and KD-Net. Also, Jin Xie et al. [32] examined nonlinear distance metrics on 3D shape descriptors for retrieval, evaluating their results on the SHREC'10, ShapeGoogle, McGill, and SHREC'14 datasets. CNN-based methods not only reach higher accuracy, but some are also designed to be simple to operate, such as Sun et al. [33]; such methods need no processing of the data before or after the CNN and are therefore easy to manage.

III. 3D OBJECT RECOGNITION
We propose a combination of five different CNNs that operate on voxels, surface normals, and surface curvature (see FIGURE 1). The main difference between our recognizer and other proposed approaches is the use of parameters that describe the neighborhood of a point: the surface normal, which carries direction information, and the curvature, which carries information about edges on the surface. The proposed method can train on all categories in a short time while maintaining top, reliable accuracy.

A. PREPROCESSING
The first step of the proposed method converts the data from a list of points into information that can quickly be learned by our light, deep neural network. This processing includes converting the point cloud to volumetric data and converting the normal and curvature information to a series of one-dimensional histograms. Firstly, the volumetric data for our primary target dataset, ShapeNetCore-part [16], is already provided (see FIGURE 2). When volumetric data must be acquired, however, the 3D occupancy grid [34] is an accurate method: it efficiently estimates the occupied space between two 3D points and can be stored and manipulated with efficient, straightforward data structures. Secondly, as Algorithm 1 illustrates, we take advantage of the normal and curvature data by converting them to one-dimensional histograms during preprocessing. The surface normal, with its nx, ny, and nz components, provides three histograms that represent the distribution of point directions in the target point cloud. Thus, at the end of preprocessing, the data, consisting of 3D volumes, histograms of the surface normal in three directions, and a histogram of surface curvature, are ready for the learning process. An important preprocessing goal is to convert the estimated data into a format that captures the overall variation of the computed quantities with little redundancy. A histogram of one-dimensional data, as an accurate representation of the distribution of geometric features, is well suited here: it abstracts the estimated data into compact, trainable information.
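Where volumetric input must be built from a raw point cloud, a minimal binary occupancy grid can be sketched as follows. This is a simplification under our own assumptions (grid length 6, occupancy marked per point); the full 3D occupancy grid of [34] additionally models the free space along sensor rays.

```python
import numpy as np

def occupancy_grid(points, grid=6):
    """Binary occupancy grid: 1 where at least one point falls in a cell.
    Simplified sketch of the voxelization step; grid=6 follows the
    grid length the paper finds most efficient."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)                 # avoid division by zero
    idx = ((pts - lo) / span * (grid - 1e-9)).astype(int)  # cell index per point
    vol = np.zeros((grid, grid, grid), dtype=np.uint8)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vol
```

Normalizing by the bounding box, as here, also makes the grid scale-invariant, which matches the normalization the paper applies elsewhere.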
To process a dataset, we designed the method to be agile and efficient. Algorithm 1 extracts a histogram using loop-free, matrix-based calculations, which are also operational in TensorFlow.
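As a rough illustration of this histogram step (Algorithm 1 itself is the authoritative matrix form), the sketch below bins each normal component and the curvature with vectorized NumPy calls; the bin count of 32 and the curvature range [0, 1] are assumptions, not values taken from the paper.

```python
import numpy as np

def normal_curvature_histograms(normals, curvature, bins=32):
    """Convert per-point surface normals (N, 3) and curvatures (N,) into
    four fixed-length 1-D histograms, normalized by the point count.
    The only Python loop is over the three normal components."""
    normals = np.asarray(normals, dtype=float)
    curvature = np.asarray(curvature, dtype=float)
    n = max(len(curvature), 1)
    hists = []
    for axis in range(3):  # one histogram per component: nx, ny, nz in [-1, 1]
        h, _ = np.histogram(normals[:, axis], bins=bins, range=(-1.0, 1.0))
        hists.append(h / n)
    # curvature histogram; the value range is dataset-dependent (assumed [0, 1])
    h, _ = np.histogram(np.clip(curvature, 0.0, 1.0), bins=bins, range=(0.0, 1.0))
    hists.append(h / n)
    return np.stack(hists)  # shape (4, bins), ready for the 1-D CNN branches
```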

B. DEEP NEURAL NETWORK STRUCTURE
Most deep neural networks are designed for processing images, particularly for learning tasks such as detection and classification. However, many methods exist for training and classification on other kinds of problems, such as gesture recognition and 3D reconstruction. In this work, we explored many 3D object recognition methods using deep neural networks to train on a vast dataset and find an efficient, accurate approach. The CNN designed for the voxel data contains two convolutional layers that abstract the input into more critical information. The proposed network uses two kinds of convolutional layers, 1D and 3D: the 1D layers are applied to the histogram data of the surface normal and curvature inputs, and the 3D layer to the volumetric data. The outputs of these layers are merged into one layer before the fully connected layers. Thus, the convolutional layers not only reduce the input size but also prepare the data for the fully connected layers. FIGURE 3 shows each step of the suggested neural network in detail: five CNNs are concatenated and then connected to the fully connected layers before the output layer.
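The branch-and-merge structure can be illustrated with a toy forward pass: one naive single-channel 3-D convolution over the voxel grid, four 1-D convolutions over the histograms, and a concatenation that forms the input to the fully connected layers. The kernels here are dummy stand-ins for learned weights, and the layer counts are reduced; this only mirrors the data flow of FIGURE 3, not the trained network.

```python
import numpy as np

def conv1d(x, k):
    """'Valid' 1-D convolution (cross-correlation) with a single kernel."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def conv3d(v, k):
    """Naive 'valid' 3-D convolution over a voxel grid."""
    out = np.empty(tuple(np.array(v.shape) - np.array(k.shape) + 1))
    for idx in np.ndindex(out.shape):
        sl = tuple(slice(i, i + s) for i, s in zip(idx, k.shape))
        out[idx] = np.sum(v[sl] * k)
    return out

def forward_to_merge(voxels, hists, k3, k1):
    """One pass up to the merge layer: the 3-D branch plus the four 1-D
    branches (nx, ny, nz, curvature), flattened and concatenated."""
    feats = [conv3d(voxels, k3).ravel()]
    feats += [conv1d(h, k1) for h in hists]
    return np.concatenate(feats)  # fed to the fully connected layers
```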

C. PARAMETERS ADJUSTMENT
To explore the voxelization parameters, we conducted a series of experiments with different grid sizes. The sizes of the normal and curvature histograms were changed correspondingly, and the processed data were recorded for each 3D model separately. For this investigation, we used the ShapeNetCore-part dataset, which contains 16 categories divided into train, test, and validation sets, following [16]. The evaluation demonstrates that increasing the grid resolution improves accuracy up to a grid length of 8; beyond that, more convolutional layers are required to keep the network efficient, and the training stage demands a high-performance processor and more memory. The comparison in FIGURE 4 shows that a grid size of 6 is the best choice: it needs little memory, trains much faster, and its accuracy is among the highest. On the other hand, increasing the grid size further can seriously degrade performance by importing many details of the 3D model that do not need to be learned.

D. 3D MESH PART ANNOTATION
To verify the result of the recognition method, we apply an almost identical network to the challenge of 3D mesh part annotation, with some modification of the input data. The overall architecture of the part-annotation approach is shown in FIGURE 5; its inputs are the 3D points encoded as octree binary arrays, the histograms of the adjacent points' normals (along the x, y, and z axes), and the histogram of curvature. Octree encoding converts a point location into a series of binary data: by a divide-and-conquer procedure, it splits the data space into smaller cells and determines which cell contains the intended point. The algorithm then moves into that cell and continues the procedure there, until the resulting sequence of binary arrays pins down the point's cell. For example, after six steps we have six binary arrays of length 3. Following these six arrays, we approach the point location step by step, which allows a location to be learned to an intended tolerance. At each stage the selected cell index within the model space is saved and converted to binary data for the learning method, as shown in FIGURE 6.
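The subdivision above can be sketched in a few lines: at each level, one bit per axis records whether the point lies in the upper or lower half of the current cell, giving one length-3 binary array per level. This is a minimal reading of the described procedure (bounding box taken from the model's min/max, as the paper requires), not the paper's exact implementation.

```python
def octree_encode(point, lo, hi, levels=6):
    """Encode a 3-D point as `levels` binary arrays of length 3,
    one octant index per subdivision level. `lo`/`hi` are the model's
    per-axis minima and maxima."""
    codes = []
    lo, hi = list(lo), list(hi)
    for _ in range(levels):
        bits = []
        for axis in range(3):
            mid = (lo[axis] + hi[axis]) / 2.0
            if point[axis] >= mid:   # upper half along this axis
                bits.append(1)
                lo[axis] = mid
            else:                    # lower half along this axis
                bits.append(0)
                hi[axis] = mid
        codes.append(bits)
    return codes
```

Each extra level halves the cell size along every axis, which is the accuracy/cost trade-off discussed below.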
This method uses almost the same procedure as the proposed recognizer. However, the network uses an RNN instead of a convolutional network, as is evident in FIGURE 7, and the encoding step concentrates on a single point of the mesh rather than the entire mesh. The suggested method proved efficient, and adding normal and curvature information increases its accuracy further. This kind of supervised 3D part annotation has a variety of applications; one example is [35] by Karambakhsh et al., the first step of our collaboration with the medical school of Shanghai Jiao Tong University. Organizing a medical dataset and its segmentation would be a significant application of our proposed method.
In the first step of the proposed approach, the position of the intended point is encoded by the octree method, which locates the point's cell at the suggested resolution. Converting the point information to octree-encoded binary arrays requires, besides the position data, the minimum and maximum of the 3D model along the x, y, and z axes. To be scale-invariant, we also normalize the positions of the vertices. Since the encoded data is sequential and distinguishable, the method can use an RNN to learn the point position.
FIGURE 4. Accuracy results for voxel grid sizes from 2 to 9. Accuracy increases with a larger grid size, but training speed suffers; a grid size of 6 is the most efficient choice, combining low training time with high accuracy.
In this paper, we use the same encoding method to recognize the location of a point. The results showed that fewer encoding steps yield lower location accuracy, while more encoding steps are time-consuming. In our approach, we therefore decided to use fewer encoding steps together with some extra local features to recognize the shapes accurately.
As mentioned in the first section, point normals and curvature enable us to recognize 3D models across a variety of datasets. Since every point of a 3D model has a surface normal and curvature, they can be used in 3D part annotation as well. Finally, the recognition result is verified by matching the acquired part's histogram against the selected category's histogram.
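The paper does not name the exact matching metric, so the sketch below uses a simple L1 distance between normalized histograms as an assumed choice; the point is only to show the verification step, i.e., checking whether the part's histogram is closest to the recognized category's reference histogram.

```python
def hist_distance(h1, h2):
    """L1 distance between two normalized histograms (assumed metric)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def verify_category(part_hist, category_hists):
    """Return the category whose reference histogram best matches the
    part's histogram; used to double-check the recognizer's decision."""
    return min(category_hists,
               key=lambda c: hist_distance(part_hist, category_hists[c]))
```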

E. INTERACTIVE TRAINING
One of the crucial challenges in many environments is updating the database without excessive time and effort. In most approaches, a network learns a dataset with a static number of inputs, but our suggested approach can cover this requirement. Thanks to our input-generator settings, the proposed method can receive the data part by part. There are two types of training in the suggested approach: (i) offline training, which receives the whole dataset at the first step, and (ii) active training, which feeds new entries to the pre-trained network. Active training makes it possible to add a newly scanned 3D object and recognize it just like the existing objects in the dataset. Of course, there is a risk of decreasing the network's accuracy if the operator adds a wrong entry. The structure of our interactive training is shown in FIGURE 9. The method loads the input gradually from a dataset directory, which slightly affects training speed but lets the technique receive data and continue training even after convergence. New input must be correctly structured and similar to the original input data. This ability helps researchers in pedagogical environments update their databases frequently.
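The offline/active split can be illustrated with a deliberately tiny stand-in model. The class below is not the paper's network: it is a nearest-centroid classifier whose per-class statistics keep absorbing new labelled samples after the initial pass, which is the essential property of active training (new entries, even new classes, without retraining from scratch).

```python
from collections import defaultdict

class ActiveRecognizer:
    """Toy stand-in for active training: per-class feature centroids
    that can be updated online, one labelled sample at a time."""
    def __init__(self):
        self.sums = defaultdict(lambda: None)
        self.counts = defaultdict(int)

    def fit_sample(self, features, label):
        """Works both for the offline pass and for later new entries."""
        if self.sums[label] is None:
            self.sums[label] = list(features)
        else:
            self.sums[label] = [s + f for s, f in zip(self.sums[label], features)]
        self.counts[label] += 1

    def predict(self, features):
        """Nearest centroid by squared Euclidean distance."""
        def dist(label):
            c = [s / self.counts[label] for s in self.sums[label]]
            return sum((a - b) ** 2 for a, b in zip(features, c))
        return min(self.counts, key=dist)
```

With a real network, the analogous operation is resuming optimization on the new samples from the converged weights; the risk noted above (a wrongly labelled entry degrading accuracy) applies to both.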

IV. EXPERIMENTAL RESULTS
Comparison of the proposed approach with other state-of-the-art methods shows undeniable progress on the 3D mesh recognition problem. As already discussed, the main contribution of this paper is an acceptable, accurate method for classifying 3D models; in addition, we achieved an almost precise method for part annotation by modifying the process and applying a nearly identical procedure and network. For comparison, we ran the proposed method on the ShapeNetCore-part dataset, a fairly large dataset with 16 categories. We investigated recognition with different options and inputs using the proposed neural networks. TABLE 1 clearly shows the improvement in recognition accuracy after concatenating the networks. The precise result obtained with volumetric data alone verifies the power of the proposed system; adding the two further parameters, point normal and curvature, produces even better results. The advantages of the method are not limited to accuracy: it also trains faster than some popular methods on the same problem. Matrix-based calculation made the preprocessing enormously faster than an ordinary loop-based implementation, and the network structure is simple, with fewer nodes and layers than many other methods. TABLE 2 compares recognition accuracy by category on ShapeNetCore-part.
Dataset: The proposed approach is evaluated on the publicly available ShapeNetCore-part dataset, which contains 18045 shapes across 16 categories with a wide range of objects. We use the default train-test split [16], which allows us to compare our approach with other methods under the same conditions.
Device: Processing is done on a desktop PC with an Intel Core i7 4 GHz processor, 16 GB RAM, and 64-bit Windows 10. An ordinary PC was chosen deliberately, to show that the deep network structure is lightweight, optimized, and efficient. TABLE 2 collects the overall classification results of different methods on the ShapeNetCore-part dataset. As the results show, the proposed merged network is superior to all the listed methods, demonstrating the power of combining five networks into one. Besides, compared with image-based techniques such as MVCNN [8], the proposed approach does not need to render the model from different views. The only time-consuming task is preprocessing the data to extract voxels, surface normals, and surface curvature; our matrix-based implementation makes this step as efficient as possible. TABLE 3 reports the estimation of the proposed deep neural network for each category separately; each category contains many objects, and we report the mean result. FIGURE 10 visualizes the recognition accuracy for both stages of the proposed method. The first-stage results demonstrate that the method performs the expected classification, especially on objects with enough detail to distinguish them, and the recognizer supported by part annotation shows further improvement in some categories. Although the part-annotation method itself is precise, using it for recognition on ShapeNetCore-part does not give a perfect result, owing to the variety of shapes in the dataset. When the part-annotator stage is applied to 3D models of body organs, which do not differ much within a category, the results should be further enhanced.
The proposed idea of combining volumetric data with point normal and curvature information works well for recognition. To extend the approach further, we applied a similar network, fed with octree-encoded 3D point information instead of voxel data, to the part-annotation task, and used this second network to verify the recognition results. TABLE 4 shows a step forward in part-annotation accuracy on ShapeNetCore-part. The approach combines an RNN on the position data with simple MLP layers on the normal and curvature information, reaching more accurate part-annotation results on the ShapeNetCore-part dataset. A comparison of the results confirms that the number of objects in each category is essential: if a class contains many different shapes with high variety, the accuracy is moderate; conversely, if the number is lower than an acceptable amount for training, the network cannot distinguish the category correctly. Therefore, the network performs best on classes with an adequate number of objects. As this section shows, the proposed recognition network gives precise results, better than some of the top methods in this field. To improve recognition accuracy further, we decided to use the segmentation results to refine the classification, which is why the already part-annotated ShapeNetCore-part was used in this research; accordingly, we also proposed an almost identical neural network model for part annotation.
FIGURE 11. Part-annotation results on 3 categories of the ShapeNetCore-part dataset. In each category, the left column shows the original point cloud, the middle column the ground truth, and the right column our part-annotation result.
The results of our part-annotation approach are shown in TABLE 4, compared with other methods on the same dataset, such as VoxNet [6] and PointNet++ [23]. We also show part-annotation results on the ShapeNetCore-part dataset in FIGURE 11, segmenting point-cloud data using the network trained for each category. The part-annotation stage segments a 3D model according to the category selected in the previous (recognition) stage; if the recognition stage classifies the model into the correct category, the part annotation will statistically match the selected category. Accordingly, this paper shows that these results can improve recognition alongside the other 3D global features, as FIGURE 10 demonstrates. On the other hand, many body-organ parts are scanned every day for education and research. With an offline learning method, we could not recognize them without retraining on the whole dataset. For this purpose, the interactive stage was added, so training can continue with new entries without starting over. This requires an interactive interface and an open-loop learning procedure, both provided by the suggested approach.

V. CONCLUSION AND FUTURE WORK
In this paper, we have suggested a combination of deep neural networks that can categorize a 3D object dataset and verify the results with statistical histograms from part annotation. Our approach relies on two essential features of 3D data, surface normal and curvature: the first captures direction variation, while the second concentrates on changes across the point cloud's surface. As the main feature of our model, we use voxel information, which naturally suppresses noisy points. The experimental results show that the proposed method is competitive with the best-known approaches on the ShapeNetCore-part dataset. One of our plans is to find a learning method that extracts 3D features automatically and replaces the hand-crafted point normal and curvature parameters. AutoEncoder networks have shown exciting results in extracting 3D features from a 3D object, which encourages us to continue this research toward methods that recognize features with such networks in an optimized way. Another preprocessing step that may be attractive to 3D-vision researchers is estimating the transformation of 3D objects, which is one of our next investigation targets.
PO YANG (Senior Member, IEEE) received the B.Sc. degree in computer science from Wuhan University, Wuhan, China, the M.Sc. degree in computer science from the University of Bristol, Bristol, U.K., and the Ph.D. degree in electronic engineering from Staffordshire University, Stoke-on-Trent, U.K. He is currently a Senior Lecturer of large-scale data fusion with the Department of Computer Science, The University of Sheffield, Sheffield, U.K. He holds a strong track record of high-quality publications and research experience. He has published over 40 articles. His current research interests include the Internet of Things, RFID and indoor localization, pervasive health, image processing, GPU, and parallel computing. More importantly, many of his research results have been translated into solutions to real-life problems and have made tremendous improvements to the quality of life for those concerned.

He has been invited to give over 100 keynote presentations in 23 countries and regions. He has published over 700 scholarly research articles, pioneered several new research directions, and made a number of landmark contributions in his field. He is a Fellow of the Australian Academy of Technological Sciences and Engineering. He received the Crump Prize for Excellence in Medical Engineering from UCLA. He has served as the Chair for the International Federation of Automatic Control (IFAC) Technical Committee on Biological and Medical Systems. He has organized/chaired over 100 major international conferences/symposia/workshops.