Performance Analysis of State-of-the-Art Deep Learning Models in the Visual-Based Apparent Personality Detection


  • W. M. K. S. Ilmini, Faculty of Graduate Studies, University of Sri Jayewardenepura and Intelligent Research Laboratory, Faculty of Computing, General Sir John Kotelawala Defence University
  • T. G. I. Fernando, Department of Computer Science, Faculty of Applied Sciences, University of Sri Jayewardenepura



This paper analyses the performance of pre-trained deep learning models as feature extractors for apparent personality trait detection (APD), applying several statistical methods to identify the best-performing pre-trained model. Model performance was measured by accuracy and computational cost. Personality is measured using the Big Five Personality Schema. CNN-RNN networks were designed using the VGG19, ResNet152, and VGGFace pre-trained models to predict personality traits from scene data. The models were compared using the mean accuracy attained and the average time taken for training and testing. Descriptive statistics, graphs, and inferential statistics were applied in the model comparisons. Results show that the ResNet152-based model achieved the highest mean accuracy on the test dataset (0.9077), followed by VGG19 (0.9036); VGGFace recorded the lowest (0.8962). ResNet152 consumed more time than the other architectures in training and testing because it has considerably more parameters than the other two architectures. Statistical tests provide no significant evidence that the VGG19- and ResNet152-based CNN-RNN models performed differently, leading to the conclusion that the VGG19-based model performed well despite its comparatively small number of parameters. The findings also reveal that satisfactory accuracy can be obtained from a limited number of frames extracted from each video, since the models achieved more than 90% accuracy.
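The inferential comparison described above, testing whether two models' mean accuracies differ significantly, can be sketched with a paired t-test over per-run test accuracies. The accuracy values below are illustrative placeholders chosen only to demonstrate the procedure, not the paper's raw results; `scipy` is assumed to be available.

```python
from scipy import stats

# Hypothetical per-run test accuracies for the VGG19- and ResNet152-based
# CNN-RNN models (illustrative values only, NOT the paper's raw data).
vgg19_acc = [0.9020, 0.9100, 0.8990, 0.9070, 0.9000]
resnet152_acc = [0.9120, 0.9010, 0.9150, 0.8980, 0.9125]

# Paired (related-samples) t-test: H0 is that the mean accuracies of the
# two models are equal across the paired runs.
t_stat, p_value = stats.ttest_rel(vgg19_acc, resnet152_acc)

print(f"t = {t_stat:.4f}, p = {p_value:.4f}")
# A p-value above the chosen significance level (e.g. 0.05) means there is
# no significant evidence that the two models perform differently.
```

A Wilcoxon signed-rank test (`stats.wilcoxon`) would be the non-parametric alternative if the accuracy differences cannot be assumed to be normally distributed.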