
11th Joint Symposium on Computational Intelligence (JSCI11)

October 15, 2021

The Joint Symposium on Computational Intelligence (JSCI) is an event first organised in 2016. It was initiated by the IEEE Computational Intelligence Society Thailand Chapter (IEEE-CIS Thailand), which aims to support research students and young researchers by creating a venue where participants can share and discuss their research prior to publishing their work. The event is open to all researchers who want to broaden their knowledge in the field of computational intelligence. The symposium will feature student paper presentations as well as invited talks.

This is a very good opportunity to share and discuss your work with other researchers in the field of computational intelligence. If you are interested in presenting your work at this symposium, please submit a paper via [EDAS]. The paper must conform to the IEEE Manuscript Templates for Conference Proceedings, available for download at [IEEE] or [Overleaf]. A 4–6 page paper must be submitted for review; papers will be double-blind peer-reviewed. Accepted papers will be presented in a special session at the 13th International Conference on Information Technology and Electrical Engineering (ICITEE 2021), held virtually on 14–15 October 2021, and will be published by IEEE.

Important Dates for JSCI11 at ICITEE 2021

Submission Deadline to JSCI11: August 25, 2021
Notification of Acceptance: September 15, 2021
Camera-ready Manuscript Submission: September 25, 2021
JSCI Special Session at ICITEE2021 Date: October 15, 2021


Virtual (ICT time UTC+7)

Organising Committee

  • Chanboon Sathitwiriyawong (King Mongkut’s Institute of Technology Ladkrabang)
  • Kuntpong Woraratpanya (King Mongkut’s Institute of Technology Ladkrabang)

  • Jonathan H. Chan (King Mongkut’s University of Technology Thonburi)
  • Phayung Meesad (King Mongkut’s University of Technology North Bangkok)
  • Kitsuchart Pasupa (King Mongkut’s Institute of Technology Ladkrabang)
  • Vithida Chongsuphajaisiddhi (King Mongkut’s University of Technology Thonburi)
  • Kiyota Hashimoto (Prince of Songkla University)
  • Sansanee Auephanwiriyakul (Chiang Mai University)

Invited Speaker

  • Syukron Abu Ishaq Alfarozi (Universitas Gadjah Mada)


Programme
10:50 – 11:40
Invited Talk: Syukron Abu Ishaq Alfarozi (Universitas Gadjah Mada)
Topic: TBC
11:40 – 12:05
An Evaluation of Transfer Learning With CheXNet on Lung Opacity Detection in COVID-19 and Pneumonia Chest Radiographs
Presenters: Andy Wei Liu and Jonathan H. Chan [PDF]
Abstract: As the COVID-19 pandemic continues to put immense stress on hospitals, healthcare workers, and intensive care units, a quick diagnosis and disease severity assessment for patients is crucial. This would allow clinicians to provide the right treatment early on and thus prevent serious illness later. Chest radiography is a fast method of diagnosing patients. By analyzing the presence and distribution of lung opacities in chest radiographs, clinicians can determine the severity of COVID-19 or other pneumonia and apply proper treatment early. Hence, research into models that can detect such lung opacities in chest radiographs would help clinicians diagnose efficiently. Currently, much research is being done on gathering and classifying chest radiographs. A milestone in this regard has been the development of CheXNet by the Stanford ML Group, which claims better performance than radiologists at classifying chest radiographs. In this study, the CheXNet feature extractor backbone is used to test whether it can improve the performance of lung opacity object detection models with transfer learning. No improvement in performance was observed on a variety of test datasets, with the models trained using the CheXNet feature extractor experiencing a slight decrease in performance on some test datasets.
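The transfer-learning setup this abstract evaluates can be sketched in miniature: a pretrained feature extractor is frozen, and only a small task head is trained on the new objective. This is an illustrative sketch only; the toy `frozen_backbone` function and logistic-regression head stand in for CheXNet's DenseNet-121 features and a real detection head.

```python
import math

def frozen_backbone(x):
    # Stand-in for pretrained features: fixed, never updated during fine-tuning.
    return [x[0] * x[1], math.tanh(x[0] + x[1])]

def train_head(data, lr=0.5, epochs=200):
    """Train a logistic-regression head (weights w, bias b) on frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_backbone(x)          # backbone output; gradients stop here
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - y                       # dLoss/dz for cross-entropy loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = frozen_backbone(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0

# Invented toy data: only the head's parameters change during training.
data = [([1.0, 1.0], 1), ([2.0, 1.5], 1), ([-1.0, 1.0], 0), ([-2.0, 0.5], 0)]
w, b = train_head(data)
print([predict(w, b, x) for x, _ in data])
```

The study's finding is that this kind of reuse is not guaranteed to help: a backbone pretrained for classification may transfer poorly to detection.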
12:05 – 12:30
Automatic Detection and Recognition of Thai Vehicle License Plate From CCTV Images
Presenters: Wichan Thumthong, Phayung Meesad, and Pita Jarupunphol [PDF]
Abstract: Detection and recognition of Thai vehicle license plates is a challenging area of research. We propose an automatic detection and recognition system for Thai vehicle license plates using CCTV images. For system development, we performed three steps: 1) license plate detection, 2) character segmentation, and 3) character recognition. In step 1, we used YOLOv4 and TensorFlow models as a feature extraction method for Thai license plate detection from CCTV. In this step, we separated the data into training and test sets of 3,000 images each; the dataset had two classes. The test results showed an accuracy of 96.20%. In step 2, we applied OpenCV for character segmentation and obtained an accuracy of 98.40%. In step 3, we used the characters’ points of interest for character recognition, converting images into text with the Tesseract OCR engine. The results showed that number recognition could reach an accuracy of 94.20%, while Thai character recognition reached an accuracy of 75.46%.
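The three-stage flow of such a system (detect the plate, segment it into characters, recognise each character) can be sketched as composed functions. The stubs below only mirror the data flow; the real stages in the paper are YOLOv4, OpenCV, and Tesseract, and the toy "frame" is invented for illustration.

```python
def detect_plate(image):
    """Stage 1 stand-in: return the cropped plate region (the row marked with '|')."""
    return next(row for row in image if "|" in row).strip("|")

def segment_characters(plate):
    """Stage 2 stand-in: split the plate crop into per-character pieces."""
    return [c for c in plate if not c.isspace()]

def recognise(char):
    """Stage 3 stand-in: map each character image to text (identity here, OCR in reality)."""
    return char

def read_plate(image):
    # Output of each stage feeds the next, exactly as in the paper's pipeline.
    plate = detect_plate(image)
    return "".join(recognise(c) for c in segment_characters(plate))

cctv_frame = [
    "..........",
    "|1 ก ข 2 3|",   # toy 'plate' row with Thai characters, as in the paper's domain
    "..........",
]
print(read_plate(cctv_frame))  # → 1กข23
```

A useful property of this decomposition is that each stage's accuracy can be measured independently, which is exactly how the abstract reports its 96.20%, 98.40%, and 75.46–94.20% figures.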
12:30 – 13:30 Lunch
13:30 – 14:15 ICITEE 2021 Keynote
14:30 – 14:55
Enhancement of Anime Imaging Enlargement Using Modified Super-Resolution CNN
Presenters: Kuntpong Woraratpanya, Tanakit Intaniyom, and Warinthorn Thananporn [PDF]
Abstract: Anime is a storytelling medium similar to movies and books. Anime images are a kind of artwork, almost entirely drawn by hand; hence, reproducing existing anime images at larger sizes and higher quality is expensive. Therefore, we propose a model based on convolutional neural networks to extract outstanding features of images, enlarge those images, and enhance the quality of anime images. We trained the model with a training set of 160 images and a validation set of 20 images, and tested the trained model with a test set of 20 images. The experimental results indicated that our model successfully enhanced image quality at a larger image size compared with common existing image enlargement methods and the original SRCNN method.
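The SRCNN-style flow the paper builds on is: enlarge the image first, then let convolutional layers refine the result. A minimal sketch, with nearest-neighbour enlargement standing in for SRCNN's bicubic interpolation and one fixed 3x3 kernel standing in for the learned mapping layers:

```python
def upscale(img, factor=2):
    """Nearest-neighbour enlargement (SRCNN itself uses bicubic interpolation)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]
        out.extend([wide] * factor)
    return [list(r) for r in out]

def conv2d(img, kernel):
    """'Same' 3x3 convolution with zero padding, as in each SRCNN layer."""
    h, w = len(img), len(img[0])
    def px(y, x):
        return img[y][x] if 0 <= y < h and 0 <= x < w else 0.0
    return [[sum(kernel[ky][kx] * px(y + ky - 1, x + kx - 1)
                 for ky in range(3) for kx in range(3))
             for x in range(w)] for y in range(h)]

# Identity kernel as a placeholder for learned weights: output = enlarged input.
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
tiny = [[0.0, 1.0], [1.0, 0.0]]
enlarged = conv2d(upscale(tiny), identity)
print(len(enlarged), len(enlarged[0]))  # → 4 4
```

In the trained model the kernels are learned so that the convolution sharpens and cleans up the interpolated image rather than passing it through unchanged.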
14:55 – 15:20
Enhancement Multi-Class Facial Emotion Detection With Emo-VGGNet
Presenters: Quang Nhat Tran and Phayung Meesad [PDF]
Abstract: Deep learning networks have successfully demonstrated their impact, as well as their potential to solve problems in computer vision such as image classification and object detection. Facial emotion detection has been one of the challenges in this field for many years. With the rise of convolutional neural networks, several models have been released and are widely used, including AlexNet, VGGNet, GoogLeNet, ResNet, and SqueezeNet. In this article, we choose VGGNet and conduct several experiments to raise accuracy to 80%–99% and to reduce the loss, applying the model to facial emotion detection with multi-class label classification.
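Multi-class classification networks of this kind end in the same output stage: class scores (logits) are turned into a probability distribution with softmax, and the predicted emotion is the arg-max. A sketch of that stage; the emotion labels and logit values below are invented for illustration, not taken from the paper.

```python
import math

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

def softmax(logits):
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_emotion(logits):
    """Return the arg-max class and its probability."""
    probs = softmax(logits)
    best = probs.index(max(probs))
    return EMOTIONS[best], probs[best]

label, p = predict_emotion([0.2, 3.1, 1.0, -0.5, 0.4])
print(label)  # → happy
```

Training then minimises the cross-entropy between this distribution and the one-hot ground-truth label, which is the "loss" the abstract refers to reducing.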
15:20 – 15:45
Personality Type Prediction From Text Based on Myers-Briggs Type Indicator
Presenters: Sakdipat Ontoum and Jonathan H. Chan [PDF]
Abstract: The term “personality” can be defined as the individual differences in characteristic patterns of thinking, feeling, and behavior. This project proposes various machine learning techniques, including Naïve Bayes, Support Vector Machine, and Recurrent Neural Network, to predict people’s personality from text based on the Myers-Briggs Type Indicator (MBTI). Furthermore, this project applies CRISP-DM (the Cross-Industry Standard Process for Data Mining), an industry-proven way to guide data mining efforts. Since CRISP-DM is an iterative process, we have combined it with an agile methodology, a rapid iterative software development approach in which the development cycle is shrunk to a minimum.
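The simplest of the baselines named above, Naïve Bayes, can be sketched end to end for one MBTI axis (Introvert vs. Extrovert). The toy training posts below are invented for illustration; the project trains on real MBTI-labelled text.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (label, text). Multinomial NB with Laplace smoothing."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for label, text in docs:
        label_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    total = sum(label_counts.values())
    log_prior = {c: math.log(label_counts[c] / total) for c in label_counts}
    log_like = {}
    for c in label_counts:
        denom = sum(word_counts[c].values()) + len(vocab)   # +1 smoothing
        log_like[c] = {w: math.log((word_counts[c][w] + 1) / denom) for w in vocab}
    return log_prior, log_like, vocab

def predict(model, text):
    """Pick the class maximising log P(class) + sum of log P(word | class)."""
    log_prior, log_like, vocab = model
    scores = {}
    for c in log_prior:
        s = log_prior[c]
        for w in text.lower().split():
            if w in vocab:
                s += log_like[c][w]
        scores[c] = s
    return max(scores, key=scores.get)

docs = [
    ("I", "quiet evening reading books alone"),
    ("I", "prefer writing alone in silence"),
    ("E", "love big parties and meeting people"),
    ("E", "talking with crowds energises me"),
]
model = train_nb(docs)
print(predict(model, "reading alone tonight"))  # → I
```

A full MBTI label is four such binary axes (I/E, N/S, T/F, J/P), so one classifier per axis is a common decomposition of the sixteen-type problem.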