10th Joint Symposium on Computational Intelligence (JSCI10)
The Joint Symposium on Computational Intelligence (JSCI) is an event first organised in 2016 by the IEEE Computational Intelligence Society Thailand Chapter (IEEE-CIS Thailand). It aims to support research students and young researchers by providing a venue where participants can share and discuss their research prior to publishing their work. The event is open to all researchers who want to broaden their knowledge in the field of computational intelligence. The symposium will feature student paper presentations as well as invited talks.
This is a very good opportunity to share and discuss your work with other researchers in the field of computational intelligence. If you are interested in presenting your work at this symposium, please submit a paper via [EasyChair]. The paper must conform to the IEEE Manuscript Templates for Conference Proceedings, available for download at [IEEE] or [Overleaf]. A 4-page paper must be submitted for review; papers will be double-blind peer-reviewed. Accepted papers will be available online in the JSCI10 proceedings and on TechRxiv. Authors of all JSCI10 papers are invited to extend their papers and submit the revised versions to a special session at the 12th International Conference on Advances in Information Technology (IAIT2021), 29 June – 1 July 2021, Bangkok, Thailand, which will be published by ACM in the ACM International Conference Proceeding Series (ICPS). The books of this series are submitted for indexing in ISI Proceedings, EI-Compendex, DBLP, SCOPUS, and Google Scholar. The ACM Proceedings Template and instructions are available at [ACM].
Important Dates for JSCI10 and Special Session – JSCI at IAIT2021
Submission Deadline to JSCI10: April 14, 2021 (4 pages IEEE format)
Notification of Acceptance: April 21, 2021
JSCI10 Date: Monday, May 10, 2021, as part of DLAI5.
Camera-ready (Extended) Manuscript Submission: June 18, 2021 (5-10 pages ACM format)
JSCI Special Session at IAIT2021 Date: 29 June – 1 July 2021
Virtual (ICT time UTC+7)
- Chanboon Sathitwiriyawong (King Mongkut’s Institute of Technology Ladkrabang)
- Jonathan H. Chan (King Mongkut’s University of Technology Thonburi)
- Phayung Meesad (King Mongkut’s University of Technology North Bangkok)
- Kuntpong Woraratpanya (King Mongkut’s Institute of Technology Ladkrabang)
- Kitsuchart Pasupa (King Mongkut’s Institute of Technology Ladkrabang)
- Vithida Chongsuphajaisiddhi (King Mongkut’s University of Technology Thonburi)
- Kiyota Hashimoto (Prince of Songkla University)
- Sansanee Auephanwiriyakul (Chiang Mai University)
- Sung-Bae Cho (Yonsei University)
- Claudio Angione (Teesside University)
- Monchai Lertsutthiwong (Kasikorn Labs)
JSCI10 Program at DLAI5 on Monday, May 10, 2021
|Time||Activities/Paper Presentation||Abstract||Student Presenters||Paper|
|7:55 – 8:00||Opening Remarks||Chair of IEEE CIS Thailand Chapter, Associate Professor Dr. Jonathan Chan (School of Information Technology, King Mongkut’s University of Technology Thonburi)|
|8:00 – 9:00||Keynote Speech: Professor Dr. Sung-Bae Cho, Soft Computing Laboratory, Dept. of Computer Science, Yonsei University, South Korea
Topic: Exploiting Latent Space of Deep Learning for Practical Applications
|Deep learning has opened another renaissance in artificial intelligence, a long-standing dream of humankind. It gives us a great opportunity to tackle difficult problems in many applications of computational intelligence. To be successful in practical applications of deep learning, we need to make use of several models together. In this talk, I will present the key ideas behind deep learning models and introduce a generative model trained via an adversarial process that demonstrates impressive performance in several applications. It simultaneously trains a generative model to capture the data distribution and a discriminative model to estimate the probability that a sample came from the training data. In addition, I will explain several techniques for exploiting the latent space inside deep learning models. The latent space of a deep learning model provides an important representation of the problem at hand, enabling efficient data analysis and creative applications. To verify these ideas, I will present three applications: predicting electric power demand, detecting malware for cyber security, and detecting anomalies in video sequences, using 1) a state-explainable autoencoder that encodes power demand up to the present and transcribes it into the latent space, 2) a latent space predefined with a mixture of multivariate Gaussian distributions to enhance the performance of malware generation and detection, and 3) an adversarial autoencoder for detecting various outliers in surveillance video.|
|9:00 – 10:15||Panel Session with Ashish Ghosh, Sung-Bae Cho, Jonathan Chan||Topic: AI for COVID-19|
|10:15-11:00||Industry Keynote Speech: Dr. Monchai Lertsutthiwong, Kasikorn Labs||Topic: Global challenge for AI in business transformation||PDF, Slides|
|11:00-12:00||Hackathon updates + networking||Topic: DLAI5 Hackathon updates|
|13:00-14:00||Keynote Speech: Associate Professor Dr. Claudio Angione, Teesside University, UK
Topic: Deep learning approaches to predict the cell metabolic phenotype
|In recent biomedical research, deep learning has been widely used for the exploitation of omics data when predicting cell phenotype, suffering however from a lack of biological interpretability. In parallel, constraint-based mathematical modelling of metabolism has gained popularity due to its scope and flexibility, enabling mechanistic insights into the genotype-phenotype-environment relationship within cells. These two computational frameworks have mostly been used in isolation, having distinct research communities associated with them. However, their complementary characteristics and common mathematical bases make them particularly suitable to be combined. I will describe how machine learning can be combined with constraint-based modelling, discussing the mathematical and practical aspects involved, and showing several applications in biotechnology and biomedicine. Instead of applying machine learning to omics data directly, we propose a multi-view approach merging experimental omics data and model-generated predictions, based on known biochemistry. This architecture can contribute disjoint information towards biologically-informed and interpretable machine learning, including key mechanistic information in an otherwise biology-agnostic learning process.||PDF, Slides|
|14:00-14:25||Learning from Others: A Data Driven Transfer Learning based Daily New COVID-19 Cases Prediction in India using Ensembles of LSTM-RNNs||Predicting the number of daily COVID-19 cases is a critical necessity for every country. In a densely populated country like India, which currently has the third-highest number of infections and limited medical supplies, the authorities need to know the statistics beforehand. In this manuscript, a data-driven transfer-learning-based model is proposed that takes into account the conditions of different countries that have experienced the COVID-19 infection in different ways. We chose four countries as the source domains for the transfer learning scenario: the United States of America, Spain, Brazil, and Bangladesh. We pretrained four different LSTM-RNN models, one on each country’s data, and re-trained (fine-tuned) each model using only a very small portion of the Indian COVID-19 data. The predictions of the four models are averaged to obtain the final prediction for new test data. Such an ensemble model outperforms all the compared models. We argue that this is because the four countries we pretrained on experienced four different types of conditions, so the four models together capture the diversity across countries. As India is a diverse nation with a variety of climates, it makes sense to incorporate such transfer learning techniques in these situations.||Debayan Goswami, Department of Computer Science and Engineering, Jadavpur University, Kolkata, India|
|14:25-14:50||Predicting the Sincerity of a Question Asked||The rapid growth of online applications makes it increasingly difficult to assess whether a question is sincere or not, an assessment that is mandatory for many marketing and financial companies. Many applications will be reconfigured beyond recognition, especially those handling text and images, while others face potential extinction as a corollary of advances in technology and computer science in particular. Analyzing text and image data is truly needed for extracting valuable insights. In this paper, we analyzed the Quora dataset obtained from Kaggle.com to filter out insincere and spam content. We used different preprocessing algorithms and analysis models provided in PySpark. In addition, we analyzed the manner in which users write their posts via the proposed prediction models. Finally, we show the most accurate of the selected algorithms for classifying questions on Quora. The Gradient Boosted Tree was the best model, with an accuracy of 79.5%. Compared to other methods, including the same models built in Scikit-learn and an LSTM+GRU machine learning model, the models applied in PySpark gave better results in classifying questions on Quora.
|Tuan Minh, King Mongkut’s University of Technology North Bangkok|
|14:50-15:15||Particle Size Estimation in Mixed Commercial Waste Images Using Deep Learning||We assessed several state-of-the-art deep learning algorithms and computer vision techniques for estimating the particle size of mixed commercial waste from images. In waste management, the first step is often coarse shredding, with the particle size used to set up the shredder machine. The difficulty is separating the waste particles in an image, which cannot be done well. This work focused on estimating size using the texture of the input image, captured at a fixed height from the camera lens to the ground. We found that EfficientNet achieved the best performance, with an F1-score of 0.72 and an accuracy of 75.89%.||Phongsathorn Kittiworapanya, King Mongkut’s Institute of Technology Ladkrabang|
|15:15-15:40||Improving the Representative Concatenated Frame Images Base on Convolutional Neural Network for Thai Lip Reading Recognition||Lip reading can be improved by providing more training data to deep learning techniques. This work aims to improve the concatenated frame images used as input to deep learning processes by reducing the number of frames and the image size for visual speech recognition in Thai. The developed model uses a Convolutional Neural Network to detect and classify lip motion from speakers in Thai-language videos. The experimental results showed that the developed model with five concatenated frame images achieved a training accuracy of 95.67% with a training loss of 4.23%, and a validation accuracy of 87.12% with a validation loss of 8.79%. This indicates that concatenated five-frame images are an effective input representation that improves Thai lip-reading recognition using a convolutional neural network.
|Lap Poomhiran, King Mongkut’s University of Technology North Bangkok|
|15:40-15:45||Group photo + networking|
|15:45-16:45||Hackathon session||Wrap up of DLAI5 hackathon event. Results on Kaggle announced for Phase 1|
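As a rough illustration of the ensemble idea described in the 14:00 transfer-learning talk (pretrain one model per source country, fine-tune each on a small target sample, then average their forecasts), here is a minimal sketch. The linear stand-in "models" and their coefficients are hypothetical placeholders for the four fine-tuned LSTM-RNNs, not values from the paper:

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the forecasts of independently fine-tuned models,
    as in the four-country ensemble described in the talk."""
    return np.mean([m(x) for m in models], axis=0)

# Hypothetical stand-ins for the four fine-tuned LSTM-RNNs,
# one per source country (USA, Spain, Brazil, Bangladesh).
models = [
    lambda x: 1.0 * x,
    lambda x: 1.2 * x,
    lambda x: 0.8 * x,
    lambda x: 1.0 * x,
]

# A toy sequence of recent daily case counts for the target country.
recent_cases = np.array([100.0, 200.0, 300.0])
forecast = ensemble_predict(models, recent_cases)
print(forecast)  # element-wise mean of the four forecasts
```

The averaging step is the only part sketched here; the pretraining and fine-tuning of each source-country model would be done with a deep learning framework before this point.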