
Making BIG Brainwaves with AI and Headsets


Author: January Barnes


Photo credit - TED AI Vienna and Robert Leslie

Yes, you read that headline correctly!


This is exactly what Professor Chin-Teng Lin, Co-Director of the Australian AI Institute at the University of Technology Sydney, showcased at the TEDAI Vienna conference last month.


What I had previously thought was only possible in science fiction films was demoed live on stage: a trained LLM that understands silently read words and sentences, reaching 50% accuracy reading sentences composed from 24 words and 75% accuracy when selecting one of four objects simply by thinking of it.


Lin successfully showcased how, with AI, you can control a computer interface with no keyboard or mouse, just a headset device and an EEG signal.


As journalists we are constantly confronted with both real and philosophical questions surrounding man versus machine. Age-old universal questions and newly emerging dilemmas haunt us: What does it mean to be human? What key risks does AI pose to humankind? What AI and computing ethics will we need to navigate this new AI-driven world? And how is AI bridging the gap and accelerating brain-computer interfaces such as the electroencephalogram (EEG)?


AI is without doubt revolutionising how we approach security, economics and even knowledge itself. Historically, the brain has been central to knowledge as we have known it. But AI and brain-computing devices are changing how we experience reality and our thought processes within it. With all these questions bouncing around my mind and the internet, I thought it was the perfect time to speak to an expert in the BCI and EEG field, none other than Professor Chin-Teng Lin himself. Lin is one of the most pioneering and distinguished researchers in the field of AI and brain-computer interfaces (BCIs).


Photo credit - TED AI Vienna and Robert Leslie


Lin recently gave a TED Talk at TEDAI Vienna, where ParlayMe was onsite to interview some of the most progressive and intelligent minds in the AI space. So it was only right to interview Lin himself. Lin studies the brain, behaviours and the physiological changes that occur when human cognitive functions are at work, and ways to combine human physiological information with artificial intelligence to develop monitoring and feedback systems. His quest is to improve the flow of information from humans to robots, so that humans can make better decisions and respond to complex, stressful situations, and so that robots can better understand the status and intentions of humans to augment human-machine cooperation. This is an emerging trend identified in the fifth Industrial Revolution to deliver a common good for humanity.


Lin is also the inventor of Fuzzy Neural Networks (FNNs), introducing neural-network learning into fuzzy systems and incorporating human-like reasoning into neural networks. Since then, about 500,000 articles about FNNs have been published online. Lin is a true pioneer in this area, having started his research in 1992.


Lin joined UTS in 2016 as Co-Director of the Australian Artificial Intelligence Institute (AAII). AAII is a world-leading research institute in artificial intelligence and Australia's largest research hub in the field. AAII staff have published over 1,300 papers, with over 500 of these in highly reputable international journals. Lin is also the founding director of the Computational Intelligence and Brain Computer Interface Lab at the Australian AI Institute at UTS.


The brain-computer interface (BCI) market is experiencing significant growth, projected to increase from $1.74 billion in 2022 to $6.2 billion by 2030. Lin is on a quest to read brain thoughts without intrusive implants, and we sat down with him at the TEDAI Vienna conference to discuss his BCI endeavours and more!


1) Firstly, how did you get into the world of AI? You have been in the world of neural networks for decades - can you tell me where your background and passion for this field began?


When I was a freshman at National Chiao-Tung University in Taiwan in 1983, I was deeply attracted by a walking robot demonstration on campus, and I started to read related articles and take related courses. My enthusiasm for robotics carried through to my Master's research at Purdue University in 1988. After I finished my Master's thesis on robotics control, I realized that the intelligence of robots at that time was still far away from human intelligence. I understood that it is critical to have a human-like "brain" to make robots smarter. Hence, I switched my focus to AI and neural networks research in my PhD studies at Purdue University from 1989 to 1992.

2) You are the actual inventor of Fuzzy Neural Networks. Please tell us a little bit about that, because it's fascinating - ultimately, systems to enhance modelling, reasoning and, in the end, decision-making in intelligent systems. Tell us a little about how you started that. There have since been more than 500,000 articles written on Fuzzy Neural Networks. For someone who has never heard of Fuzzy Neural Networks, how would you explain them?


I invented the concept of the fuzzy neural network while pursuing my PhD degree at Purdue University from 1988 to 1992. Before I had this idea, I spent two years reading more than 300 papers on artificial neural networks. I found that a neural network is a kind of data crunching; we must feed it a huge amount of data to train it. This is not the same as human intelligence, which includes high-level reasoning and low-level neuron activations. For example, when we learn to ride a bicycle, we train not only the brain but also the muscle neurons.
This inspired me to develop the framework of the fuzzy neural network, which integrates the high-level reasoning mechanism of fuzzy logic and the low-level learning ability of neural networks into a single functional unit. It is a complementarily cooperative model of fuzzy logic and neural networks; it brings learning ability into fuzzy systems and a human-like reasoning structure into neural networks. It can also house expert-given IF-THEN rules as background knowledge before adaptive learning. Fuzzy neural networks have several advantages, including high learning speed, smaller network size, and a human-understandable network structure. The fuzzy neural network incorporates IF-THEN statements into neural networks, making them expressible in human language in addition to current interpretable AI technology.
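For readers who have never seen one, here is a minimal, hypothetical sketch of the general idea in Python: a tiny fuzzy neural layer in which each "rule" has Gaussian membership functions (the fuzzy premise) and a linear consequent, and the output is a firing-strength-weighted combination. This is an illustration of the concept only, not Professor Lin's original formulation; the names, shapes and parameter values are assumptions.

```python
# Illustrative fuzzy neural network forward pass (generic sketch, not Lin's exact model).
import numpy as np

class TinyFuzzyNeuralNetwork:
    def __init__(self, n_inputs, n_rules, seed=0):
        rng = np.random.default_rng(seed)
        # Premise part: one Gaussian membership function per input, per rule.
        self.centers = rng.uniform(-1, 1, size=(n_rules, n_inputs))
        self.widths = np.full((n_rules, n_inputs), 0.5)
        # Consequent part: "IF x is near center_r THEN y = w_r . x + b_r".
        self.weights = rng.normal(0, 0.1, size=(n_rules, n_inputs))
        self.bias = np.zeros(n_rules)

    def forward(self, x):
        # Membership degrees of each input to each rule's fuzzy sets.
        mu = np.exp(-((x - self.centers) ** 2) / (2 * self.widths ** 2))
        # Rule firing strength: product across inputs.
        firing = mu.prod(axis=1)
        # Normalise firing strengths so they sum to one.
        norm = firing / (firing.sum() + 1e-12)
        # Each rule's linear consequent output, then the weighted (defuzzified) sum.
        rule_out = self.weights @ x + self.bias
        return float(norm @ rule_out)

fnn = TinyFuzzyNeuralNetwork(n_inputs=2, n_rules=4)
print(fnn.forward(np.array([0.3, -0.7])))
```

In a full fuzzy neural network, the centres, widths and consequent weights would be learned from data by gradient descent, and the rules could be seeded from expert IF-THEN knowledge before that adaptive learning begins.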

3) Now let's talk about the Australian Artificial Intelligence Institute (AAII), which you started at the University of Technology Sydney in 2016 as Co-Director. I also went to UTS, so I know what a great university it is, but why was UTS chosen as the birthplace for AAII? There are many institutions to choose from in Australia and around the world - why was UTS the right place for it?


Before joining UTS, I worked as a professor and distinguished professor in Taiwan for 26 years. I experienced all the research roles and the administrative roles too. With UTS, I hoped to find a place where I could regain my passion for research: 100% pure research. UTS agreed to make me a full-time researcher. I could finally free my hands, so to speak, and my mind from the administrative side and focus on the research. Another reason is that UTS has one of the strongest AI programs in computer science. UTS's AI was ranked number 1 in Australia and number 4 globally (2024) out of a total of 142 universities researching AI.

Photo credit - TED AI Vienna and Robert Leslie

4) You are also the founding Director of the Computational Intelligence and Brain Computer Interface Lab. I also want to talk about BCIs, because you spoke about them in your TED Talk today, and it's fascinating stuff. So can you tell us a little bit more about how you created the lab and, primarily, how you're working with BCIs? It's the stuff sci-fi is made of, right? When I was a young girl, or even five years ago, you'd think sci-fi, think wearables, and presume it was all far in the future. But this isn't true. There are invasive BCIs (such as BCIs based on intracortical or ECoG signals) and non-invasive EEG-based BCIs. Can you speak a little bit more about these?


Using fuzzy neural networks as a stepping stone, I grew my research areas into brain science and natural cognition, and brain-computer interfaces (BCI). My research in natural cognition was motivated by my realisation that a breakthrough enhancement in the human-like intelligence of machine learning relies on a better and deeper understanding of the human brain. I have been working on BCI for direct communication between the brain and machines since 2004.
I developed a series of wearable EEG headsets that do this. I probed the brain dynamics behind different cognitive states such as fatigue, stress, disorientation, and mental workload during real-life natural tasks such as driving, walking, and biking. With the advent of the new global AI age around 2015, I identified the next milestone of BCI development: applying modern AI to decipher the deeper meaning of brain signals for efficient brain-machine communication in daily-life applications, transforming BCI into an everyone and everyday technology. This motivated me to establish the Computational Intelligence and BCI Lab at UTS in 2016; computational intelligence is one mainstream of AI. As for the two types of BCI that you mentioned, the one thing I don't want to do is open the skull and put something inside the brain, so all our BCI programs are non-invasive.
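To make the wearable-EEG side of this concrete, here is a hedged, generic sketch of the kind of preprocessing such systems commonly use: band-pass filtering the raw signal and extracting band-power features that a cognitive-state classifier (for example, a fatigue detector) could consume. It is not the lab's actual pipeline; the sampling rate, channel count and band edges are assumed values chosen for illustration.

```python
# Generic EEG feature-extraction sketch (assumed parameters, not the lab's pipeline).
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 250                      # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass(eeg, low, high, fs=FS, order=4):
    """Zero-phase band-pass filter applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def band_powers(epoch, fs=FS):
    """Average power per band for one epoch of shape (channels, samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs < high)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)   # one feature vector per epoch

# Example: 8 channels, 4 seconds of simulated EEG.
epoch = np.random.randn(8, FS * 4)
features = band_powers(bandpass(epoch, 1, 40))
print(features.shape)              # (24,) = 8 channels x 3 bands
```

Features like these would then be fed to whatever classifier estimates the cognitive state of interest.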

5) When it comes to invasive and non-invasive do you see the general public gravitating towards invasive or non-invasive consumer applications/products?


To speak frankly, I think non-invasive consumer products will be more acceptable to the general public. However, both invasive and non-invasive BCIs have their niche and value in different application scenarios. For example, non-invasive BCI can be used to augment human performance for the general public, while invasive BCI can efficiently assist people with neurological diseases, such as people who are paralyzed.

6) Let's talk EEG again. You're getting around a 50% accuracy rate. Is this where we're at? Which is quite amazing, because it wasn't long ago that it was a 40% accuracy rate. So how fast are we moving towards the ideal of 100% accuracy?


Good question. When I say 50%, I mean specifically the EEG-to-Text translation, decoding the words in the brain over 108 sentences with 24 words. How to push beyond 50% is quite challenging. If you want to talk about decoding full natural language for general users, then I predict it could be another 10 years before we reach that. However, we are working in several ways to make this EEG-to-Text translation technology practically useful in daily life in a shorter time. For example, we have reached relatively high accuracy for limited words and limited sentences in some specific domains. We are also developing an online calibration method to tune our model for a specific user, reaching high accuracy as a personalized system.
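To illustrate the personalisation idea at the end of that answer, here is a conceptual sketch, under stated assumptions, of what online calibration can look like: a pretrained EEG encoder is kept frozen and only a small per-user head is fine-tuned on a handful of labelled calibration trials. The architecture, dimensions and training loop are stand-ins for illustration, not the published EEG-to-Text method.

```python
# Conceptual per-user calibration sketch (all sizes and layers are assumptions).
import torch
import torch.nn as nn

N_CLASSES = 24           # assumed closed set of candidate words, for illustration
FEAT_DIM = 128           # assumed encoder feature size

encoder = nn.Sequential(nn.Linear(8 * 250, FEAT_DIM), nn.ReLU())  # stand-in for a pretrained EEG encoder
user_head = nn.Linear(FEAT_DIM, N_CLASSES)                        # lightweight per-user layer

for p in encoder.parameters():      # keep the shared pretrained model fixed
    p.requires_grad = False

opt = torch.optim.Adam(user_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few labelled calibration trials recorded from the new user (simulated here).
calib_eeg = torch.randn(16, 8 * 250)
calib_words = torch.randint(0, N_CLASSES, (16,))

for _ in range(50):                 # short online calibration loop
    opt.zero_grad()
    logits = user_head(encoder(calib_eeg))
    loss = loss_fn(logits, calib_words)
    loss.backward()
    opt.step()
```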

Photo credit - TED AI Vienna and Robert Leslie

7) So the BCI market is huge and it's experiencing extreme growth. It's projected to increase from $1.74 billion to $6.2 billion by 2030. What's powering this growth? Is it AI, is it robots, is it EEG?


Getting from the brain to the computer efficiently is a real bottleneck for any computer application, especially with the fast advent of new AI technology, which makes machines much smarter. Hence, I believe a natural and efficient human-machine or human-AI interface will facilitate seamless future human-AI cooperation and become the market focus following the AI era. BCI is an interface that works in a natural way, based on the way your brain works naturally. I am passionate about how important this technology can be. An exciting point is linking the brain-computer interface to the wearable computer. You already have a computer on your head. Your brain is an interface. You will be able to see information and give commands, all through sensors on your head and the AI. But it is bigger than that. It is not only about controlling a computer. Natural BCI also provides another way for humans to communicate with humans. For example, it allows people who are not able to speak to communicate with others, or it can be used when privacy or silence is required.


8) What industries do you think are utilising BCIs or are early adopters? You mentioned the military. Are there other industries you're seeing adopt it or look into it? What kinds of industries do you think can benefit?


As I said, combining wearable computers with BCI could have a large market for the general public as an everyday and everyone technology, like smartwatches and AR glasses. One niche of BCI technology is hands-free control, which can be highly beneficial for field applications. For example, operators in a manufacturing plant already wear AR glasses to see real-time information on the display.
By combining this with BCI technology, they will be able to issue commands and select components or service functions directly from the head, keeping both hands free for other purposes. Such applications are also valuable for many other field workers and can easily be extended to other daily-life scenarios, including smart homes. What I have mentioned here is only one aspect of BCI application, focusing on EEG-to-Text or EEG-to-Command BCI technology. General BCI technology has a full spectrum of applications covering gaming, robots, drones, fatigue detection, emotion identification, sleep quality assessment, and medical and health diagnosis such as migraine prediction. Hence, I think BCI technology can benefit a wide spectrum of industries, including the IT, AI, semiconductor, communications, control, medical device, healthcare and defense industries.

Photo credit - TED AI Vienna and Robert Leslie

9) Are you developing this technology?


Yes, my team and I are developing some of the applications I mentioned, and I hope to show them to the world in the coming years.


Research:

[1] Y. Duan, J. Zhou, Z. Wang, Y. K. Wang, and C. T. Lin, "DeWave: Discrete EEG Waves Encoding for Brain Dynamics to Text Translation", 2023 Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, USA, December 10-16, 2023. (selected as spotlight research).



[2] J. Zhou, Y. Duan, Y. C. Chang, Y. K. Wang, and C. T. Lin, “BELT: Bootstrapped EEG-to-language Training by Natural Language Supervision,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 32, pp. 3278-3288, August 2024. (DOI: 10.1109/TNSRE.2024.3450795) https://ieeexplore.ieee.org/abstract/document/10649644


[3] J. Zhou, Y. Duan, Z. Zhao, Y. C. Chang, Y. K. Wang, T. Do, and C. T. Lin, "Towards Linguistic Neural Representation Learning and Sentence Retrieval from Electroencephalogram Recordings," in Proceedings of the 1st International Workshop on Brain-Computer Interfaces (BCI) for Multimedia Understanding, pp. 19-28, October 2024. (DOI: 10.1145/3688862.3689109) (selected as Best Paper). https://dl.acm.org/doi/abs/10.1145/3688862.3689109



About Professor Chin-Teng Lin


Distinguished Professor Chin-Teng Lin received a Bachelor of Science from National Chiao-Tung University (NCTU), Taiwan in 1986, and holds Master's and PhD degrees in Electrical Engineering from Purdue University, USA, received in 1989 and 1992, respectively.

He is currently a Distinguished Professor at the School of Computer Science, Director of the Human Centric AI (HAI) Centre and Co-Director of the Australian Artificial Intelligence Institute (AAII) within the Faculty of Engineering and Information Technology at the University of Technology Sydney, Australia. He is also an Honorary Chair Professor of Electrical and Computer Engineering at NCTU. For his contributions to biologically inspired information systems, Prof Lin was awarded Fellowship of the IEEE in 2005 and of the International Fuzzy Systems Association (IFSA) in 2012. He received the IEEE Fuzzy Systems Pioneer Award in 2017.

He has held notable positions as Editor-in-Chief of IEEE Transactions on Fuzzy Systems (2011-2016); seats on the Boards of Governors of the IEEE Circuits and Systems (CAS) Society (2005-2008), the IEEE Systems, Man, and Cybernetics (SMC) Society (2003-2005) and the IEEE Computational Intelligence Society (2008-2010); Chair of the IEEE Taipei Section (2009-2010); Chair of the IEEE CIS Awards Committee (2022, 2023); Distinguished Lecturer with the IEEE CAS Society (2003-2005) and the CIS Society (2015-2017); Chair of the IEEE CIS Distinguished Lecturer Program Committee (2018-2019); Deputy Editor-in-Chief of IEEE Transactions on Circuits and Systems-II (2006-2008); Program Chair of the IEEE International Conference on Systems, Man, and Cybernetics (2005); and General Chair of the 2011 IEEE International Conference on Fuzzy Systems.

Prof Lin is the co-author of Neural Fuzzy Systems (Prentice-Hall) and the author of Neural Fuzzy Control Systems with Structure and Parameter Learning (World Scientific). His 948 publications include 3 books, 28 book chapters, 485 journal papers and 432 refereed conference papers, including about 232 IEEE journal papers in the areas of neural networks, fuzzy systems, brain-computer interfaces, multimedia information processing, cognitive neuro-engineering, and human-machine teaming, which have been cited more than 40,065 times. His h-index is 96, and his i10-index is 464.


About the Author

January Barnes - Founder/Head Tech Reporter and Podcaster of ParlayMe, an interactive tech-news platform for startups, investors, entrepreneurs and business leaders. Looking for PR and digital content creation that will amplify your startup, business, profile or enterprise? Apply to become a ParlayMe Member today and #ParlayWithUs - https://www.parlayme.com/memberships


