Intel Develops Controversial AI to Detect Emotional States of Students

(Image credit: Shutterstock)

An Intel-developed software solution aims to apply the power of artificial intelligence to the faces and body language of digital students. According to Protocol, the solution is being distributed as part of the “Class” software product and aims to aid teachers by showing them the AI-inferred mental states (such as boredom, distraction, or confusion) of each student. Intel aims to eventually expand the program into broader markets. However, the technology has been met with pushback that brings debates on AI, science, ethics, and privacy to the forefront.

The AI-based feature, which was developed in partnership with Classroom Technologies, is integrated with Zoom via Classroom Technologies’ “Class” software product. It can be used to classify students’ body language and facial expressions whenever digital classes are held through the videoconferencing application. Citing teachers’ experiences with remote lessons during the COVID-19 pandemic, Michael Chasen, co-founder and CEO of Classroom Technologies, hopes the software gives teachers additional insights, ultimately bettering remote learning experiences.

The software makes use of students’ video streams, which it feeds into the AI engine alongside contextual, real-time information that allows it to classify students’ understanding of the subject matter. Sinem Aslan, a research scientist at Intel who helped develop the technology, says that the main objective is to improve one-on-one teaching sessions by allowing the teacher to react in real-time to each student’s state of mind (nudging them in whatever direction is deemed necessary).

But while Intel and Classroom Technologies’ aim may be well-intentioned, the basic scientific premise behind the AI solution – that body language and other external signals can be accurately used to infer a person’s mental state – is far from being a closed debate.

For one, research has shown the dangers of labeling: the act of fitting information – sometimes even shoehorning it – into easy-to-perceive (but frequently too simplistic) categories.

We don’t yet fully understand the external dimensions through which people express their internal states. For example, the average human being expresses themselves through dozens (some say even hundreds) of micro-expressions (dilating pupils, for instance), macro-expressions (smiling or frowning), bodily gestures, and physiological signals (such as perspiration, increased heart rate, and so on).

It’s worth questioning the AI model’s accuracy when the scientific community itself hasn’t reached a definite conclusion on how to map external signals to internal states. Building houses on quicksand rarely works out.

Another noteworthy potential caveat for the AI engine is that the expression of emotions also varies between cultures. While most cultures equate smiling with an expression of internal happiness, Russian culture, for instance, reserves smiles for close friends and family – being overly smiley in the wrong context can be construed as a lack of intelligence or honesty. Extend this across the myriad of cultures, ethnicities, and individual variations, and you can imagine the implications of these personal and cultural “quirks” for the AI model’s accuracy.

According to Nese Alyuz Civitci, a machine-learning researcher at Intel, the company’s model was built with the insight and expertise of a team of psychologists, who analyzed ground-truth data captured in real-life classes using laptops with 3D cameras. The team of psychologists then examined the videos, labeling the emotions they detected throughout the feeds. For a label to be considered valid and integrated into the model, at least two of the three psychologists had to agree on it.
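Intel hasn’t published its validation pipeline, but the two-of-three agreement rule described above amounts to a simple majority vote over annotator labels. A minimal sketch of that filtering step (the function name and emotion labels here are hypothetical, not Intel’s):

```python
from collections import Counter

def consensus_label(labels, min_agreement=2):
    """Return the majority label if at least `min_agreement` annotators
    agree on it; otherwise return None so the sample is discarded."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= min_agreement else None

# Hypothetical annotations from three annotators for one video segment:
print(consensus_label(["bored", "bored", "confused"]))    # "bored" - kept
print(consensus_label(["bored", "engaged", "confused"]))  # None - discarded
```

Samples where no two annotators agree are simply dropped, which keeps the training data consistent at the cost of discarding exactly the ambiguous expressions the article argues are hardest to judge.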

Intel’s Civitci found it exceedingly hard to identify the subtle physical differences between possible labels. Interestingly, Aslan says Intel’s emotion-analysis AI wasn’t assessed on whether it accurately reflected students’ actual emotions, but rather on whether teachers found its results useful and trustworthy.

There are endless questions to pose about AI systems, their training data (which has severe consequences, for instance, for facial recognition tech used by law enforcement), and whether their results can be trusted. Systems such as these could prove beneficial, leading teachers to ask the right question, at the right time, of a currently troubled student. But they could also be detrimental to students’ performance, well-being, and even academic success, depending on their accuracy and how teachers use them to inform their opinions of students.

Questions surrounding long-term analysis of students’ emotional states also arise – could a report from systems such as these be used by a company hiring students straight out of university, with labels such as “depressed” or “attentive” being thrown around? How much of this data should the affected individuals have access to? And what about students’ emotional privacy – their capacity to keep their emotional states internalized? Are we comfortable with our emotions being labeled and accessible to anyone – especially if there’s someone in a position of power on the other side of the AI?

The line between surveillance and AI-driven, assistive technologies seems to be thinning, and the classroom is but one of the environments at stake. That brings an entirely new interpretation for wearing our hearts on our sleeves.

What bothers me is that there are legal safeguards over what can be done with recording and using cameras on children, but no such safeguards for adults. We are getting used to every aspect of public life being under the watch of cameras and those who can tap into them. Plus many people have cameras in their homes, on their electronic devices, and in their autos that all record or report on them. Over the years since 9/11 we have given up any real idea of privacy; our lives are a fishbowl. Even our TVs report back what we watch, and when we stop or pause. Our homes have cameras that the police want access to (the Ring system) that neighbors can join to share their cameras with. Now these face things on your computer.

Ask this question: will they have to ask you to use your camera, or will the bad actors simply use them anyway? I run security programs to prevent access to my computer cameras, and on my desktops I unplug them when I am not using them. But what about new independent digital cameras and phone cameras? Will you get notified when a company accesses them? The microphones? Do the terms of service you just ignore to get the app you want give them the right to spy on you?

I wish I could say the government will protect us, but the government is one of the biggest abusers of the system. After 9/11 the Patriot Act gave away most privacy rights of US citizens in favor of the feeling of being safe. Do you feel safer now? In some areas the public has to install cameras in their homes in case the police break in, to protect the occupant who lives there. WTF has happened to the independent, freedom-loving Americans? Oh yes, they are attacking school boards over mask policies and trying to stop people from reading books that tell the true racist history or, god forbid, are about LGBTQ+.

6 thoughts on “Intel Develops Controversial AI to Detect Emotional States of Students”

  1. Strictly in regard to the data created by classroom use of facial recognition tech, I trust most teachers to use those data for the greater good of each of their students. What concerns me are the records created from those data; regular “Permanent Records!” kept by schools can be sobering enough! But those records are created after work is evaluated, and are not based upon a whim, a draft, mom burning the mac&cheese, or any number of reasons a student might make a facial expression. I don’t feel that this tech has use in real life beyond maybe gaming and such; certainly not for use in actual evaluation of humans.


    1. Hello Ali. Can you imagine the database that will be built using the facial expressions of kids as different subjects are discussed? Another reason why cameras do not belong in classrooms, nor should they record kids’ expressions in remote teaching. The AI tech is not good enough, and facial recognition is sloppy at best, being horrible at distinguishing minorities. The mistakes the AI is going to make are bad enough, but to make sure it is right, all the kids’ pictures will have to be looked over by a human. How good are people at understanding the emotional state of a bored child by looks? Is the need to go to the bathroom in a young child going to be mistaken for something else? The entire range of abuse of this is too deep, in my opinion.


  2. Scottie, while I agree with you in essence about this issue, have you ever stopped to think how much information about yourself you reveal right here on your blog? You take all these precautions with your computer but allow “the world” to know who you are as a person.

    Yes, I realize that essentially, there IS NO PRIVACY in the world today. Those who are so inclined have all sorts of ways to poke around into a person’s life. But why make it any easier for them?


    1. Hello Nan. The reason I take such security measures on my computers is to protect the equipment. Back in 2017 or 2018 I had a computer that got hit with ransomware. To fix it I had to disable BIOS Secure Boot and the Trusted Platform Module. I should have found another way, but at the time I was not as up on malware as I am now. So I use that security not to hide who I am but to stop any hardware damage to the computer.

      Now, as for why I disable the cameras on my computers: it is because of the ways that information can be used without permission by both governments and now by companies.
      While I do not mind people knowing all about me, I do hate the idea of companies building large databases about me that know more about me than I do myself. That Microsoft can know every keystroke and everything I do on my computer, every picture I upload or look at, is not something I agree with. I don’t want that. Right now China has a scoring system that rates whether civilians are being good members of society or not. Everything a person does is watched and noted. Whether you can go on vacation depends on your social score. What luxury you can buy or what time off you can have depends on that score. They watch to see if you litter, if you clean up after yourself or your dog, whether you are courteous to others. All watched and scored. That is what I worry about with cameras.

      At first Ron was worried about stolen IDs and all that could happen with all the data about us on my blog. But the data shows most of that scamming is done off Facebook posts and targets people on Facebook, Instagram, and big social media like that. My blog is too small to be much of a threat, and even if someone wanted to pretend to be me, what would be the gain? To what end? To write comments on other blogs as if they were me? To use my picture for something – who wants that grizzly old mug? I cannot see how the things I post online about myself could be dangerous to me.


      1. I cannot disagree with you that there are untold avenues a person can follow to get information on others. I just prefer not to make it any easier for them, so I tend to stay as much under the radar as possible.

        As for your last sentence … the fact that “right-wingers” have been known to attack those who disagree with them – does this not make you at all concerned that something you might say on one of the blogs could cause one or more “unbalanced” ones to seek you out?


        1. Hello Nan. Thank you. If you don’t mind, I love that you care about my safety. Do not worry. I live in a safe area. I have cameras on every aspect of the outside of my home, I have ways of defending myself and Ron, and James, our 30-year-old son, is a trained armed person employed by the county. We are safer than most. I am crippled but trained, Ron is able-bodied but showing his age and ailments, and James is young, strong, not a person to mess with, and he has friends. We are safe.

          But the real question, Nan, is what about people who are not like us – do they dare speak out? How safe do people feel in the US voicing their opinion? That was the point of the MAGA-driven assaults on school board meetings over COVID: to make it seem that the board members were not safe and that the people and police agreed with the MAGA crowd. Notice the places they chose to attack, and the fact that often the attackers were bussed in. It was about causing fear – you could be next if you speak up for masking or your rights.

          Nan, this is a coordinated campaign to make people feel unsafe and to drive public opinion in one direction. The right has learned that being thugs gets them what they want. They have learned that making up the most terrifying, harmful things gets them what they want. So they keep doing it.

          If I may, let me go back in history to the Tea Party movement. The media portrayed it as grassroots-driven anger at Obama’s tax policy. Yet in truth it was entirely paid for by the Koch brothers’ political network, which recruited people and bussed them to events. In fact, one of the slogans was “Keep the government’s hands off my Social Security payments.” The people at these events did not see the irony, because they were paid to be there or were lied to and had ginned-up outrage at something else, so they did not care.

          In 2000 the election between George W. Bush and Gore was very close. It came down to Florida. Gore won Florida – that is now known to be a fact – and had the election stolen from him. But the right organized a mob; they bussed people in button-down shirts to the counting places to pound on the windows and cause such problems that it looked like the people were rising up against the election.

          They called it the “Brooks Brothers riot.” Totally ginned up by the right to get their way. But the SCOTUS used it to throw the election to Bush and the Republicans.

          That is the world we live in now, a world where the right has learned to be thugs to get what they want, and the MAGA crowd is worse.

          The point is, these people win if we let them. How do we stand up to them? We just do. I have posted about how hard it is in other countries – in war zones, in countries where you are thrown in prison for speaking out. You just do it.

          Shouldn’t we do the same in our own relative safety? I am not important enough for the truly violent MAGAs to attack; they want the big fish. Oh, I wish I had that following and reach. The ones I fear are the ones sitting in their homes with a good knowledge of computers who would cripple my systems (remember, I have a modem currently hacked by Russians that I cannot use, which cost over $400 some 10 years ago), and I have a computer without BIOS Secure Boot because I did not know then how to clear a ransomware attack. My equipment can be reached from anywhere in the world; to get to me personally, they would have to get through a housing development of people who make watching their neighbors a sport.

          I hope that puts your mind at ease. Hugs

