(Adnkronos) - Lama Nachman, Director of the Intelligent Systems Research Lab at Intel Labs, has 26 years of experience in contextual computing, human/AI systems, multimodal interaction, sensor networks, computer architecture, embedded systems, and wireless technologies. Nachman is one of the leading figures in artificial intelligence: her research is dedicated to amplifying human potential through responsible collaboration between humans and AI, and she leads a multidisciplinary team of researchers exploring new user experiences, sensing systems, algorithms, and applications. A notable chapter of her career is her collaboration with Professor Stephen Hawking: starting in 2012, Nachman led a team of researchers in developing a new software platform and sensing system to help Hawking communicate. She later released this technology to the open-source community, enabling people with disabilities around the world to communicate with limited input and live more independently. In this interview with Adnkronos Tech&Games, Nachman discusses, among other things, her work and the importance of interdisciplinary studies in the evolution of AI.

Transcript
00:00When you actually can look at any problem or potential solution
00:06from the perspectives of social science,
00:09from the perspective of design,
00:10from the perspective of AI research and computer science,
00:14you end up with solutions that really address and
00:19mitigate a lot of the concerns yet
00:21bring out that benefit dramatically.
00:24What's usually hard about this is
00:29having a shared language and a shared understanding, right?
00:32Because sometimes it feels like
00:34when you're coming from a different discipline,
00:36that people don't even understand your language
00:38or what you're talking about.
00:40So one piece of advice that I would also give is
00:43for people to really try to increase their knowledge
00:46and understanding of social science and design.
00:49So we actually started this AI program at Intel
00:54back in 2016.
00:55And I think that notion of scaling
00:58has really been something that we've been
01:00iterating on and working on.
01:02The problem with Gen AI is that
01:04you have no idea what the downstream application is.
01:08How do you test for a bias in a Gen AI system?
01:11So technically, it's also much harder to assess risk
01:14when you separate the capability from the application.
01:18Maybe I would say there are two areas
01:19that we've been focused on to continue to improve it.
01:22One, so if you think about
01:24the assistive context-aware toolkit,
01:26what we did is we were able to extract
01:31the equivalent of a push button
01:34by looking at signals from the face, from the hand,
01:37from like just minor, minor movements.
01:40What if you can't move a single muscle
01:43if you're totally locked in?
01:44And that's where brain-computer interface needs to come in.
01:48So we've been doing a lot of work on EEG signals,
01:51taking signals from somebody wearing a cap
01:54and then translating those signals
01:56into the equivalent of that push button.
01:57So far, the AI system has come in
02:01to help things like word prediction,
02:03to finish a sentence, right?
02:05So we added sentence completion in this last release
02:08so that somebody can just start saying something
02:11and it will actually complete the whole thing,
02:13not just the next word, right?
02:15So how do we reduce the number of interactions
02:18that people have to make
02:19so that they can express their thoughts
02:20much more effectively without locking them
02:25out of what they wanna do?
02:27You know, with Professor Hawking,
02:28I mean, what was really interesting
02:30is that you start to have an appreciation
02:34not only for the problem you're trying to solve,
02:38but all the other problems around this, right?
02:40So I'll give you an example.
02:43When, for example, he sat down and ate, right?
02:47This thing is still working, right?
02:50The sensor is still working.
02:52So now there is random push buttons
02:54that are going on, right?
02:56So one of the things
02:57that we've been kind of like thinking about is like,
02:59okay, well, how does he easily then mute the system
03:01and not accidentally unmute it,
03:04but not make it so hard for him to unmute that system?
03:06So like all of these things
03:09wouldn't have been top of mind for me
03:12had I not spent a lot of time
03:14really understanding his whole life
03:16and how he goes about his day
03:18from the time he wakes up
03:20all the way to the time that he goes to bed.
03:22You know, we as people have learned
03:27how to adapt to technology, right?
03:29So we speak differently for Alexa to understand us, right?
03:34We're constantly struggling with how do we adapt
03:37so that technology can understand us.
03:39And really where I see the future headed,
03:41especially for someone whose whole experience
03:44is in context-aware computing,
03:46is really to have technology adapt
03:48and understand the specifics
03:50and personalize to the needs of individuals
03:54and their context
03:55and what is happening in the world around them.
03:58AI needs to come into that physical world
04:01and start to understand what people are doing,
04:04what they are meaning to do,
04:06what's the context in which they're doing it.
04:07And that's where I see the future of AI headed.
04:09One of the things that I really think,
04:11especially like we just talked about accessibility, right?
04:14If it wasn't for AI,
04:15it would be very hard to enable a lot of people
04:18to communicate who couldn't communicate in any other way.
04:20But also bridging gaps in understanding,
04:24bridging gaps in language, right?
04:26I mean, having access to instantaneous translation,
04:29because you can take a language model
04:31and use it for all sorts of good purposes
04:33and for very bad purposes.
04:35And in fact, even if it wasn't intentional misuse,
04:38if you use it in an application that it was not meant for,
04:42you can actually risk the safety of people who use it.
04:44Looking at how we,
04:48for example,
04:50regulate the usage of AI,
04:53and doing that work from a regulatory standpoint,
04:58is extremely important.
05:00You really need to understand
05:01how people actually go about their day-to-day,
05:04what they're doing,
05:05because you can't just go in and say,
05:06oh, I'm gonna solve this problem.
05:08Here's an AI system, use it, right?
05:10Nobody will ever adopt a system like that.
05:12But if you understand the constraints
05:15in that workflow
05:16and where the AI can actually come in
05:18to solve a real problem that they have,
05:20not the one that we think we can solve,
05:21but the one that they need solved, right?
05:23That's where the anthropology perspective comes in.
