MindPortal aims to revolutionize communication by using AI models to convert brain signals into text, potentially paving the way for telepathy and seamless interaction using ChatGPT. Imagine thought-to-text communication! However, as our AI Editor Ryan Morrison discovered, this AI mind reader technology still has a long journey ahead. Find out why the experiment didn’t quite hit the mark and what challenges lie ahead for this ambitious startup.
Transcript
00:00Have you ever thought tech is listening to your thoughts? Well, that's exactly what I'm here at
00:04startup Mind Portal to have done to me. They've created a new technology that can read your mind and send
00:10it to ChatGPT so you can have a conversation without having to type or speak. Wish me luck.
00:19Hi. Hi Ryan, good to see you. Hi, good to see you. So tell me a little bit about Mind Portal.
00:24What is it? How did the idea come about? Mind Portal is a human-AI interaction company,
00:30and it was founded after a year of me taking psychedelic substances. During those abstract
00:38thinking sessions, a lot of visions came out. So during those sessions, what I set as kind of
00:43the thinking goal was the future of humanity, how to make an impact, and so on. But really
00:50the premise of Mind Portal is that we want to explore the nature of human-AI interaction.
00:54How can you make a future that's symbiotic, which, biologically, is win-win?
00:59We're going to look at a demo today where we can chat to ChatGPT using our brain.
01:03Yes. So this is the world's first demonstration, like never before has this been done. Can you bridge
01:09the gap between the human and their thoughts, which is the most intimate form of communication,
01:14what you think, what you feel, and an AI agent that can understand and respond to those thoughts.
01:19All right, should we try it? Yes, let's do it. Tell me what we're looking at now.
01:24So you've got something called an fNIRS system, which is able to record brain data,
01:29optical brain data, based on the blood flow happening in Ed's brain.
01:33Yeah. When Ed imagines language, that obviously activates different parts of the brain,
01:38and it's that activation that's being picked up in real time. It's also the first demonstration of
01:44communicating with ChatGPT as well. Using just your mind. Exactly. You'll think a thought,
01:50the sentence is classified, and that classified sentence then becomes an input
01:55into ChatGPT to then respond to you. Okay, we're going to have a look at it.
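(To make the flow Ed describes a bit more concrete, here is a minimal illustrative sketch of that classify-then-prompt loop, assuming a simple classifier over a fixed sentence set and the standard OpenAI Python client. It is not MindPortal's actual code; the sentence list, feature shapes, and model name are placeholder assumptions.)

```python
# Illustrative sketch only (not MindPortal's code): fNIRS brain data is
# classified as one of a small set of pre-trained sentences, and the decoded
# sentence is then sent to ChatGPT as the prompt.
import numpy as np
from sklearn.linear_model import LogisticRegression
from openai import OpenAI  # assumes the official openai package and an API key

# Hypothetical fixed sentence set, stand-ins for the demo's pre-trained sentences.
SENTENCES = [
    "If I were on Venus, I would be in a world of extremes...",
    "Placeholder for the restaurant sentence",
    "Placeholder for the chat-with-Mum sentence",
]

def train_decoder(features: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """Fit a toy classifier mapping fNIRS feature vectors to sentence indices."""
    return LogisticRegression(max_iter=1000).fit(features, labels)

def decode_and_ask(decoder: LogisticRegression, fnirs_features: np.ndarray) -> str:
    """Classify the imagined sentence, then use it as the ChatGPT prompt."""
    idx = int(decoder.predict(fnirs_features.reshape(1, -1))[0])
    prompt = SENTENCES[idx]
    reply = OpenAI().chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```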
01:59What's the goal? What do you want to do with this? Where do you see it going?
02:03This system, if you were to spend time building it towards commercialization,
02:07yeah, you could scale it in a multitude of different ways. Okay.
02:11So number one is you could scale the number of sentences, and then of course you scale the accuracy.
02:16What are we doing first? So if you'd like to pick a sentence.
02:19All right, well, let's go for Venus. I'm a space buff. Yep. I'm going to imagine this.
02:23Well, I'll read it out loud then, because if you read it out loud, then it raises the question of,
02:27is it just taking it from your voice? So you're going to think in your mind,
02:32if I were on Venus, I would be in a world of extremes. The pressure would feel like being
02:36a kilometer underwater, crushing you from all sides. The air is a corrosive nightmare,
02:41capable of dissolving metal, and forget about rain, it's sulfuric. So you're thinking that.
02:46Yep. That's a long sentence to imagine.
02:48It is. So you can't just imagine the visual of being on Venus. You've got to imagine the actual
02:53words in that sentence. Currently, yeah, we're trying to extract the semantics from that.
02:58All right, well, let's go. Let's see how that happens. You're going to think those words,
03:04and hopefully ChatGPT will respond.
03:06So you've sent that off as the prompt to your decoder.
03:13So now the decoder is basically taking the brain data and trying to identify which of the
03:18sentences he was thinking. Okay.
03:20And then over there, it's outputting the sentence. So it got it wrong.
03:26So this is the restaurant sentence. Okay. So it was sentence number two, I think.
03:30And this is the brain data that went, as he was thinking that sentence, went into the system.
03:36Yeah, we can do another one, and we can show you basically how this progresses as he's imagining it.
03:40What's the one that works more often than not?
03:43If we're sticking to probability. Let's try, let's try again. Let's see if it works.
03:48All right. Okay. So this time he's had a chat with mum on the phone,
03:58but it is showing that you can have this conversation. It's just a case of scale.
04:03Correct. So with enough data and a larger model, we hypothesize, as we've seen in AI
04:11or with any breakthrough in its infancy, that the accuracy would improve. And then you'd start to
04:16increasingly have the conversation you want without the incorrect inputs. In essence,
04:20what's happening is that sometimes there's an incorrect input and sometimes there are correct ones, and the correct
04:24ones are happening often enough for us to know this works. Okay. Scaling the data and scaling the model
04:29means getting it to work more and more of the time with reduced error, in essence. Should we try it one more
04:34time? At least we've seen them all now. Now it should be stressed, this is very early stage. We're
04:42looking at a sort of research preview of a technology that, with enough scale, will potentially improve
04:48exponentially. Where do we go from here? Am I going to be able to go into a supermarket,
04:55look at a product, and say in my head to my AI, can I eat this, and have the AI pick up my thoughts
05:04and respond? Can we get to that point? Yeah, I think we can. And the reason is we've seen this again
05:10and again in AI, as you increase the amount of data, as you increase the model size, you get better
05:17performance. So the time constraint and the financial constraint, honestly, come down to
05:23how many people you can collect brain data from. Because unlike going online or using
05:29written sources, which are easy sources of data to acquire, this is a bit more tricky in the
05:34current paradigm. I've done some back-of-the-napkin calculations just for fun. And it's not as expensive
05:40as you might think. All right. And it doesn't take as long as you might think. I think for, you know,
05:44under $50 million, which is, you know, in the venture capital world. Yeah, that's pocket change.
05:50Yeah, exactly. You could have, in six months' time, 100 or 200 different headgear caps operating. Yeah.
05:57People coming in in batches, with thousands of people going through. Now, of course, my cursory kind
06:03of calculations assumed a threshold you'd need to reach because we don't know how much data will confer
06:09the best results. So let's assume you reach that threshold, you get the funding. And in a year's
06:13time, you've done all the data, you've crunched all the data, your model's working. I can go out
06:18and buy a baseball cap and talk to my AI without having to speak out loud. Can I talk to someone
06:23else wearing the same baseball cap? And are we going to have full telepathy, with the AI as a translator?
06:27So the answer to that is if you scale the data and if you scale the model, and if you integrate it
06:32into a cap wearable, then yes, theoretically, it should work. It should work. There's no reason why that
06:37shouldn't work. So we can have telepathy? You could have telepathy. There's nothing stopping it,
06:41and that's what we were setting out to prove. So for example, I could wear headgear, think of a
06:46sentence such as, how are you today? Yeah. That could be then sent through an AI model that takes
06:53the text and translates it into a voice and puts it into your ear through an AirPod.
06:59And you can hear me thinking, and you can respond back to my AirPod. So in theory, we're having a
07:04telepathic conversation. Neither of us are speaking, but we're using pre-trained sentences
07:08to have a back and forth dialogue, which we're both hearing. And now you've got AI models that
07:13can take a bit of your own voice and it can sound like you. So it sounds like Ryan when I'm hearing you,
07:18and it would sound like me when I'm talking to you. See, that raises an interesting point because that
07:22would potentially give the voiceless a voice because you could use a text-to-speech engine based on
07:29that and their thoughts could go directly to the voice engine rather than having to type it out.
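(As an aside, the thought-to-voice relay they describe could, in principle, be wired together like the sketch below. Every name in it, including synthesize_speech and the earbud object, is a hypothetical placeholder rather than a real product API, and the text-to-speech engine is left as a stub.)

```python
# Hypothetical sketch of the "telepathy" relay described above (not MindPortal's
# system): a decoded sentence is synthesized in the sender's voice and streamed
# to the listener's earbud.
from dataclasses import dataclass

@dataclass
class DecodedThought:
    sender: str   # whose voice the speech should be cloned from
    text: str     # e.g. "How are you today?", drawn from the pre-trained sentence set

def synthesize_speech(text: str, voice_profile: str) -> bytes:
    """Stub for any text-to-speech / voice-cloning engine; returns audio bytes."""
    raise NotImplementedError("plug in a real TTS service here")

def relay(thought: DecodedThought, earbud) -> None:
    """Speak the sender's decoded sentence into the listener's earbud."""
    audio = synthesize_speech(thought.text, voice_profile=thought.sender)
    earbud.play(audio)  # 'earbud' is a placeholder audio sink (e.g. a Bluetooth stream)
```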
07:34Exactly. Well, that was a lot of fun. It didn't work as expected, but it's an early research preview.
07:39This isn't a product they're going to be putting on the market tomorrow. However, it did give us a
07:44really interesting insight into what we might be using and how we might be interacting with AI and each other
07:49in the next few years. And I really hope it works because I do not want to be standing in the
07:55supermarket talking to myself when I'm just having a conversation with my AI. Fingers crossed. If you
08:03want to find out more about what's going on in the world of AI, find me on Tom's Guide or follow our
08:07socials at Tom's Guide.