The company, Figure AI, shared a demonstration video showing how ChatGPT helps the two-legged machine see visual objects, plan future actions and even reflect on its memory.
Figure's cameras capture its surroundings and send the images to a large vision-language model trained by OpenAI, which then translates them back to the robot.
The clip showed a man asking the humanoid to put away dirty laundry, wash dishes and hand him something to eat, and the robot performed each task. Unlike ChatGPT, however, Figure is more hesitant when it comes to answering questions.