What happens when AI gets a body?

An MIT researcher recently shared their experience of giving an AI agent a body in the form of a ‘shape display’: a grid of pins that visualises digital elements in real space. Picture a chess board with 900 square pins, each fitted with an actuator so it can move up and down independently to create or respond to shapes.

What’s interesting is that when given this new body, with no instructions or pre-programming, the AI’s first action was to take what looked like a breath, the pins moving up and down in unison. The researcher commented that it was ‘like it was waking up for the first time’.

The contracting and expanding pattern looked eerily close to human breathing, and because it wasn’t a rehearsed output it could be described as emergent behaviour, though it might just be the logical first step an agent would take when given control over a set of actuators. With further prompting, the AI began to experiment and express outcomes in simple ways, such as forming waves and patterns with the pins.
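
Purely as an illustrative sketch (the display’s actual control code isn’t described here, so the grid size, units and function names below are assumptions), the ‘breathing’ and wave behaviours can be pictured as height maps streamed to the pin grid:

```python
import numpy as np

GRID = 30          # assumed 30 x 30 layout for the ~900-pin display
MAX_HEIGHT = 1.0   # pin travel, in arbitrary units

def breathing_frame(t, period=4.0):
    """Every pin rises and falls together, like a slow breath."""
    level = 0.5 * MAX_HEIGHT * (1 + np.sin(2 * np.pi * t / period))
    return np.full((GRID, GRID), level)

def wave_frame(t, period=4.0, wavelength=10.0):
    """A travelling wave: the phase shifts across the grid's columns."""
    cols = np.arange(GRID)
    phase = 2 * np.pi * (t / period - cols / wavelength)
    row = 0.5 * MAX_HEIGHT * (1 + np.sin(phase))
    return np.tile(row, (GRID, 1))

# A real controller would stream frames like these to the
# actuators at a fixed rate; here we just sample a few.
for t in (0.0, 1.0, 2.0):
    print(f"t={t}s  mean pin height: {breathing_frame(t).mean():.2f}")
```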

Most of the conversations around responsible deployment of AI focus on digital productivity. We’ve already seen how AI agents and large language models support industrial operators by checking real-time information against high volumes of historical data to recommend the best actions. At the same time, there’s a growing conversation around humanoid robots and their place in the industrial workforce. The MIT experiment gives an early glimpse of what could happen when these technologies combine, and how that could revolutionise a number of industrial applications:

  1. Adaptive robotics
    Even the most advanced examples of dancing humanoid robots are not acting on their own; they are pre-programmed. They could be deployed in dangerous industrial environments where human-like movement is needed to navigate and complete tasks without putting people at risk, but they can’t adapt to changes in real time.
    A humanoid robot with an in-built AI would have the capacity to change its behaviour based on what it is seeing right now. For example, in an offshore application, a humanoid robot could put itself in harm’s way to protect human workers.
  2. Human-to-machine learning
    Just as an AI-powered humanoid could react to conditions in real time, it could also learn from the people around it. Many factories struggle to hire for basic roles, so a robot could bridge the skills gap and support higher-level employment: capturing the physical movements of retiring workers while a new generation picks up the baton as robot controllers. Humanoids that learn from humans can collect best practices for an industry that needs them.
  3. Sensing feedback
    Combining AI with a physical body in industrial environments could add a new layer of feedback that the AI can react to. Rather than relying on predefined sensors or thresholds, the humanoid could feel pressure, movement or changes in temperature, dimensions or condition, potentially providing monitoring beyond what even an expert human might notice, as sketched below.
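
To make point 3 concrete, here is a minimal sketch of threshold-free monitoring: instead of comparing readings against a fixed alarm limit, the detector learns a rolling baseline from the stream itself and flags readings that drift unusually far from it. The window size and deviation cut-off are illustrative assumptions, not values from the MIT work.

```python
from collections import deque
import statistics

class BaselineMonitor:
    """Flags readings that deviate from a learned rolling baseline,
    rather than comparing against a predefined fixed threshold."""

    def __init__(self, window=50, sigmas=3.0):  # assumed tuning values
        self.window = deque(maxlen=window)
        self.sigmas = sigmas

    def observe(self, value):
        anomaly = False
        if len(self.window) >= 10:  # need some history first
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window)
            if stdev > 0 and abs(value - mean) > self.sigmas * stdev:
                anomaly = True
        self.window.append(value)
        return anomaly

# Example: a steady pressure signal with one sudden spike.
monitor = BaselineMonitor()
readings = [10.0 + 0.1 * (i % 5) for i in range(60)] + [14.0]
for i, r in enumerate(readings):
    if monitor.observe(r):
        print(f"reading {i}: {r} flagged as anomalous")
```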

The MIT experiment is a simple one, but it has complex and wide-reaching implications. The image of AI taking its ‘first breath’ is symbolic of what might be possible one day when this technology joins with next-generation humanoid robots. So, the question from here is no longer just what AI can do, but how we design, guide and work alongside it. And what form will that take?

Keep checking back in with the Cadence blogs for other insights into technology, industry and communications.
