For the sake of discussion, let’s call a system an agent if it has a mental embodiment, and a robot if it has both a mental and physical embodiment. So HAL in 2001: A Space Odyssey is an agent, while C3PO is a robot.
These definitions mean that robots have more issues to consider than agents: they have to move atoms in the real world (in such a way that bad things don’t happen, and good things do), and they have to manage their energy, since they (mostly) don’t remain connected to external power at all times.
But otherwise there’s a huge amount of overlap in how they’re designed and how they operate. Both have to solve problems of how to understand the world they find themselves in, how to choose what to do next in a smoothly varying and responsive way, how to interact with the environment and humans around them to move towards desirable states, and how to learn and so refine all of these steps.
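That shared structure can be sketched in code. The following is an illustrative sketch, not anything from a real system: all class names and the toy policy are hypothetical. The point is that the perceive–decide–act–learn loop is identical for both, and the robot differs only by adding physical concerns like energy management on top.

```python
class Agent:
    """A system with a mental embodiment: it perceives, decides, acts, learns."""

    def __init__(self):
        self.knowledge = {}

    def perceive(self, world):
        # Build an internal model of the current state of the world.
        return dict(world)

    def decide(self, state):
        # Choose the next action; a trivial placeholder policy.
        return "noop" if state.get("goal_reached") else "work"

    def act(self, action, world):
        # An agent acts purely in software.
        if action == "work":
            world["progress"] = world.get("progress", 0) + 1
        if world.get("progress", 0) >= 3:
            world["goal_reached"] = True
        return world

    def learn(self, state, action):
        # Refine future behaviour from experience (here, just counting).
        self.knowledge[action] = self.knowledge.get(action, 0) + 1

    def step(self, world):
        state = self.perceive(world)
        action = self.decide(state)
        world = self.act(action, world)
        self.learn(state, action)
        return world


class Robot(Agent):
    """An agent with physical embodiment: same loop, plus energy management."""

    def __init__(self, battery=5):
        super().__init__()
        self.battery = battery

    def act(self, action, world):
        # Moving atoms costs energy; a robot must budget for it.
        if self.battery <= 0:
            return world  # out of power: no physical action possible
        self.battery -= 1
        return super().act(action, world)


world = {"progress": 0}
robot = Robot(battery=10)
for _ in range(5):
    world = robot.step(world)
print(world["goal_reached"])  # prints True: the shared loop reaches the goal
```

Everything above the `Robot` subclass applies equally to HAL and to C3PO; only the last dozen lines are robot-specific.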
From the research literature, you would never know that successful agents and robots require solutions to such similar problems. Agent researchers think that robots are “just” agents with physical embodiment, while robotics researchers think that robots without physical embodiment are “just” simple bits of software. As a result, neither side learns from the gains of the other.