
Boston Dynamics

Started by
53 comments, last by Calin 2 years, 5 months ago

Calin said:
The types of interactions between objects in the real world are numerically limited; current physics engines can simulate all of them (as showcased in the Newton Dynamics sandbox).

Not sure where you are heading, but Newton can simulate classical mechanics with rigid bodies (soft bodies, cloth and fluids are work in progress). And yes, that's what we need to simulate robots. In my experience it is the best physics engine for this, because it's very flexible for implementing your own kinds of joints and motors, and has high accuracy and robustness at good performance. Because the accuracy is so good, the joint motors can drive a biped model precisely, and there is no need to write your own constraint solver to get motor torques. So i can really recommend it if you want to work on such simulations, but i haven't compared it to other engines in 15 years. They have all added robotics features since then, and there are some new ones too.
Physics engines also provide tools to help with motion planning: trace rays to check for potential collisions, or cast a shape through the scene to see if obstacles cross the path.

What they traditionally don't have is tools for control problems. To push a button, you may first calculate a pose which does so, but then you need to drive joint motors to reach this pose. That's not easy. The naive solution would be to rotate joints at some constant angular velocity to get from pose to pose. But this looks like a bad movie robot from the fifties, and worse, we cannot solve higher-level problems like efficient balancing this way.
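A minimal sketch of the better alternative, with invented gains and a unit-inertia joint (not from any engine's actual API): a PD controller that drives a joint toward a target angle, settling smoothly instead of moving at constant velocity.

```python
# Hypothetical sketch: drive one joint toward a target angle with a PD
# controller instead of a constant angular velocity. Gains are invented.
def pd_torque(theta, omega, theta_target, kp=40.0, kd=8.0):
    """Torque proportional to angle error, damped by angular velocity."""
    return kp * (theta_target - theta) - kd * omega

def simulate(theta_target, steps=2000, dt=0.002):
    """Tiny forward simulation of a 1-DOF joint (unit inertia, no gravity)."""
    theta, omega = 0.0, 0.0
    for _ in range(steps):
        omega += pd_torque(theta, omega, theta_target) * dt  # integrate acceleration
        theta += omega * dt                                  # integrate velocity
    return theta

final_angle = simulate(1.0)  # settles near the 1.0 rad target
```

The damping term is what makes the motion ease in and out rather than snapping between poses at fixed speed.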

Now i'm not up to date with this either, and idk what control-problem tools physics engines have today. AFAICT, Newton's approach seems to be effector driven. ‘Effector’ means the end of a chain of bones in a skeleton, so the hands and feet, or the head. The user gives targets for them, and the physics engine takes care of the rest, including maintaining balance and locomotion. If so, the user would not even need to work on IK problems, so that's really easy to use. But idk how to gain more precise control in cases where we need it - likely by providing poses / animation somehow. We'll see how it evolves, and i should try it and give feedback.

I'm curious what other engines do, if somebody can tell…


JoeJ said:
Not sure where you are heading

If you can get your simulated biped to do basic interaction with the environment (push and pull, grab and move objects), you could give your robot emergent behavior. If all interactions with the environment can be classified, and your robot can recognize which scenario applies when, then it can work its way through any challenge course all by itself. I have no intention of getting into physics engines; I find them boring and difficult to understand, and there is little reward for me in learning to work with one. My area of interest is building AI that simulates human thinking in RTS games like StarCraft.

My project's facebook page is “DreamLand Page”

The fact that you don't think this is a typical Machine Learning or Deep Learning example, indicates that you don't know a lot about it. I suggest checking out https://pybullet.org/wordpress/

Machine learning and neural networks are universal problem-solving machines. Your brain is an example.

@h8CplusplusGuru I think you vastly oversell the current state of the art.

Machine learning models are predictors. You train one by feeding it various kinds of input, generally with some kind of expected output, and then you use some flavor of gradient back propagation to iteratively “tune” the model to give the “right” output for the given input. The output is a linear combination of the input, where the trick is that the “linear combination” allows non-linear functions. (So “linear” here refers to how it's put together, not to what the functions are.)
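To make that training loop concrete, here is a deliberately tiny sketch under the simplest possible assumptions (one weight, one bias, no hidden layers, invented data): the same predict / measure error / step-against-the-gradient cycle described above.

```python
# Toy illustration of gradient-descent training (all values invented):
# a two-parameter model tuned to predict y from x, using the same loop
# that trains a neural network, just without hidden layers.
def train(data, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y     # gradient of 0.5*err^2 w.r.t. the prediction
            w -= lr * err * x         # step each parameter against its gradient
            b -= lr * err
    return w, b

data = [(x, 2.0 * x + 1.0) for x in range(-3, 4)]
w, b = train(data)                    # converges near w=2, b=1
```

A real network inserts nonlinear layers between input and output, but the tuning loop is structurally the same.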

The “recurrent” models are not really any less linear than plain NNs and convolutional NNs – they simply “unroll” parts of their architecture into a kind of feedback loop. For an N-degree recurrent network, you can get the same network fully linear by unrolling it N times. Although there are some topologies that can't quite be unrolled that way – for those, you instead need reinforcement algorithms to tune them.
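The unrolling idea can be illustrated with a toy recurrent cell (weights invented, nonlinearity omitted for clarity): applying the step in a feedback loop gives exactly the same result as chaining explicit copies of it feedforward.

```python
# One recurrent cell with made-up weights and no nonlinearity.
def step(h, x, w_h=0.5, w_x=1.0):
    return w_h * h + w_x * x

def run_recurrent(xs):
    h = 0.0
    for x in xs:                      # feedback-loop form
        h = step(h, x)
    return h

def run_unrolled_3(xs):
    # the same computation as three explicit chained copies:
    # step(step(step(0, x0), x1), x2)
    h = step(0.0, xs[0])
    h = step(h, xs[1])
    h = step(h, xs[2])
    return h

xs = [1.0, 2.0, 3.0]
assert run_recurrent(xs) == run_unrolled_3(xs)   # both give 4.25
```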

Anyway – given enough input, it can be trained to predict output, assuming you know what “good” is. If you think that's a “universal problem solving machine,” then you probably haven't run into any really interesting problems yet :-)

h8CplusplusGuru said:
The fact that you don't think this is a typical Machine Learning

Trying to build a device by first recognizing the existing objects near the problem area as pieces of an incomplete device, and then comparing the incomplete device against a list of complete devices in a database to establish the class to which it belongs, is as much machine learning as running a fingerprint comparison in a police database. I'm not a machine learning expert, but not every algorithm in this world is ML, and the case I'm talking about isn't ML either. ‘Learning’ means there is repeated experimentation to learn a lesson. The situation I'm talking about is one where you've learned all the lessons already, and it's time to figure out which lesson applies when you're faced with a problem.
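As an illustration of that distinction (all device names invented): classifying an incomplete part set by scoring it against a database is plain search, with no training loop anywhere. A Jaccard-overlap score is one simple choice.

```python
# Hypothetical device database: each entry maps a name to its part set.
DEVICE_DB = {
    "lever": {"plank", "pivot"},
    "cart": {"plank", "wheel", "axle"},
    "catapult": {"plank", "pivot", "counterweight", "basket"},
}

def classify(parts_found):
    """Pick the device whose part set has the highest Jaccard overlap
    with the parts observed near the problem area. Pure lookup, no ML."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(DEVICE_DB, key=lambda name: jaccard(parts_found, DEVICE_DB[name]))

best = classify({"plank", "pivot", "counterweight"})  # → "catapult"
```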


JoeJ said:
So i'll try to subdivide big problems into smaller problems, solving them with simple and predictable controllers as far as possible.

That's reasonable.

Stable walking on the flat is reasonably easy. For really slow walking, you just keep the CG over the base of support.

As walking speeds up, there's something called the “zero moment point” (ZMP). That's a generalization of the CG projection that accounts for the dynamics (velocity and acceleration). That's what Asimo used.
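Under the common cart-table / linear inverted pendulum simplification, the ZMP is the CoM ground projection shifted by an acceleration term. A one-dimensional sketch (illustrative only, not Asimo's actual controller):

```python
G = 9.81  # gravity, m/s^2

def zmp_x(x_com, x_com_accel, z_com):
    """1-D zero moment point under the linear inverted pendulum model:
    the CoM ground projection, shifted by an acceleration-dependent term."""
    return x_com - (z_com / G) * x_com_accel

# Standing still: the ZMP coincides with the CoM ground projection.
at_rest = zmp_x(0.1, 0.0, 0.9)       # 0.1

# Accelerating forward shifts the ZMP behind the CoM projection,
# which is why it, not the CG, must stay over the base of support.
moving = zmp_x(0.1, 2.0, 0.9)        # < 0.1
```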

If you get into running (any gait with a no-ground-contact flight phase) a key point is to figure out the acceptable landing conditions at foot touchdown. Then you need a planner which can get you the takeoff that will hit that target. Target, in this sense, is not just a position. It has velocity, rotation, rotational velocity, and limb position.

A general idea is that when you have ground contact, you're stabilizing the system and setting up for the next takeoff. While in flight, nothing you do can change the trajectory of the center of gravity, but you can cause rotation and position the legs for landing.
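That flight-phase constraint can be made concrete with plain ballistics: once airborne, the landing position and vertical speed of the CoM are already fixed by the takeoff state. A sketch with invented parameter names:

```python
import math

G = 9.81  # gravity, m/s^2

def landing_state(x0, z0, vx, vz, z_ground=0.0):
    """Ballistic CoM flight: where and how fast we touch down. Nothing
    done in the air changes this trajectory; only body rotation and leg
    placement for landing remain controllable."""
    # solve z0 + vz*t - 0.5*G*t^2 = z_ground for the positive root
    t = (vz + math.sqrt(vz * vz + 2.0 * G * (z0 - z_ground))) / G
    return x0 + vx * t, vz - G * t   # landing x, vertical landing speed
```

The planner's job is therefore to pick a takeoff state whose predicted touchdown lands inside the acceptable set.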

Simple running requires only that you plan ahead to the next ground contact point. If you can plan two ground contact points ahead, you start getting the capability for broken-field running, sports, and martial arts. You start to see where “combo moves” come from. The Boston Dynamics people do some of that, but it seems to be preplanned with human assistance, not generated in real time. They seem to do something like setting up a path of stepping stones and practicing in simulation until they can hit all of them. Then they let the robot try it.

It's an interesting area, but even for games, probably not that useful. Gamers want football players that move like humans, even if some other movement pattern is optimal. Which is why everybody seems to use motion capture and morphing rather than full physical simulation.

I spent about two years on this, and three more on physics engines, back in the 1990s. I finally sold the technology off to a major company in the industry and exited the field. It's worth working on again. Back then I had 100 MIPS and it was all just too sluggish. Today, we all have 4000 MIPS per CPU and can rent more on AWS. With a bigger engine, you can do more.

Calin said:
If you can get your simulated biped to do basic interaction with the environment (push and pull, grab and move objects), you could give your robot emergent behavior.

Maybe. Or at least it could look that way, which would be good enough.

Calin said:
I have no intention of getting into physics engines, I find them boring and difficult to understand, there is little reward for me in learning to work with one. My area of interest is building AI simulating human thinking in RTS games like Starcraft.

If your interest is AI, Boston Dynamics might not be an interesting topic, as i guess there is not much AI demonstrated. Those robots move like living animals or humans, so we perceive them as living beings, or close to it. It's a kind of psychological trick which would work in games as well, but there is no intelligence at solving problems or any social behavior in those robots.

So you could just ignore physics, and instead build some game with agents and objects allowing many interactions and combinations of them. To keep it simple, the game can be 2D on a grid; movement can be restricted to one cell at a time in 4 possible directions. Objects can be pushed and pulled, stacked on top of each other, joined to create a new object, etc.
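A minimal sketch of such a grid sandbox (all names invented), with 4-direction movement and box pushing:

```python
# Toy grid sandbox: an agent that moves one cell at a time and can push
# a box ahead of it, as long as the cell beyond the box is free.
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

class GridWorld:
    def __init__(self, size, agent, boxes):
        self.size, self.agent, self.boxes = size, agent, set(boxes)

    def in_bounds(self, cell):
        return 0 <= cell[0] < self.size and 0 <= cell[1] < self.size

    def move(self, direction):
        dx, dy = MOVES[direction]
        target = (self.agent[0] + dx, self.agent[1] + dy)
        if not self.in_bounds(target):
            return
        if target in self.boxes:              # a box blocks the way: try to push it
            beyond = (target[0] + dx, target[1] + dy)
            if not self.in_bounds(beyond) or beyond in self.boxes:
                return                        # push blocked, nothing happens
            self.boxes.remove(target)
            self.boxes.add(beyond)
        self.agent = target

world = GridWorld(5, agent=(1, 1), boxes=[(2, 1)])
world.move("right")   # agent steps to (2,1), box pushed to (3,1)
```

Pulling, stacking and joining would be further actions on the same state; the point is only how little machinery such a sandbox needs.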
Would this spawn emergent behavior out of nothing, from pure magic? Likely not, otherwise we would have known for a long time already. Would this improve if we replaced our quantized simulation model with a proper simulation of real-world physics as discussed? Likely not much, i guess. Agents still need the ability to learn and memorize, and they need a motivation to do anything at all, e.g. to eat and reproduce.

Maybe you think ‘To develop AI, we need a sandbox for our agents where they can exist and interact.’ I think so too, and games are close to providing such a sandbox. Probably intelligent life in nature can only emerge in such an environment, and it makes sense to assume we should do the same when developing AI. So maybe we should focus on artificial life simulation, to get some progress on AI we would never get with restricted games such as Chess and Go.

But we don't know how advanced and realistic our sandbox has to be. Likely there have to be strict laws of nature like conservation of energy, so our agents can learn and apply simple and reasonable formulations of optimization problems.
Like in the real world, i think a fluid environment would be better suited to simulate basic artificial life. It's much easier to predict, with less complexity than modeling constant contact with a harsh boundary. Assumptions based on sensor input are easier as well.
But if i'm right with this, it would confirm actual simulation of the environment is indeed mandatory.


Nagle said:
As walking speeds up, there's something called the “zero moment point”. That's a generalization of the CG that includes velocity. That's what Asimo used.

Yeah, my balancing controller basically works by planning motion as fast as possible while guaranteeing the ZMP remains inside the support polygon. Care must be taken to start the deceleration phase at the right point in time, while keeping the COM itself inside the support polygon is not an objective at all.
Asimo clearly does this wrong (slow movement, inefficient, constant velocities), while Boston Dynamics does it correctly (fast and efficient motion, which looks more like constant acceleration).

In this sense, running is much easier than standing upright, because we make big steps and can put the foot where the ZMP is, which can also correct prediction errors of a balance controller.
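The containment test behind such a controller can be sketched as a convex point-in-polygon check (support polygon vertices assumed in counter-clockwise order; values purely illustrative):

```python
# Balance criterion sketch: is the ZMP inside the support polygon?
# For a convex polygon in CCW order, the point must lie on the left
# side of every edge.
def inside_support_polygon(point, polygon):
    px, py = point
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        # cross product < 0 means the point is to the right of edge a->b
        if (bx - ax) * (py - ay) - (by - ay) * (px - ax) < 0:
            return False
    return True

# Unit-square "foot" as the support polygon (CCW).
foot = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
inside = inside_support_polygon((0.5, 0.5), foot)   # True
outside = inside_support_polygon((1.5, 0.5), foot)  # False
```

With two feet on the ground, the support polygon is the convex hull of both contact areas, which is what makes stepping toward the ZMP so forgiving.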

Nagle said:
Gamers want football players that move like humans, even if some other movement pattern is optimal.

Disagree. The optimal movement is also the most natural one. It's the same thing, and this allows us to solve and use it, and to replace animation.
But maybe i'm too optimistic. I still need time to prove it. And sports / fighting games are hard. Good thing i like shooters more anyway : )

JoeJ said:
but there is no intelligence at solving problems

A BD robot or Asimo is an actor that can transform a 3D physics-driven environment; for example, it can build a setup for testing a physics law. A sprite actor can't transform a true 3D space where physics apply. The problem with projects like Boston Dynamics is that what they have is a blind robot. The robot doesn't have cameras to see the environment, so whatever it does, it does blindfolded. This applies to your Newton Dynamics driven robotic rig as well. Add vision, and intelligence comes into play. So though I'm not passionate about balanced walking, you've got my attention once you add awareness of the environment.


