Replies: 4 comments
-
Concretely, I don't understand this part: "inNeutralPose1 and inNeutralPose2 must match as closely as possible; preferably the positions of the mappable joints should be identical." To me, the positions in the ragdoll skeleton are the positions of the RagdollSettings::Part (RigidBodies/CapsuleShapes), as in the RagdollSettings *RagdollLoader::sCreate() function. That doesn't match my GLTF, where the joints sit at the top of each segment (upperarm in the photo, for example), not at the segment's center (between lowerarm_r and upperarm_r, like the capsule in sCreate).
-
I think the simplest way to make it work is (off the top of my head, so I may have made a mistake):
Note that variations on this are possible because transforms can be stored in several places (I've hinted at some of them).
-
Hello 😃 I implemented most of what you suggested. In particular, since you mentioned RotatedTranslatedShape, I integrated it into my setup and now use it for all bones of my ragdoll skeleton. This lets me apply world transforms directly to my rendering skeleton by retrieving them from Ragdoll::GetPose() into an Array. From there, I can complete the remaining graphical bones (fingers, for example) using animations or poses without any issue.

That said, I am confused about one point. In your previous message you suggested that I initialize and use the SkeletonMapper. However, since I am using RotatedTranslatedShape, my rigid bodies already match the position and orientation of my graphical bones. Because of that, I can call GetPose() directly (or even read world transforms from the rigid bodies themselves) and apply them to my rendering skeleton. So I'm not sure why the SkeletonMapper would still be required in this situation, and I suspect I may be misunderstanding its intended purpose.

However, I'm running into problems with bones that are not part of the ragdoll but are located between bones that are simulated by it, such as the neck and clavicles. Currently, I keep these two bones at the same local transforms they had when the ragdoll skeleton was created, and this causes visible graphical artifacts. For example, my SwingTwistConstraintSettings between spine03 and upperarm_r defines the allowed shoulder motion. Since the constraint is positioned at upperarm_r, it represents the full shoulder rotation and twist. When the arm is raised close to 90 degrees upward, the mesh deforms very badly. Normally this type of motion is shared between the clavicle and the upper arm, which produces much more natural deformation; but with a SwingTwistConstraint, the upper arm bone never changes its distance relative to the spine, which is not what would happen if the clavicle were included in the ragdoll.

Initially, I assumed that the ragdoll mapping system would automatically distribute motion across bones that don't exist in both skeletons, but it seems that this is not the case. Or maybe it does, and I simply misunderstood how it is supposed to work. I suspect that many of my issues come from not fully understanding the purpose of the SkeletonMapper and how it is expected to work with the rotation-only bones typically found in AAA character rigs. I am also hesitant to add the neck and clavicle bones directly to the ragdoll, because fully simulating these areas can produce unstable or unnatural motion. At the same time, since the recommended approach seems to involve a simplified low-bone skeleton for the ragdoll, I am unsure whether adding more bones is actually the right direction.

So at the moment, I have something that works to some extent, but it feels like I'm bypassing the intended strengths of the API and relying on workarounds instead of using the system properly. Do you have any tips or best practices explaining how Jolt is meant to be used to build a high-quality ragdoll setup, similar to what would be expected in a modern AAA production? The issue may simply be my lack of ragdoll industry experience rather than a misunderstanding of the Jolt API itself. Thanks :)
-
Yes, this could work, but it does restrict the ability to orient the constraint limits (though maybe you don't need that, because your joints are already rotated in such a way that no additional offset is needed).
No, the only thing it does is apply rotation/translation offsets between mapped joints (those that have an equivalent joint in both the high and low detail skeletons), and it preserves the local space transforms for the joints between those mapped joints. It's only 200-ish lines of code, so it's not magic.
I think we have a neck, and we might have clavicle bones (I don't remember).
We have procedural joints that are evaluated afterwards and driven by (among others) the shoulder joint to handle further deformation. This always happens (also when the ragdoll is not active); it is just the way these joints are driven. You could drive your clavicle joint procedurally after running the mapping.
-
Hello,
First of all, thank you for this amazing library.
I'm trying to make a ragdoll the right way.
I was previously using NVIDIA PhysX. My approach was very simple:
Now I'm switching to Jolt and I'm already excited about the power of Skeleton + Ragdoll + SkeletonMapper. It seems perfect for mixing animation and ragdoll, filling in bones without RigidBody, handling parent/child collisions, etc.
My goal is very simple to start: a character that plays a normal animation (graphics only) most of the time, and when hit (gameplay event), falls into a full ragdoll (start of physics simulation for this character).
From what I've understood so far, the recommended way is:
Keep my full Mixamo skeleton (high-detail, ~60 bones) for rendering and animation. (in a JPH::Skeleton)
Create a much smaller JPH::Skeleton + RagdollSettings (low-detail, ~15-20 joints: hips, spine1-2, head, upper/lower arms, upper/lower legs, etc.).
In the ragdoll, the RigidBodies + capsules are centered in the middle of the segments (not at the joints).
Create two SkeletonPose:
Initialize these two with the Sample function (this is the moment where I need to sample the default/bind poses to set up the skeletons so the mapper works afterwards. Since I don't have any .tof files, I probably need to create keyframes and push them into the AnimatedJointVector of SkeletalAnimation?)
I checked the JPH::RagdollSettings *sCreate function and it doesn't seem to initialize the skeleton pose properly. I need to figure out how to sample or initialize it without a .tof file.
Actually, I don't really understand the relationship between the ragdoll skeleton's positions and the RagdollSettings::Part.
Use SkeletonMapper + MapReverse (to drive the ragdoll to its first pose at the start of death) and Map (to get the physical pose back onto the visual skeleton).
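On the "init without a .tof file" step above, one possible approach I'm considering (a sketch against Jolt's SkeletonPose interface as I understand it; `render_skeleton`, `gltfBindTranslation` and `gltfBindRotation` are placeholders for my own data, so treat this as an assumption to verify):

```cpp
// Sketch: build a neutral pose directly from the GLTF bind pose, no .tof needed.
JPH::SkeletonPose neutral_pose;
neutral_pose.SetSkeleton(render_skeleton);                 // high-detail JPH::Skeleton
for (int i = 0; i < render_skeleton->GetJointCount(); ++i)
{
    JPH::SkeletonPose::JointState &state = neutral_pose.GetJoint(i);
    state.mTranslation = gltfBindTranslation[i];           // local-space bind translation (placeholder)
    state.mRotation = gltfBindRotation[i];                 // local-space bind rotation (placeholder)
}
neutral_pose.CalculateJointMatrices();                     // local -> model space
```

Since SkeletonMapper::Initialize takes the model-space matrices of both neutral poses, CalculateJointMatrices() would presumably need to run on both poses before initializing the mapper.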
So basically, I use MapReverse only once, at the moment of death, to set up the ragdoll pose from the current animation. Then, every frame, I just retrieve the physical pose with Map directly after PhysicsSystem::Update() and apply it to my Vulkan visual nodes, without even calling CalculateJointMatrices() again.
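My reading of that flow as code, for concreteness (a sketch only; signatures taken from Jolt's SkeletonMapper.h and Ragdoll.h as I understand them, assuming the mapper was initialized with the ragdoll skeleton as skeleton 1 and the render skeleton as skeleton 2; `mapper`, `anim_pose`, `ragdoll_pose` and `cNumRenderJoints` are my placeholders):

```cpp
// Once, at the moment of death: current animation pose -> ragdoll pose.
mapper.MapReverse(anim_pose.GetJointMatrices(), ragdoll_pose.GetJointMatrices());
ragdoll_pose.CalculateJointStates();               // model space matrices -> local joint states
ragdoll->SetPose(ragdoll_pose);
ragdoll->Activate();

// Every frame, after PhysicsSystem::Update(): ragdoll pose -> render pose.
ragdoll->GetPose(ragdoll_pose);
JPH::Mat44 local[cNumRenderJoints];                // local space of the render pose,
anim_pose.CalculateLocalSpaceJointMatrices(local); // needed for the unmapped joints
mapper.Map(ragdoll_pose.GetJointMatrices(), local, anim_pose.GetJointMatrices());
// anim_pose's joint matrices are now model space and can go to the renderer.
```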
Is this the correct/recommended approach for this use-case?
What I'm not sure about either is whether the constraint positions (mPosition1 / mPosition2) need to exactly match the GLTF node positions (e.g. top of the arm / shoulder joint), or if the SkeletonMapper with LockAllTranslations can handle a small offset here without visually breaking the character (shoulders detaching, weird stretching, etc.).
Thank you :)