Custom BC, fixed normal displacement

  • ateshian
    Developer
    • Dec 2007
    • 1839

    #16
    Hi Fluvio,

    Your code for the residual seems correct. You can delete the line that includes double p = m_s.m_p; since it is not needed here.

    Also (and this was my mistake when I said the rest of the code should remain the same), according to eq.(7) in the notes you need to subtract this element force vector from the global residual, so
    Code:
    				fe[3 * j  ] -= H[j] * v.x;
    				fe[3 * j+1] -= H[j] * v.y;
    				fe[3 * j+2] -= H[j] * v.z;
    Regarding the stiffness matrix, your code can be simpler than the one in FEVolumeConstraint. Whereas FEVolumeConstraint::StiffnessMatrix declares kab as a vec3d object, you need to evaluate a general 2nd-order tensor Kab (so declare it as mat3d Kab) inside the double loop.

    To evaluate Kab inside the double loop, you first need to evaluate a few things outside that loop (i.e., all the quantities in equation (15) which don't have a subscript a or b). For example, the dyadic product of the unit vector n with itself can be evaluated as N, using
    Code:
                vec3d nu = dxr ^ dxs;
                double da = nu.unit();
                mat3ds N = dyad(nu);
    Note that da is the magnitude of the vector cross-product of dxr and dxs, which you will need when you evaluate Kab in the loop.
    You can also evaluate the dyadic product of n and u (which produces a non-symmetric tensor) along with the remaining terms containing that expression in equation (15) as
    Code:
                mat3ds ImN = mat3dd(1) - N;
                mat3d U = (mat3dd(1)*(u*nu) + (nu & u))*ImN;
    Note here that mat3dd(1) produces a diagonal tensor with 1 along its diagonal, i.e., the identity tensor.

    Finally, inside the double loop, you need to evaluate the antisymmetric tensor A (which appears in (15) and (16)) as
    Code:
                        mat3da A(dxr*Gs[j] - dxs*Gr[j]);
    FEBio defines the class mat3da for antisymmetric tensors, and the above initialization accepts the dual vector as its argument.
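    For reference, the dual-vector relationship is the standard one (you can confirm the sign convention FEBio uses for mat3da in mat3d.h): for a dual (axial) vector a,
    $$A = \begin{bmatrix} 0 & -a_z & a_y \\ a_z & 0 & -a_x \\ -a_y & a_x & 0 \end{bmatrix}, \qquad A\,\mathbf{w} = \mathbf{a} \times \mathbf{w}.$$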

    Now you have everything you need to calculate Kab inside the double loop, according to equation (15). You should replace the existing code for evaluating the element stiffness matrix ke with
    Code:
                        ke[3*i  ][3*j  ] += Kab(0,0);
                        ke[3*i  ][3*j+1] += Kab(0,1);
                        ke[3*i  ][3*j+2] += Kab(0,2);
    
                        ke[3*i+1][3*j  ] += Kab(1,0);
                        ke[3*i+1][3*j+1] += Kab(1,1);
                        ke[3*i+1][3*j+2] += Kab(1,2);
    
                        ke[3*i+2][3*j  ] += Kab(2,0);
                        ke[3*i+2][3*j+1] += Kab(2,1);
                        ke[3*i+2][3*j+2] += Kab(2,2);
    Since Kab is not symmetric (based on the formulation), you will need to solve problems that use this constraint by setting the flag <symmetric_stiffness>0</symmetric_stiffness> in the <Control> section. If you use a quasi-Newton method, you should use Broyden instead of BFGS (<qnmethod>1</qnmethod>).

    As usual, I am available to answer any questions.

    Best,

    Gerard


    • Fluvio
      Member
      • Dec 2018
      • 43

      #17
      Gerard,

      I went through your notes and updated our code.
      Here is the latest 'Kab' evaluation:

      Code:
      vec3d nu = dxr ^ dxs;
      double da = nu.unit();
      mat3ds Nu = dyad(nu);
      mat3ds ImN = mat3dd(1) - Nu;
      mat3d U = (mat3dd(1)*(u*nu) + (nu & u))*ImN;
      
      for (int i = 0; i < neln; ++i)
          for (int j = 0; j < neln; ++j)
          {
              mat3d kab;
              mat3da A(dxr*Gs[j] - dxs*Gr[j]);

              kab = (U*A*N[j] + N[j]*N[i]*Nu*da)*w[k]*0.5*p*m_eps;

              ke[3 * i    ][3 * j    ] += kab(0, 0);
              ke[3 * i    ][3 * j + 1] += kab(0, 1);
              ke[3 * i    ][3 * j + 2] += kab(0, 2);

              ke[3 * i + 1][3 * j    ] += kab(1, 0);
              ke[3 * i + 1][3 * j + 1] += kab(1, 1);
              ke[3 * i + 1][3 * j + 2] += kab(1, 2);

              ke[3 * i + 2][3 * j    ] += kab(2, 0);
              ke[3 * i + 2][3 * j + 1] += kab(2, 1);
              ke[3 * i + 2][3 * j + 2] += kab(2, 2);
          }
      }   // closes an enclosing block (e.g. the integration-point loop) not shown in this snippet
      I also updated the forces in the residual function:

      Code:
      for (int j = 0; j < neln; ++j)
      {
      	fe[3 * j]		-= H[j] * v.x;
      	fe[3 * j + 1]	-= H[j] * v.y;
      	fe[3 * j + 2]	-= H[j] * v.z;
      }
      I still need to clean some redundant definitions, but the code compiles as is.

      We were also able to run the constraint on a model I created for this problem (EXP_wNC_job.feb), but I do not get any convergence.
      The program does not seem to use augmentation at all, so my guess is that I need to work on the 'Augmentation' function of the code.
      If you decide to run our .feb, disregard the second part defined as a rigid body; I left it as a reference to the boundary we are trying to recreate.

      We will work on this today!

      Fluvio L. Lobo Fenoglietto
      CTO, Principal Engineer
      Digital Anatomy Simulations for Healthcare, LLC


      • ateshian
        Developer
        • Dec 2007
        • 1839

        #18
        Hi Fluvio,

        In the double loop, please keep in mind that 'a' is the same as 'i' and 'b' is the same as 'j'. So you should use
        kab = (U*A*N[i] + N[i]*N[j]*Nu*da)*w[k]*m_eps;
        Please delete the variable 'p' from your calculations since it is not relevant here. (This could be the reason you were not able to get convergence.)

        The program does not seem to use augmentation at all, so my guess is that I need to work on the 'Augmentation' function of the code.
        There is no compelling need to implement augmentation for this problem. So for now you can declare the Augment function in the header file as
        bool Augment(int naug, const FETimeInfo& tp) override { return true; }
        If you want to implement augmentation, you would need to create a double array that stores the value of eps*(u*n) at each integration point at the end of each iteration as Lagrange multipliers. We can discuss that later if you feel that this would be needed.
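        To make this concrete, here is a rough sketch of what that could look like; the member name m_Lm and the update line are hypothetical illustrations rather than actual plugin code:
        Code:
                // No-op augmentation, declared in the header as suggested above:
                bool Augment(int naug, const FETimeInfo& tp) override { return true; }

                // If augmentation is implemented later, a (hypothetical) member could hold
                // one Lagrange multiplier per integration point of the constrained surface:
                //     std::vector<double> m_Lm;
                // At the end of each augmentation, every entry would be updated with the
                // current value of eps*(u*n) at that point, e.g. m_Lm[n] += m_eps*(u*nu);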

        Let me know if the changes suggested above resolve the convergence issue on your end.

        Best,

        Gerard


        • Fluvio
          Member
          • Dec 2018
          • 43

          #19
          Gerard,

          I am not getting convergence after removing 'p' and overriding the augmentation function.
          I will try different step sizes and penalty factors later (currently 0.01 and 10, respectively).
          I have the cutback set to aggressive, and the step size goes down to 1e-09 without any success.

          Like I said, I have not played around with it enough; let me know if you have any other thoughts.

          Fluvio L. Lobo Fenoglietto
          CTO, Principal Engineer
          Digital Anatomy Simulations for Healthcare, LLC


          • ateshian
            Developer
            • Dec 2007
            • 1839

            #20
            Hi Fluvio,

            If you send (or share with) me your full code, I can take a closer look.

            Best,

            Gerard


            • Fluvio
              Member
              • Dec 2018
              • 43

              #21
              Gerard,

              Sorry I did not get to it yesterday.
              Here are the .cpp, .h, .dll (zipped), and .feb files:

              FENormalConstraint.zip
              EXP_wNC2_job.feb

              Thank you!

              Fluvio L. Lobo Fenoglietto
              CTO, Principal Engineer
              Digital Anatomy Simulations for Healthcare, LLC


              • tsims
                Junior Member
                • Feb 2020
                • 13

                #22
                Gerard,

                We have been working on the code and have made some progress since the last post, so we wanted to provide some updates. We now have a model that converges, or at least begins to converge. To do this, we implemented an auto-penalty factor similar to the one in the FEFacet2FacetSliding class, with some modifications. Specifically, rather than assigning the penalty factor per point, the overall penalty factor was determined as an average of the penalties at each point, since I could not determine a way to assign the penalty by point in the interface that we are using.
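                To make the averaging idea concrete, here is a rough sketch of the approach described above (CalcPointPenalty is a placeholder for however the per-point value is estimated, not an actual FEBio function):
                Code:
                // Hypothetical sketch: average the per-point penalty estimates over all
                // integration points of the constrained surface, and use the mean as the
                // single overall penalty factor m_eps.
                double sum  = 0.0;
                int    npts = 0;
                for (int i = 0; i < m_s.Elements(); ++i)
                {
                    FESurfaceElement& el = m_s.Element(i);
                    for (int n = 0; n < el.GaussPoints(); ++n)
                    {
                        sum += CalcPointPenalty(el, n);   // placeholder per-point estimate
                        ++npts;
                    }
                }
                if (npts > 0) m_eps = sum / npts;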

                Our update code can be found here:
                FENormalConstraint.zip
                EXP_wNC2_job.feb

                Thanks for your continued help!
                ~Tres


                • ateshian
                  Developer
                  • Dec 2007
                  • 1839

                  #23
                  Hi Tres, Fluvio,

                  I went over your code and found one bug and one omission:

                  (1) Bug: in FEFixedNormalDisplacement::StiffnessMatrix, FESurfaceElement& el = m_s.Element(1); should be FESurfaceElement& el = m_s.Element(l); (you need the letter 'l', not the number '1').
                  (2) Omission: m_binit should be initialized to false in FEFixedNormalDisplacement::FEFixedNormalDisplacement(FEModel* pfem). (A small sketch of both fixes follows.)
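                  For reference, a minimal sketch of those two fixes; the surrounding lines are only indicative of where they go, not your actual code:
                  Code:
                  // (1) Use the loop index 'l' (letter), not the literal 1:
                  for (int l = 0; l < m_s.Elements(); ++l)
                  {
                      FESurfaceElement& el = m_s.Element(l);   // was m_s.Element(1)
                      // ... assemble the element stiffness matrix ke ...
                  }

                  // (2) Constructor: start with the constraint surface uninitialized
                  //     (base-class initializer omitted here):
                  FEFixedNormalDisplacement::FEFixedNormalDisplacement(FEModel* pfem)
                  {
                      m_binit = false;
                  }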

                  You can try making just those changes to your code and see if that resolves the convergence issue. If you want, you can also look at the modifications I made to your files (FENormalConstraint.zip): I deleted all the functions that I believe are not necessary for this constraint (they were needed for FEVolumeConstraint), and I made the changes based on the two points listed above.

                  I was able to run the two model files you included in this post. I used a penalty of 1e5 (instead of 5). We can discuss the results after you've had a chance to adjust your code.

                  Best,

                  Gerard


                  • Fluvio
                    Member
                    • Dec 2018
                    • 43

                    #24
                    Gerard,

                    It is working!
                    Here is a comparison with the same model using sliding-elastic contact (with tension enabled).

                    wC.PNG
                    Fig.1: Total Displacement, Sliding-Elastic Contact between Soft Tissue and Rigid Material

                    NC.PNG
                    Fig.2: Total Displacement, Normal Constraint on curved surface of Soft Tissue

                    We definitely want to discuss these results!
                    We were just about to review the function for the auto-penalty calculation.
                    I used penalties of 1, 5, and 10, while the auto-penalty function returned 0.31; how did you get 1e5?

                    Fluvio L. Lobo Fenoglietto
                    CTO, Principal Engineer
                    Digital Anatomy Simulations for Healthcare, LLC


                    • ateshian
                      Developer
                      • Dec 2007
                      • 1839

                      #25
                      Hi Fluvio,

                      I am glad it's working.

                      The penalty I chose was not based on any of the properties of the materials in the model. Basically I ran the analysis with 1, 10, 100, then jumped to 1e5 while looking for convergence of the results. I did not encounter any difficulties with the penalty method alone, regardless of the value of the penalty parameter. That's why I believe that Lagrange augmentation is not strictly needed for this particular constraint.

                      I am glad that you are getting similar results with the sliding-elastic contact with tension enabled. But please double-check whether you see tangential motion for the faces whose normal displacement has been constrained. The sliding-elastic contact should allow for it, but my observation with your test problem is that there is no noticeable sliding.

                      Best,

                      Gerard


                      • tsims
                        Junior Member
                        • Feb 2020
                        • 13

                        #26
                        Hello Gerard!

                        Thank you again for your help. We have the quarter circle face constrained, and we do see some tangential motion along the surface!



                        Do you think that it is worth submitting this plugin to https://febio.org/plugins/submit-a-plugin/ ?

                        ~Tres Sims


                        • Fluvio
                          Member
                          • Dec 2018
                          • 43

                          #27
                          Gerard,

                           Tres' results were obtained with a penalty factor of 10.
                          If we use a greater penalty, like 1e5, we do not get any tangential sliding.

                          NC_P10.PNG
                          Fig.1: Total Displacement, Penalty = 10

                          NC_P1e5.PNG
                          Fig.2: Total Displacement, Penalty = 1e5

                          Is the penalty acting as a frictional component?
                          If so, this is great as we want to enable a friction style parameter.
                          We just want to make sure it makes physical sense.

                          Fluvio L. Lobo Fenoglietto
                          CTO, Principal Engineer
                          Digital Anatomy Simulations for Healthcare, LLC


                          • ateshian
                            Developer
                            • Dec 2007
                            • 1839

                            #28
                            Hi Fluvio,

                            The normal displacement constraint gets enforced very strictly with higher penalty parameters. In a numerical scheme with finite time steps, what would normally be a tangential displacement would become a displacement along a secant line. However, such a secant must necessarily include a normal component, which a high penalty parameter prevents from happening. That's why increasing the penalty parameter significantly reduces any tangential motion.
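                             (Put in the notation used earlier in this thread: the constraint only penalizes the normal component u*nu of the displacement increment, but over a finite step the secant direction u generally has u*nu != 0, so driving that component to zero with a large penalty also shuts down the motion that would have been purely tangential in the continuous limit.)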

                            So it makes sense from a theoretical perspective to observe this outcome. However, in effect, this zero-normal-displacement constraint becomes equivalent to a fixed displacement boundary condition, which seems to limit its benefit? Let me know your thoughts.

                            Best,

                            Gerard


                            • Fluvio
                              Member
                              • Dec 2018
                              • 43

                              #29
                              Gerard,

                               First, here are some additional results comparing contact to three penalty parameters (1, 10, and 100).

                              CPF2.PNG
                              Fig.1. Contact

                              P1.PNG
                              Fig.2. Normal Constraint (NC), with Penalty = 1

                              P10.PNG
                              Fig.3. Normal Constraint (NC), with Penalty = 10

                              P100.PNG
                              Fig.4. Normal Constraint (NC), with Penalty = 100

                               A penalty factor somewhere between 1 and 10 should give us the same result as the contact problem.
                               I left the 'ribcage' model in to visualize the 'penetration' between the two parts at lower penalty values.

                               Perhaps we could use the value of the normal displacement, associated with the secant displacement, as a sort of 'gap tolerance' that can drive an 'auto-penalty' calculation? (thinking out loud)

                              Also,
                              Would a finer discretization of the surface reduce this effect?
                               Do you think the issue scales with the simulation step size? Could a dynamic penalty factor help?

                               I have yet to try the constraint on our bigger models, but I am motivated to use it 'as is'. After completing this constraint, my hope was to add a friction parameter, similar to the sliding-elastic friction but enabled while in tension.
                               I like that our penalty already works in this way, minus the 'penetration' we see with lower penalty values.
                               In my mind, most of these boundaries will never be rigid, so some compliance and deformation should be fine.
                               However, I think this should be quantified as a means to avoid running a contact problem just to verify that I am getting the right displacements (as done above), which circles back to my point of using the normal displacement as a 'gap tolerance'.

                              What do you think?

                              Fluvio L. Lobo Fenoglietto
                              CTO, Principal Engineer
                              Digital Anatomy Simulations for Healthcare, LLC


                              • ateshian
                                Developer
                                • Dec 2007
                                • 1839

                                #30
                                Hi Fluvio,

                                Perhaps we could use the value of the normal displacement, associated with the secant displacement, as a sort of 'gap tolerance' that can drive an 'auto-penalty' calculation? (thinking out loud)
                                The short answer is yes, of course you could think of it like that. The more detailed answer is that this normal displacement constraint is a non-standard condition that you have chosen to adopt for your model, so you do have some flexibility in deciding how to interpret it.

                                Would a finer discretization of the surface reduce this effect?
                                Perhaps. This is something that would need to be checked empirically. If it turns out that the discretization influences the results significantly for a given choice of the penalty, that would be an undesirable outcome. Ideally, you want your formulation to converge to a consistent result as you refine the mesh. Otherwise, if you write a paper describing your result and you state that a given penalty factor captures the right balance between enforcing the normal displacement constraint and providing the right amount of friction, this conclusion may not be reproducible when applied to a different mesh or geometry. (In contrast, the friction coefficient in a sliding contact interface should produce consistent results for different geometries and converged meshes).

                                However, I think this should be quantified as a means to avoid running a contact problem just to verify that I am getting the right displacements (as done above), which circles back to my point of using the normal displacement as a 'gap tolerance'.
                                Would you like us to modify the existing sliding-elastic interface to produce meaningful results when friction is non-zero and the tension flag is turned on? Perhaps that would provide a better alternative to this constrained normal displacement (if it fails the convergence test outlined above)? Let me know.

                                Best,

                                Gerard

