# Interpolation with multiple weighting criteria


May 5, 2015, 10:52 Interpolation with multiple weighting criteria #1 New Member   Join Date: May 2015 Posts: 7 Rep Power: 8

Hi, I am trying to perform interpolation at a point using information from a scattered set of data points (cell centroids, where I have velocity etc. stored). The number of cell centroids I will have for a given interpolation can vary, depending on the surface orientation (in an immersed-boundary kind of set-up). Because of this, I decided to use moving-least-squares (MLS) based interpolation, which has the advantage of accepting an arbitrary number of data points as long as that number is greater than my basis size. Everything works out OK if I just want to do this (that is, interpolate based on the distance of each data point to the interpolation point).

Now the issue is this: the values at the cell centroids may be somewhat inaccurate, depending on how close they are to the surface. Hence, I have an additional factor w2 at each data point (which varies between 0 and 1 and essentially tells me how correct a given value is: close to 0 means very high inaccuracy, so I shouldn't give it much weight, while close to 1 means accurate, so it's fine to include it in the interpolation) that I want to include in the interpolation. Is there a mathematically consistent way I can include w2 in the MLS formulation?

I have tried doing the original MLS calculation, then multiplying the weights so obtained by the corresponding w2 values and re-normalizing the weights. This results in a very noisy interpolation. I am also open to any other interpolation technique that may suit my scenario, if it can inherently account for my requirements. I will greatly appreciate it if anybody can help me with this.
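One mathematically consistent option is to multiply the reliability factor into the MLS weight function itself, so that w2 enters the weighted least-squares minimization, rather than re-scaling the final weights afterwards. A minimal Python/NumPy sketch with a linear basis in 2D; the Gaussian distance weight and the function name are illustrative assumptions, not the poster's actual code:

```python
import numpy as np

def mls_interpolate(x_eval, pts, vals, w2, h):
    """Weighted MLS with a linear basis: each residual is weighted by the
    distance weight times the reliability factor w2 (0 = ignore, 1 = trust)."""
    d2 = np.sum((pts - x_eval) ** 2, axis=1)
    w = np.exp(-d2 / h**2) * w2                   # combined weight per data point
    P = np.hstack([np.ones((len(pts), 1)), pts])  # basis [1, x, y] at data points
    A = P.T @ (w[:, None] * P)                    # normal-equation matrix
    b = P.T @ (w * vals)
    coeffs = np.linalg.solve(A, b)                # needs enough points with w > 0
    return np.concatenate(([1.0], x_eval)) @ coeffs
```

Because w2 modifies the minimization itself, the fit still reproduces the basis exactly (e.g., linear fields) for any positive weights, which post-hoc re-weighting and re-normalization does not guarantee; that difference may explain the noisy results.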

May 5, 2015, 12:15 #2 Senior Member   Filippo Maria Denaro Join Date: Jul 2010 Posts: 5,734 Rep Power: 60 Can you produce a tessellation of the scattered data? Triangles (2D) and tetrahedra (3D) can be used as Lagrangian simplices. You can use shape functions in an FE manner.
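For reference, linear shape-function interpolation on a simplex amounts to evaluating barycentric coordinates and blending the vertex values. A minimal 2D sketch (the function name is illustrative):

```python
import numpy as np

def simplex_interp(p, verts, vals):
    """Linear FE shape-function interpolation on a triangle: compute the
    barycentric coordinates of p and blend the three vertex values."""
    T = np.column_stack([verts[1] - verts[0], verts[2] - verts[0]])
    l1, l2 = np.linalg.solve(T, p - verts[0])
    l0 = 1.0 - l1 - l2                    # the shape functions sum to one
    return l0 * vals[0] + l1 * vals[1] + l2 * vals[2]
```

The same construction extends to a tetrahedron in 3D with a 3x3 system and four barycentric coordinates.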

May 5, 2015, 12:30 #3 New Member   Join Date: May 2015 Posts: 7 Rep Power: 8 Thanks for the suggestion. I am not very knowledgeable about finite element techniques, but I am replying based on what I think I understood from your reply. Yes, tessellation (forming triangles/tetrahedra) from the data points is certainly possible. However, interpolation using only the 3 (triangle, 2D) or 4 (tetrahedron, 3D) points that surround the interpolation point will not be sufficiently accurate for my purpose. I would ideally like to use all the data points available, but weight those points according to distance as well as the other factor w2, which dictates how accurate the value at a given data point is. Like I said, MLS does a very good job when only the distance criterion is involved, but I am unable to figure out how to include the factor w2. If I have misunderstood your suggestion, please feel free to clarify. Thanks.

May 5, 2015, 12:41
#4
Senior Member

Filippo Maria Denaro
Join Date: Jul 2010
Posts: 5,734
Rep Power: 60
Quote:
 Originally Posted by i_m3_mys3lf Thanks for the suggestion. I am not very knowledgeable on finite element techniques, but I am replying based on what I think I understood from your reply. Yes, tessellation (forming triangles/tetrahedra) from the data points is certainly possible. However, interpolation using only the 3 (triangle - 2D) or 4 (tetrahedron - 3D) points that surround the interpolation point will not be sufficiently accurate for my purpose. I would like to ideally use all the data points available, but just weight those points according to distance as well as the other factor w2 which dictates how accurate the value at a given data point is. Like I said, MLS does a very good job if it was only the distance criteria, but I am unable to figure how to include the factor w2. If I have misunderstood your suggestion, please feel free to clarify. Thanks.

First, you can build higher-order shape functions, for example in 2D using a 6-node simplex... but you have to think about what you are looking for. If your data are collected with some errors (experimental or numerical, you did not specify), it is not really worthwhile using higher-order interpolation... in many cases, first or second degree can be sufficient.
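As a concrete reference for the 6-node option, the standard quadratic shape functions of the 6-node triangle can be written in barycentric coordinates; the node ordering below (three corners, then three mid-sides) is an assumption:

```python
import numpy as np

def quad6_shape(l0, l1, l2):
    """Quadratic shape functions of the 6-node triangle in barycentric
    coordinates: corners l_i * (2*l_i - 1), mid-sides 4 * l_i * l_j."""
    return np.array([
        l0 * (2 * l0 - 1), l1 * (2 * l1 - 1), l2 * (2 * l2 - 1),  # corner nodes
        4 * l0 * l1, 4 * l1 * l2, 4 * l2 * l0,                    # mid-side nodes
    ])
```

These form a partition of unity (they sum to one for any valid barycentric point) and each equals one at its own node and zero at the others, so interpolation is just the dot product of the shape functions with the six nodal values.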

 May 5, 2015, 12:48 #5 Senior Member   Filippo Maria Denaro Join Date: Jul 2010 Posts: 5,734 Rep Power: 60

May 5, 2015, 14:22 #6 New Member   Join Date: May 2015 Posts: 7 Rep Power: 8 Actually, maybe I spoke too soon. I had another look at my code, and it turns out that I can do tessellation, but with some degree of difficulty (in terms of memory storage etc.). Also, I will have a large number of points where the interpolation is to be done, and at each of these points the exact number of data points I have will keep varying. In fact, this aspect was one of the reasons why I chose MLS in the first place: so I could use the same scheme for any arbitrary number of data points. Given these two aspects, I feel it will be difficult to achieve what I want to do. Am I thinking about this the right way?

In any case, the more interesting point is being able to include the additional factor w2 to account for the error in the data collection (the data is numerical, BTW). How do you propose to implement this aspect, assuming tessellation would work out? Maybe even if I don't use tessellation, this may give some insight into how to achieve what I want to do.

@others: I would also be interested in any other ideas others might have. Thanks.

May 5, 2015, 17:25 #7 Senior Member     Paolo Lampitella Join Date: Mar 2009 Location: Italy Posts: 1,546 Blog Entries: 20 Rep Power: 32 Have you considered Radial Basis Functions? They can interpolate fully unstructured data sets without singularity (except for coinciding points, of course) and, if I remember correctly, they also allow a smoothing parameter which will, somehow, make the interpolation not exact but smoothed. I just do not remember if such smoothing has to be global or can be local (i.e., different for each interpolation point). If it is local, I guess you have something to look at. The first place to look for RBFs is: https://amath.colorado.edu/faculty/fornberg/ Moreover, Lorena Barba's group has produced open-source code which you can look at for efficient parallel interpolation over large sets of points. If your interpolation stencils only involve a few tens of points and are local, then I guess you can just go with the bookkeeping.
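For context, the usual way a smoothing parameter enters RBF interpolation is through the diagonal of the kernel matrix, so the fit no longer has to pass exactly through each data point. A minimal Gaussian-RBF sketch in Python/NumPy; the additive-ridge form of the smoothing and the function name are assumptions (zero smoothing recovers exact interpolation):

```python
import numpy as np

def rbf_interpolate(x_eval, pts, vals, eps, smooth):
    """Gaussian RBF interpolation; smooth is a per-point smoothing term
    added to the kernel-matrix diagonal (all zeros = exact interpolation)."""
    D2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    A = np.exp(-(eps ** 2) * D2) + np.diag(smooth)  # smoothing relaxes exactness
    lam = np.linalg.solve(A, vals)                  # RBF expansion coefficients
    d2 = np.sum((pts - x_eval) ** 2, axis=1)
    return np.exp(-(eps ** 2) * d2) @ lam
```

Since smooth is a vector here, a per-point value such as smooth_i proportional to (1 - w2_i) would give exactly the kind of local control asked about, though that mapping is only a suggestion.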

May 5, 2015, 17:39
#8
Senior Member

Filippo Maria Denaro
Join Date: Jul 2010
Posts: 5,734
Rep Power: 60
Quote:
 Originally Posted by i_m3_mys3lf Actually, maybe I spoke too soon. I had a re-look at my code and it turns out that I can do tessellation, but with some degree of difficulty (in terms of memory storage etc.). Also, I will have a large number of points where the interpolation is to be done and at each of these points, the exact number of data points I have will keep on varying. In fact, this aspect was one of the reasons why I chose MLS in the first place - so I could use the same scheme for any arbitrary number of data points. Given these two aspects, I feel that it will be difficult to achieve what I want to do? Am I thinking the right way? In any case, the more interesting point is to be able to include the additional factor w2 to account for the error in the data collection (the data is numerical, BTW). How do you propose to implement this aspect, assuming tessellation would work out? Maybe even if I don't use tessellation, this may give some insight about how to achieve what I want to do. @others: I would also be interested in any other ideas if others might have.. Thanks.

Have a look at and try cftool in MATLAB.

May 6, 2015, 15:17
#9
New Member

Join Date: May 2015
Posts: 7
Rep Power: 8
Quote:
 Originally Posted by sbaffini Have you considered Radial Basis Functions? They can interpolate fully unstructured data sets without singularity (except for coinciding points, of course) and, if i remember well, they also allow a smoothing paramether which will, somehow, make the interpolation not exact but smoothed. I just do not remember if such smoothing has to be global or can be local (i.e., different for each interpolation point). If it is local, i guess you have something to look at. The first place to look at for RBF is: https://amath.colorado.edu/faculty/fornberg/ Moreover, the Lorena Barba's Group has produced open source code which you can look at for efficient parallel interpolation over large sets of points. If your interpolation stencils only involve few 10's of points and are local, then i guess you can just go with the bookkeeping.
Thanks! I haven't looked at Radial Basis Functions; I will have a look and see if they will work out. If a local smoothing parameter is indeed possible, that is probably what I am looking for. Thanks again.

May 6, 2015, 16:23 #10 Senior Member     Paolo Lampitella Join Date: Mar 2009 Location: Italy Posts: 1,546 Blog Entries: 20 Rep Power: 32 Dear i_m3_mys3lf, you can find attached a very simple set of MATLAB/Octave routines which you can use to experiment with RBF interpolation. In particular, a typical call would be:

yrbf = evalRBFGA(intRBFGA(xd, yd, rbf_type, rbf_par, rbf_smc, sys_sol), xd, xrbf, rbf_type, rbf_par);

where:

- xd is the n_points x n_dimensions array of locations of the points holding your original data (yd). Note that the routines work for arbitrary n_dimensions > 0.
- yd is your original data at those points. I suggest working with scalar data (n_points x 1), as I do not recall if the routines actually work for multi-dimensional yd (even if it seems so from part of the code).
- xrbf are the new locations (n_new_points x n_dimensions) where you want to interpolate.
- yrbf is the result of the interpolation at these new locations.
- rbf_type is a string holding one of the following values: 'MQ', 'IMQ', 'IQ', 'GA', which are the available types of RBF interpolants.
- rbf_par is a constant parameter which is usually a pain in the ass. For your tests, I suggest first selecting it as the inverse of the minimum distance among the original points and then trying to improve on it (the smaller the better but, at some point, the system becomes very ill-conditioned, so you need to find a trade-off).
- rbf_smc is the smoothing constant we were talking about. Note that in these routines I have it constant over all the points (i.e., it is a scalar value). Start playing with the scalar value just to get confident with it (a value of 0 means no smoothing, while the maximum value is 1, which means the point is not interpolated at all). Then, if you want to use local values, you should slightly modify the routine intRBFGA (note that the constant is just subtracted from the matrix diagonal, so you should subtract your new rbf_smc vector from the diagonal).
- sys_sol is a string holding one of the following values: 'pinv', 'gmres', 'bicgstab', 'minres', 'pcg', 'inv', which are the possible methods to solve the system in the interpolation problem. I suggest 'pinv' or 'inv' to start with.

Note that substituting evalRBFGA with evalDRBFGA will produce the gradient of the interpolant; hence it is a method to compute the gradient of an unstructured dataset (just like least squares). As with all interpolations, it is better not to use the RBF for extrapolation (with xrbf outside the convex hull of the original xd).

EDIT: The original attachment got... detached. I will try again with another post.
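For anyone without MATLAB at hand, here is a minimal Python/NumPy sketch of that local-smoothing modification for the Gaussian ('GA') kernel. The function names only mirror the routines described above; this is an independent sketch, not a checked translation of the attachment, and the smoothing vector is subtracted from the diagonal exactly as described (note that adding a positive ridge to the diagonal is the more common convention):

```python
import numpy as np

def int_rbf_ga_local(xd, yd, eps, rbf_smc):
    """Fit Gaussian-RBF coefficients with a LOCAL smoothing vector rbf_smc,
    subtracted from the kernel-matrix diagonal as described in the post
    (rbf_smc[i] = 0: interpolate point i exactly; near 1: largely ignore it)."""
    D2 = np.sum((xd[:, None, :] - xd[None, :, :]) ** 2, axis=-1)
    A = np.exp(-(eps ** 2) * D2)
    A[np.diag_indices_from(A)] -= rbf_smc   # per-point vector, not a scalar
    return np.linalg.solve(A, yd)

def eval_rbf_ga(coeffs, xd, xnew, eps):
    """Evaluate the fitted Gaussian RBF at the new locations xnew."""
    D2 = np.sum((xnew[:, None, :] - xd[None, :, :]) ** 2, axis=-1)
    return np.exp(-(eps ** 2) * D2) @ coeffs
```

A natural link to the original question is to set rbf_smc[i] proportional to (1 - w2[i]), so unreliable points are smoothed over rather than interpolated exactly; that mapping is a suggestion, not part of the attached routines.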

May 6, 2015, 16:25
#11
Senior Member

Paolo Lampitella
Join Date: Mar 2009
Location: Italy
Posts: 1,546
Blog Entries: 20
Rep Power: 32
Second try for the attachment

EDIT: It worked
Attached Files
 RBF.zip (2.0 KB, 4 views)

May 7, 2015, 12:06 #12 New Member   Join Date: May 2015 Posts: 7 Rep Power: 8 Thanks a lot! I will try it and let you know how it goes :-)

May 9, 2015, 05:50 #13 Senior Member     Paolo Lampitella Join Date: Mar 2009 Location: Italy Posts: 1,546 Blog Entries: 20 Rep Power: 32 May I ask where your "reliability" weights come from in the immersed boundary context?

May 11, 2015, 11:03 #14 New Member   Join Date: May 2015 Posts: 7 Rep Power: 8 The "reliability" essentially comes from knowing that, the way I am doing things, the results may be off because of mass conservation in the cells that are "cut" by the surface. I have noticed that the smaller the volume of these cut cells, the larger the error, so I am trying to use that to bias my interpolation and somehow reduce the effect of those erroneous values on the final interpolation.
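Given that relationship (smaller cut-cell volume, larger error), one simple way to build the reliability factor is directly from the cut-cell volume fraction. The specific mapping below is purely illustrative; the thread only states the qualitative trend:

```python
def reliability_weight(cut_volume, full_volume):
    """Hypothetical w2: the volume fraction of the cut cell, clipped to [0, 1].
    Tiny cut cells (the error-prone ones) get a weight near 0."""
    return min(max(cut_volume / full_volume, 0.0), 1.0)
```

Any monotone function of the volume fraction (e.g. a power or a smoothstep) would serve equally well; the clipping just keeps w2 in the [0, 1] range assumed throughout the thread.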

 Tags interpolation, moving least squares, scattered data