jones007 |
November 4, 2020 12:18 |
Quote:
Originally Posted by mz_uon
(Post 786750)
Hi
Thanks for your reply. My aim, though, is to mesh it in the mesh component rather than in CFX/Fluent. But I see what you mean, so can you create separate parts in the mesh component?
Regards
|
I don't know the answer to that. I've always created the divisions in CAD. If you import the assembly as a single file into ANSYS, the block-to-block connections are usually created automatically; I've never had to do any additional work in either the mesher or Pre to make them. I've always assumed that the block-to-block connections add roughly the same overhead as the partitioning the solver does for parallel processing. There might be a little extra work to interpolate data if the nodes don't match across the boundaries. I think there may be a way to force a node match, but I have not explored that option. For steady solutions, as long as the element sizes are similar across the block boundaries, I've never seen anything non-physical show up. When possible, I try to avoid having a block interface cut through a high-gradient region, but for really large jobs with complicated surface topologies, it may be quite challenging or undesirable to keep the entire body surface in one block.
If evcelica is opposed to this approach, perhaps they can suggest alternatives for tackling jobs that are impractical or impossible to run on a single core. The challenge I have run into is that modern computer architectures optimized for high-speed solving typically have a large number of slower cores. If you can take advantage of all of those cores, such machines are faster than their counterparts with fewer but faster cores. It's trivial to run the flow solvers in parallel to take advantage of multiple cores. However, my experience has been that if you give the mesher a single-block domain, it will only use a single core, so machines with fewer, faster cores will excel at meshing. Since I don't really want to use one machine for meshing and another for solving, domain decomposition has been my workaround.
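The core-count trade-off above can be sketched with Amdahl's law. The machine specs and parallel fractions below are illustrative assumptions, not benchmarks:

```python
def speedup(parallel_fraction, cores):
    """Amdahl's law: the serial fraction caps the achievable speedup."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Hypothetical machines (assumed numbers for illustration only):
#   "many-slow": 20 cores at baseline per-core speed
#   "few-fast":   8 cores, each 1.5x the baseline per-core speed
solve_p = 0.95  # assume 95% of the solver's work parallelizes
many_slow_solve = speedup(solve_p, 20)
few_fast_solve = 1.5 * speedup(solve_p, 8)

# A single-block mesher is effectively serial (parallel fraction ~ 0),
# so only per-core speed matters for that step.
many_slow_mesh = speedup(0.0, 20)
few_fast_mesh = 1.5 * speedup(0.0, 8)
```

Under these assumed numbers, the many-core box wins the solve (about 10.3x vs. 8.9x) while the fast-core box wins the serial meshing step (1.5x vs. 1.0x), which is exactly the tension described above.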
Some of my larger jobs have on the order of 40 million nodes and 120 million cells. Grid generation on a single block took about 7 to 8 hours, and if the mesher failed or you were unhappy with the mesh, it was another day to get a new one. This was on a machine with a pair of Xeon Silver chips and 96 GB of RAM, but during meshing just one core was in use by the mesher. Once I had a mesh, I could use something like 20 cores for the solver, and I could get a converged CFX solution using SST and GT transition modeling in less time than it took to get a mesh.
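A rough sketch of why decomposition also softens the re-mesh penalty: if the blocks mesh independently, a failure only costs roughly that block's share of the total time. The even-split model and the 7.5-hour figure below are assumptions for illustration; real block meshing times won't divide this evenly:

```python
def remesh_cost(total_mesh_hours, n_blocks):
    """Assumed model: blocks mesh independently and evenly, so a
    failed block costs only its own share of the total mesh time."""
    return total_mesh_hours / n_blocks

whole_domain = remesh_cost(7.5, 1)   # single-block remesh: the full 7.5 h
one_of_ten = remesh_cost(7.5, 10)    # redo one block of a 10-block split
```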
I hope this helps.
|