One of the tough challenges in creating meshes for CFD simulations is the need to build a mesh that works across very different geometry. With Overset meshing you can create the ideal mesh for each piece of geometry in your model and let the meshes overlap where they meet; the program handles the calculations at those boundaries. All of this is handled simply in the ANSYS Workbench interface and then combined in ANSYS Fluent.
One of the more common questions we get in tech support for ANSYS Mechanical and ANSYS Mechanical APDL on thermal expansion simulations revolves around how the Coefficient of Thermal Expansion, or CTE, is defined. This comes into play if the CTE of the material you are modeling changes with the temperature of that material.
This detailed presentation explains the differences between the Secant and Instantaneous methods, how to convert between them, and how to deal with extrapolating coefficients beyond the temperatures for which you have data.
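The conversion between the two definitions can be sketched numerically. The idea: the secant CTE defines total thermal strain relative to a reference temperature, eps(T) = alpha_sec(T) * (T - T_ref), while the instantaneous CTE is the derivative of that strain with respect to temperature. The Python sketch below (an illustration with made-up material values, not taken from the presentation) converts a tabulated secant CTE to its instantaneous form:

```python
import numpy as np

def secant_to_instantaneous(T, alpha_sec, T_ref):
    """Convert a tabulated secant CTE to an instantaneous CTE.

    Thermal strain:   eps(T) = alpha_sec(T) * (T - T_ref)
    Instantaneous:    alpha_inst(T) = d(eps)/dT
    """
    eps = alpha_sec * (T - T_ref)   # thermal strain at each table point
    return np.gradient(eps, T)      # numerical derivative d(eps)/dT

# Example: a secant CTE that rises linearly with temperature (assumed values)
T = np.linspace(20.0, 520.0, 251)          # temperature table, deg C
T_ref = 20.0                               # strain-free reference temperature
alpha_sec = 1.2e-5 + 4.0e-9 * (T - T_ref)  # secant CTE, 1/degC

alpha_inst = secant_to_instantaneous(T, alpha_sec, T_ref)
```

For a secant CTE of the form a + b*(T - T_ref), the instantaneous value works out analytically to a + 2b*(T - T_ref), which the numerical result reproduces away from the table endpoints; going the other direction (instantaneous to secant) requires integrating rather than differentiating.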
You can download a PDF of the presentation here.
The PADT sales and support team focused on simulation solutions is best known for our work with the full ANSYS product suite. What a lot of people don’t know is that we also represent a fantastic simulation tool called Flownex. Flownex is a system-level 1-D program designed from the ground up to model thermal-fluid systems.
What does Flownex Do?
Flownex Simulation Environment is an interactive software program that allows users to model systems to understand how fluids (gas and/or liquid) flow and how heat is transferred in the system due to that flow. The way it works is that you create a network of components that are connected together as a system. The heat and fluid transfer within and between each node is calculated over time, giving a very accurate, and fast, representation of the system’s behavior.
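To make the "network of nodes solved over time" idea concrete, here is a deliberately tiny sketch in plain Python. It is not Flownex's actual solver, and the capacitance and conductance values are assumed; it just shows the underlying picture of lumped nodes exchanging heat through connecting components and marching in time to steady state:

```python
# Two lumped thermal nodes: node 1 receives a heat input Q,
# node 2 loses heat to ambient. All values are assumed for illustration.
C1, C2 = 500.0, 800.0   # node thermal capacitances, J/K
G12 = 5.0               # conductance of the component linking node 1 to node 2, W/K
G2a = 10.0              # conductance from node 2 to ambient, W/K
Q = 100.0               # heat input at node 1, W
T_amb = 20.0            # ambient temperature, degC

T1 = T2 = T_amb
dt = 1.0                # time step, s
for _ in range(20000):  # explicit march to (near) steady state
    q12 = G12 * (T1 - T2)                      # heat flow node 1 -> node 2
    T1 += dt * (Q - q12) / C1
    T2 += dt * (q12 - G2a * (T2 - T_amb)) / C2

# Steady state by hand: T2 = T_amb + Q/G2a = 30 C, T1 = T2 + Q/G12 = 50 C
```

A real Flownex model does far more than this (compressible flow, component-specific correlations, implicit solvers), but the basic structure is the same: a network of components whose nodal states are solved through time.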
As a system simulation tool, it is fast, it is easy to build and change, and it runs in real time or even faster. This allows users to drive the design of their entire system through simulation.
Need to know what size pump you need? Use Flownex. Want to know if your heat exchanger is exchanging enough heat for every situation? Use Flownex. Tasked with making sure your nuclear reactor will stay cool in all operating conditions? Use Flownex. Making sure you have optimized the performance of your combustion nozzles? Use Flownex. Time to design your turbine engine cooling network? Use Flownex. Required to verify that your mine ventilation and fire suppression system will work? Use Flownex. The applications go on and on.
Why is Flownex so Much Better than other System Thermal-Fluid Modeling Solutions?
There are a lot of solutions for modeling thermal-fluid systems. We have found that the vast majority of companies use simple spreadsheets or home-grown tools. There are also a lot of commercial solutions out there. Flownex stands out for five key reasons:
- Breadth and depth of capability
Flownex boasts components, the objects you link together in your network, that span physics and applications. Whereas most tools focus on one industry, Flownex is a general-purpose tool that supports far more situations. For depth, the developers have taken the time over the years to go beyond simple models: each component has sophisticated equations that govern its behavior and user-defined parameters that allow for very accurate modeling.
- Developed by hard core users
Flownex started life as an internal code to support consulting engineers. Experienced engineering software programmers worked with those consultants day-in and day-out to develop the tools that were needed to solve real world problems. This is the reason why when users ask “What I really need to do to solve my problem is such-and-such, can Flownex do that?” we can usually answer “Yes, and here are the options to make it even more accurate.”
- Customization and Integration
As powerful and in-depth as Flownex is, there is no way to capture every situation for every user. Nor does the program do everything. That is why it is so open and so easy to customize and integrate. As an example, many customers have very specific thermal-pressure-velocity models for their particular components, models they developed after years if not decades of testing. Not a problem: that behavior can easily be added to Flownex. If a customer has their own software or a third-party tool they need to use, it is fairly easy to integrate it right into the Flownex system model. Very common tools are already integrated; the most common connection is Matlab/Simulink. At PADT we often connect customers’ Excel models into our system models for consulting. Flownex is also integrated with ANSYS Mechanical.
- Nuclear Quality Standards
Flownex came into its own as a tool used to model the fluid systems in and around nuclear reactors. So it has had to meet very rigorous quality standards; if not the most stringent in the industry, they are pretty close. This forced the tool to be very robust, accurate, and well documented. And the rest of us can take advantage of that intense quality requirement to meet and exceed the needs of pretty much every industry. We can tell you, after using it for our own consulting projects and after talking to other users, this code is solid.
- Ease of Use
Some people will read the advantages above and think that this is fantastic, but that much capability and flexibility must make it difficult to use. Nothing could be further from the truth. Maybe it’s because the most demanding users are down the hallway and can come and harangue the developers. Or it could be that their initial development goal of keeping ease of use without giving up on functionality was actually followed. Regardless of why, this simulation tool is amazingly simple and intuitive. From building the model to reviewing results to customization, everything is easy to learn, remember, and use. To be honest, it is actually fun to use. Not something a lot of simulation engineers say.
Why does buying and getting support from PADT for Flownex make a Difference?
The answer to this question is fairly simple: PADT’s simulation team is made up of very experienced users who apply this technology to our own internal projects as well as to consulting jobs. We know this tool and we also work closely with the developers at Flownex. As with our ANSYS products, we don’t just work on knowing how to use the tool; we put time in to understand the theory behind everything as well as the practical, real-world industry application.
When you call for support, odds are the engineer who answers is actively using Flownex on a customer’s system. We also have the infrastructure and size in place to make sure we have the resources to provide that support. Investing in a new simulation tool can generate needs for training, customization, and integration, not to mention traditional technical support. PADT partners with our customers to make sure they get the greatest value from their simulation software investment.
Reach out to Give it a Try or Learn More
Our team is ready and waiting to answer your questions or provide you with a demonstration of this fantastic tool. You can email us at firstname.lastname@example.org or give us a call at 480.813.4884 or 1-800-293-PADT.
Still want to learn more? Here are some links to more information:
- Check out our Flownex page.
- The Flownex website is full of great info.
- This video is a great introduction that gives you a feel of how powerful and intuitive it is.
- Check out the Flownex SE video page on YouTube. It has examples for many different industries where you can get a feel for the power and ease-of-use.
- The Flownex FAQ is fantastic. All those questions you have about “Can Flownex do this?” are there… along with some application specific questions they get a lot.
- Sign up for a demo.
- Contact us at email@example.com or Flownex at firstname.lastname@example.org
Sometimes everything happens at once. This June 22nd was one of those days. Three key events were scheduled for the same time in three different states and we needed to be at all of them. So everyone stepped up and pulled it off, and hopefully some of you reading this were at one of these fantastic events. Combined they are a great example of PADT’s commitment to the local technology ecosystem, showing how we create true win-win partnerships across organizations and geographies. Since the beginning we wanted to be more than just a re-seller or just consultants, and this Thursday was a chance to show our commitment to doing just that.
Albuquerque: New Mexico Technology Council 3D Printing Peer Group Kickoff
Everyone talks about how they think we should all work together, but there never seems to be someone willing to pull it all together. That is how the additive manufacturing community in New Mexico was until the New Mexico Technology Council (NMTC) stepped up to host a peer group around 3D Printing. Even though it was a record 103°F in Albuquerque, 35 brave 3D Printing enthusiasts ventured out into the heat and joined us at Rio Bravo Brewing to get the ball rolling on creating a cooperative community. We started with an introduction from NMTC, followed by an overview of what we want to achieve with the group. Our goals are:
- Create stronger cooperation between companies, schools, and individuals involved in 3D Printing in New Mexico
- Foster cooperation between organizations to increase the benefits of 3D Printing to New Mexico
- Make a contribution to New Mexico STEM education in the area of 3D Printing
To make this happen we will meet once a quarter, be guided by a steering committee, and grow our broad membership. Anyone with any involvement in Additive Manufacturing in the state is welcome to join in person or just be part of the on-line discussion.
Then came the best part, where we went around the room and shared our names, organizations, and what we do in the world of 3D Printing. What a fantastic group. From a K-12 educator to key researchers at the labs, we had every industry and interest represented. What a great start.
Here are the slides from that part of the presentation.
Once that was done, PADT’s Rey Chu gave a presentation covering the most important developments in Additive Manufacturing over the last year or so. He talked about the three new technologies that are making an impact, new materials, and what is happening business-wise. Check out his slides to learn more.
After a question and answer period we had some great conversations in small groups, which was the most valuable part.
If you want to learn more, please reach out to email@example.com and we will add you to the email list where we will plan and execute future activities. We are also looking for people to serve on the steering committee and locations for our next couple of meetings. Share this with as many people as you can in New Mexico so that the next event can be even better!
Denver: MSU Advanced Manufacturing & Engineering Sciences Building Opening
Meanwhile, in Denver it was raining. In spite of that, supporters of educating the next generation of manufacturers and engineers gathered for the opening of the Advanced Manufacturing and Engineering Sciences Building at Metropolitan State University. This 142,000 sq ft multi-disciplinary facility is located in the heart of downtown Denver and will house classes, labs, and local companies. PADT was there not only to celebrate the whole facility; we were especially excited about the new 3D Printing lab that is being funded by a $1 million gift from Lockheed Martin. A nice new Stratasys Fortus 900 is the centerpiece of the facility. It will be a while before the lab itself is done, so watch for an invite to the grand opening. While we wait, we are working with MSU, Lockheed Martin, Stratasys, and others to put together a plan for developing the curriculum for future classes and to make sure that the engineers needed for the expected explosion in the use of this technology are available.
Stratasys and PADT are proud to be partners of this fantastic effort along with many key companies in Colorado. If you want to learn more about how we can help you build partnerships between industry and academia, please reach out to firstname.lastname@example.org or give us a call.
Phoenix: 2017 Aerospace, Aviation, Defense + Manufacturing Conference
The 113°F high in Phoenix really didn’t stop anyone from coming to the AADM conference. This annual event was at ASU SkySong in Phoenix and is sponsored by the AZ Tech Council, AZ Commerce Authority, and RevAZ. PADT was proud to not only be a sponsor, but also have a booth, participate in the advanced manufacturing panel discussion, and give a short partner presentation about what we do for our Aerospace and Defense customers.
Here is Rob’s presentation on PADT.
We had great conversations at our booth with existing customers, partners, and a few people that were new to us. This is always one of the best events of the summer, and we look forward to next year.
If you want to know more about how PADT can help you in your Aerospace, Defense, and Manufacturing efforts, reach out and contact us.
Researchers and students at universities around the world are tackling difficult engineering and science problems, and they are turning to simulation more and more to get to understanding and solutions faster. Just like industry. And just like industry, they are finding that ANSYS provides the most comprehensive and powerful solution for simulation. The ANSYS suite of tools delivers breadth and depth along with ease of use for every level of expertise, from freshman to world-leading research professor. The problem in the past was that academia operates differently from industry, so getting to the right tools was difficult from a lot of perspectives.
Now, with the ANSYS Academic program, the barriers of price, licensing, and access are gone, and ANSYS tools can provide the same benefits to college campuses that they do to businesses around the world. And these are not stripped-down tools; all of the functionality is there.
Students – Free Downloads
Yes, free. Students can download ANSYS AIM Student or ANSYS Student under a twelve-month license. The only limitation is on problem size. To make it easy, you can go here and download the package you need. ANSYS AIM is a new user interface for structural, thermal, electromagnetic, and fluid flow simulation oriented toward the new or occasional user. ANSYS Student is a size-limited bundle of the full ANSYS Mechanical, ANSYS CFD, ANSYS Autodyn, ANSYS SpaceClaim, and ANSYS DesignXplorer packages.
You can learn more by downloading this PDF.
That is pretty much it. If you need ANSYS for a class or just to learn how to use the most common simulation package in industry, download it for free.
Academic Institutions – Discounted Packages
If you need access to full problem sizes or you want to use ANSYS products for your research, there are several Academic Packages that offer multiple seats of full products at discounted prices. These products are grouped by application:
- Structural-Fluid Dynamics Academic Products — Bundles that offer structural mechanics, explicit dynamics, fluid dynamics and thermal simulation capabilities. These bundles also include ANSYS Workbench, relevant CAD import tools, solid modeling and meshing, and High Performance Computing (HPC) capability.
- Electronics Academic Products — Bundles that offer high-frequency, signal integrity, RF, microwave, millimeter-wave device and other electronic engineering simulation capabilities. These bundles include products such as ANSYS HFSS, ANSYS Q3D Extractor, ANSYS SIwave, ANSYS Maxwell, and ANSYS Simplorer Advanced. The bundles also include HPC and import/connectivity to many common MCAD and ECAD tools.
- Embedded Software Academic Products — Bundles of our SCADE products that offer a model-based development environment for embedded software.
- Multiphysics Campus Solutions — Large task-count bundles of Research & Teaching products from all three of the above categories, intended for larger-scale deployment across a campus or multiple campuses.
You can see what capabilities are included in each package by downloading the product feature table. These are fully functional products with no limits on size. What is different is how you are authorized to use the tool. The Academic license restricts use to teaching and research. Because of this, ANSYS is able to provide academic product licenses at significantly reduced cost compared to the commercial licenses — which helps organizations around the globe to meet their academic budget requirements. Support is also included through online academic resources like training, as well as access to the ANSYS Customer Portal.
What does all this mean? It means that every engineer graduating from their school of choice can enter the workforce knowing how to use ANSYS products, something that employers value. It also means that researchers can produce more valuable information in less time for less money by leveraging the power of ANSYS simulation. The barriers are down; as students and institutions, you just need to take advantage of it.
Sometimes you want to take two parts and prepare them for meshing so that they either share a surface between them, or have identical but distinct surfaces on each part where they touch. In this simple How-To, we share the steps for creating both of these situations so you can get a continuous mesh or create a matching contact surface in ANSYS Mechanical.
By using the power of ANSYS SpaceClaim to quickly modify geometry, you can set up your surface models in ANSYS Mechanical to easily be connected. Take a look in this How-To slide deck to see how easy it is to extend geometry and intersect surfaces.
A support request from one of our customers recently was for the ability to make Thermal Contact Conductance, which is sort of a reciprocal of thermal resistance at the contact interface, a parameter so it can be varied in a parametric study. Unfortunately, this property of contact regions is not exposed as a parameter in the ANSYS Mechanical window like many other quantities are.
Fortunately, with ANSYS there is almost always a way. In this case we use the capability of an APDL (ANSYS Parametric Design Language) command object within ANSYS Mechanical. This allows us to access additional functionality that isn’t exposed in the Mechanical menus. This is a rare occurrence in recent versions of ANSYS, but I thought this was a good example to explain how it is done, including verifying that it works.
A key capability is that user-defined parameters within a command object have a ‘magic’ set of parameter names. These names are ARG1, ARG2, ARG3, etc. Eric Miller of PADT explained their use in a PADT Focus blog posting back in 2013.
In this application, we want to be able to vary the value of thermal contact conductance. A low value means less heat will flow across the boundary between parts, while a high value means more heat will flow. The default value is a calculated high value of conductance, meaning there is little to no resistance to heat flow across the contact boundary.
In order to make this work, we need to know how the thermal contact conductance is applied. In fact, it is a property of the contact elements. A quick look at the ANSYS Help for the CONTA174 or similar contact elements shows that the 14th field in the Real Constants is the defined value of TCC, the thermal contact conductance. Real Constants are properties of elements that may need to be defined or may be optional values that can be defined. Knowing that TCC is the 14th field in the real constant set, we can now build our APDL command object.
This is what the command object looks like, including some explanatory comments. Everything after a “!” is a comment:
! Command object to parameterize thermal contact conductance
! by Ted Harris, PADT, Inc., 3/31/2017
! Note: This is just an example. It is up to the user to create and verify
! the concept for their own application.
! From the ANSYS help, we can see that real constant TCC is the 14th real constant for
! the 17X contact elements. Therefore, we can define an APDL parameter with the desired
! TCC value and then assign that parameter to the 14th real constant value.
! We use ARG1 in the Details view for this command snippet to define and enable the
! parameter to be used for TCC.
r,cid ! tells ANSYS we are defining real constants for this contact pair
! any values left blank will not be overwritten from defaults or those
! assigned by Mechanical. R command is used for values 1-6 of the real constants
rmore,,,,,, ! values 7-12 for this real constant set
rmore,,arg1 ! This assigns the value of arg1 to the 14th field of the real constant set
! Now repeat for target side to cover symmetric contact case
r,tid ! tells ANSYS we are defining real constants for this contact pair
! any values left blank will not be overwritten from defaults or those
! assigned by Mechanical. R command is used for values 1-6 of the real constants
rmore,,,,,, ! values 7-12 for this real constant set
rmore,,arg1 ! This assigns the value of arg1 to the 14th field of the real constant set
You may have noticed the ‘cid’ and ‘tid’ labels in the command object. These identify the integer ‘pointers’ for the contact and target element types, respectively. They also identify the contact and target real constant set number and material property number. So how do we know what integer values ‘cid’ and ‘tid’ take on for a given contact region? That’s part of the beauty of the command object: you don’t know the values of the cid and tid variables, but you also don’t need to know them. ANSYS automatically plugs in the correct integer values for each contact pair simply because we put the magic ‘cid’ and ‘tid’ labels in the command snippet. The top of a command object within the contact branch will automatically contain these comments, which explain it:
! Commands inserted into this file will be executed just after the contact region definition.
! The type number for the contact type is equal to the parameter “cid”.
! The type number for the target type is equal to the parameter “tid”.
! The real and mat number for the asymmetric contact pair is equal to the parameter “cid”.
! The real and mat number for the symmetric contact pair(if it exists)
! is equal to the parameter “tid”.
Next, we need to know how to implement this in ANSYS Mechanical. We start with a model of a ball valve assembly, using some simple geometry from one of our training classes. The idea is that hot water passing through the valve is represented by a constant temperature of 125 F. There is a heat sink at the OD of the ends of the valve held at a constant 74 F. There is also some convection on most of the outer surfaces carrying some heat away.
The ball valve and the valve housing are separate parts and contact is used to allow heat to flow from the hotter ball valve into the cooler valve assembly:
Here is the command snippet associated with that contact region. The ‘magic’ is the ARG1 parameter which is given an initial value in the Details view, BEFORE the P box is checked to make it a parameter. Wherever we need to define the value of TCC in the command object, we use the ARG1 parameter name, as shown here:
Next, we verify that it actually works as expected. Here I have setup a table of design points, with increasing values of TCC (ARG1). The output parameter that is tracked is the minimum temperature on the inner surface of the valve housing, where it makes contact with the ball. If conductance is low, little heat should flow so the housing remains cool. If the conductance is high, more heat should flow into the housing making it hotter. After solving all the design points in the Workbench window, we see that indeed that’s what happens:
And here is a log scale plot showing temperature rise with increasing TCC:
So, excluding the comments, our command object is six lines long. With those six lines of text and knowledge of how to use the ARG1 parameter, we now have a thermal contact conductance that varies as a parameter. This is a simple case and you will certainly want to test and verify for your own use. Hopefully this helps explain the process and how it is done, including verification.
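The trend in the design-point study can also be hand-checked with a back-of-the-envelope series-resistance model. In the Python sketch below (a simplified stand-in for the FE model, with assumed contact area and housing conductance values), heat flows from the 125 F ball through the contact conductance and then through the housing to the 74 F sink; the housing-side interface temperature rises monotonically with TCC, just as the Workbench results show:

```python
T_hot, T_sink = 125.0, 74.0   # degF, matching the model's boundary conditions
A = 2.0                       # contact area (assumed value for illustration)
G_housing = 5.0               # housing-to-sink conductance (assumed value)

results = []
for tcc in [1e-3, 1e-1, 1e1, 1e3]:  # sweep TCC like the design-point table
    G_contact = tcc * A             # total contact conductance
    # Two conductances in series: solve for the interface temperature
    # on the housing side of the contact
    T_interface = (G_contact * T_hot + G_housing * T_sink) / (G_contact + G_housing)
    results.append(T_interface)
    print(f"TCC = {tcc:10.3f} -> housing-side interface T = {T_interface:6.2f} F")
```

At very low TCC the interface temperature sits near the 74 F sink (almost no heat crosses the contact); at very high TCC it approaches the 125 F source, bracketing the same behavior seen in the parametric study.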
WHY did nature evolve cellular structures?
In a previous post, I laid out a structural classification of cellular structures in nature, proposing that they fall into 6 categories. I argued that it is not always apparent to a designer what the best unit cell choice for a given application is. While most mechanical engineers have a feel for what structure to use for high stiffness or energy absorption, we cannot easily address multi-objective problems or apply these to complex geometries with spatially varying requirements (and therefore locally optimum cellular designs). However, nature is full of examples where cellular structures possess multi-objective functionality: bone is one such well-known example. To be able to assign structure to a specific function requires us to connect the two, and to do that, we must identify all the functions in play. In this post, I attempt to do just that and develop a classification of the functions of cellular structures.
Any discussion of structure in nature has to contend with a range of drivers and constraints that are typically not part of an engineer’s concern. In my discussions with biologists (including my biochemist wife), I quickly run into justified skepticism about whether generalized models associating structure and function can address the diversity and nuance in nature – and I (tend to) agree. However, my attempt here is not to be biologically accurate – it is merely to construct something that is useful and relevant enough for an engineer to use in design. But we must begin with a few caveats to ensure our assessments consider the correct biological context.
1. Uniquely Biological Considerations
Before I attempt to propose a structure-function model, there are some legitimate concerns many have made in the literature that I wish to recap in the context of cellular structures. Three of these in particular are relevant to this discussion and I list them below.
1.1 Design for Growth
Engineers are familiar with “design for manufacturing,” where design considers not just the final product but also aspects of its manufacturing, which often place constraints on said design. Nature’s “manufacturing” method involves, at the global level of structure, highly complex growth – these natural growth mechanisms have no parallel in most manufacturing processes. Take for example the flower stalk in Fig 1, which is from a Yucca tree that I found in a parking lot in Arizona.
At first glance, this looks like a good example of overlapping surfaces, one of the 6 categories of cellular structures I covered before. But when you pause for a moment and query the function of this packing of cells (WHY this shape, size, packing?), you realize there is a powerful growth motive for this design. A few weeks later when I returned to the parking lot, I found many of the Yucca stems simultaneously in various stages of bloom – and captured them in a collage shown in Fig 2. This is a staggering level of structural complexity, including integration with the environment (sunlight, temperature, pollinators) that is both wondrous and for an engineer, very humbling.
The lesson here is to recognize growth as a strong driver in every natural structure – the tricky part is determining when the design is constrained by growth as the primary force and when growth can be treated as incidental to achieving an optimum functional objective.
1.2 Multi-Functionality
Even setting aside the growth driver mentioned previously, structure in nature is often serving multiple functions at once – and this is true of cellular structures as well. Consider the tessellation of “scutes” on the alligator. If you were tasked with designing armor for a structure, you may be tempted to mimic the alligator skin as shown in Fig. 3.
As you begin to study the skin, you see it is comprised of multiple scutes that have varying shape, size and cross-sections – see Fig 4 for a close-up.
The pattern varies spatially, but you notice some trends: there exists a pattern on the top but it is different from the sides and the bottom (not pictured here). The only way to make sense of this variation is to ask what functions do these scutes serve? Luckily for us, biologists have given this a great deal of thought and it turns out there are several: bio-protection, thermoregulation, fluid loss mitigation and unrestricted mobility are some of the functions discussed in the literature [1, 2]. So whereas you were initially concerned only with protection (armor), the alligator seeks to accomplish much more – this means the designer either needs to de-confound the various functional aspects spatially and/or expand the search to other examples of natural armor to develop a common principle that emerges independent of multi-functionality specific to each species.
1.3 Sub-Optimal Design
This is an aspect for which I have not found an example in the field of cellular structures (yet), so I will borrow a well-known (and somewhat controversial) example to make this point: the giraffe’s Recurrent Laryngeal Nerve (RLN), which connects the Vagus Nerve to the larynx as shown in Figure 5 and, it is argued, takes an unnecessarily long, circuitous route to connect these two points.
We know that from a design standpoint this is sub-optimal, because we have an axiom that states the shortest distance between two points is a straight line. Therefore, the long detour the RLN makes in the giraffe’s neck must have some other evolutionary and/or developmental basis (fish do not have this detour). However, in the case of other entities such as the cellular structures we are focusing on, the complexity of the underlying design principles makes it hard to identify cases where nature has settled on a sub-optimal design for the function of interest to us, in favor of other pressing needs determined by selection. What is sufficient for the present moment is to appreciate that such cases may exist and to bear them in mind when studying structure in nature.
2. Classifying Functions
Given the above challenges, the engineer may well ask: why even consider natural form in making determinations involving the design of engineering structures? The biomimic responds by reminding us that nature has had 3.8 billion years to develop a “design guide” and we would be wise to learn from it. Importantly, natural and engineering structures both exist in the same environment and are subject to identical physics and further, are both often tasked with performing similar functions. In the context of cellular structures, we may thus ask: what are the functions of interest to engineers and designers that nature has addressed through cellular design? Through my reading [1-4], I have compiled the classification of functions in Figure 6, though this is likely to grow over time.
This broad classification into structural and transport may seem a little contrived, but it emerges from an analyst’s view of the world. There are two reasons why I propose this separation:
- Structural functions involve the spatial allocation of materials in the construction of the cellular structures, while transport functions involve the structure AND some other entity and their interactions (fluid or light for example) – thus additional physics needs to be comprehended for transport functions
- Secondly, structural performance needs to be comprehended independent of any transport function: a cellular structure must retain its integrity over the intended lifetime in addition to performing any additional function
Each of these functions is a fascinating case study in its own right and I highly recommend the site AskNature.org as a way to learn more on a specific application, but this is beyond the scope of the current post. More relevant to our high-level discussion is that having listed the various reasons WHY cellular structures are found in nature, the next question is: can we connect the structures described in the previous post to the functions tabulated above? This will be the attempt of my next post. Until then, as always, I welcome all inputs and comments, which you can send by messaging me on LinkedIn.
Thank you for reading!
1. Foy (1983), The Grand Design: Form and Colour in Animals, Prentice-Hall, 1st edition
2. Dawkins (2010), The Greatest Show on Earth: The Evidence for Evolution, Free Press, reprint of 1st edition
3. Gibson, Ashby, Harley (2010), Cellular Materials in Nature and Medicine, Cambridge University Press, 1st edition
4. Ashby, Evans, Fleck, Gibson, Hutchinson, Wadley (2000), Metal Foams: A Design Guide, Butterworth-Heinemann, 1st edition
Occasionally when solid geometry is imported from CAD into ANSYS SpaceClaim the geometry will come in as solids, but when a mesh is generated on the solids the mesh will appear to “leak” into the surrounding space. Below is an assembly that was imported from CAD into SpaceClaim. In the SpaceClaim Structure Window all of the parts can be seen to be solid components.
When the mesh is generated in ANSYS Mechanical it appears like the assembly has been successfully meshed.
However, when you look at the mesh a little closer, the mesh can be missing from some of the surfaces and not displayed correctly on others.
Additionally, if you create a cross-section through the mesh, the mesh on some of the parts will “leak” outside of the part boundaries and will look like the image below.
Based on the mesh color, the mesh of the part in the center of the assembly has grown outside of the surfaces of the part.
To repair the part you need to go back to SpaceClaim and rebuild it. First you need to hide the rest of the parts.
Next, create a sketch plane that passes through the problem part.
In the sketch mode create a rectangle that surrounds the part. When you return to 3D mode in SpaceClaim, that rectangle will become a surface that passes through the part.
Now use the Pull tool in SpaceClaim to turn that surface into a part that completely surrounds the part to be repaired, making sure to turn on the “No Merge” option for the pull before you begin.
After you have pulled the surface into a solid, it should look like the image below, where the original part is completely buried inside the new part.
Now you will use the Combine tool to divide the box with the original part. Select Combine from the Tool Bar, then select the box that you created in the previous step. The cutter will be activated and you will move the cursor around until the original part is highlighted inside the box. Select it with the left mouse button. The Combine tool will then give you the option to select the part of the box that you want to remove. Select the part that surrounds the original part. After it is finished, close the combine tool and the Structure Tree and 3D window will now look like the following:
Now move the new solid that was created with the Combine tool into the location of the original part and turn off the original one and re-activate the other parts of the assembly. The assembly and Structure Tree should now look like the pictures below.
Now save the project, re-open the meshing tool, and re-generate the mesh. The mesh should now be correct and not “leaking” beyond the part boundaries.
What types of cellular designs do we find in nature?
Cellular structures are an important area of research in Additive Manufacturing (AM), including work we are doing here at PADT. As I described in a previous blog post, the research landscape can be broadly classified into four categories: application, design, modeling and manufacturing. In the context of design, most of the work today is primarily driven by software tools that represent complex cellular structures efficiently, as well as analysis tools that enable optimization of these structures in response to environmental conditions and some desired objective. In most of these tools, the designer is given a choice of selecting a specific unit cell to construct the entity being designed. However, it is not always apparent what the best unit cell choice is, and this is where I think a biomimetic approach can add much value. As with most biomimetic approaches, the first step is to frame a question and observe nature as a student. And the first question I asked is the one described at the start of this post: what types of cellular designs do we find in the natural world around us? In this post, I summarize my findings.
In a previous post, I classified cellular structures into 4 categories. However, this only addressed “volumetric” structures where the objective of the cellular structure is to fill three-dimensional space. Since then, I have decided to frame things a bit differently based on my studies of cellular structures in nature and the mechanics around these structures. First is the need to allow for the discretization of surfaces as well: nature does this often (animal armor or the wings of a dragonfly, for example). Secondly, a simple but important distinction from a modeling standpoint is whether the cellular structure in question uses beam- or shell-type elements in its construction (or a combination of the two). This has led me to expand my 4 categories into 6, which I now present in Figure 1 below.
Setting aside the “why” of these structures for a future post, here I wish to only present these 6 strategies from a structural design standpoint.
- Volumetric – Beam: These are cellular structures that fill space predominantly with beam-like elements. Two sub-categories may be further defined:
- Honeycomb: Honeycombs are prismatic, 2-dimensional cellular designs extruded in the 3rd dimension, like the well-known hexagonal honeycomb shown in Fig 1. All cross-sections through the 3rd dimension are thus identical. Though the hexagonal design is the best known, the term applies to all designs that have this prismatic property, including square and triangular honeycombs.
- Lattice and Open Cell Foam: Freeing up the prismatic requirement on the honeycomb brings us to a fully 3-dimensional lattice or open-cell foam. Lattice designs tend to embody higher stiffness levels while open cell foams enable energy absorption, which is why these may be further separated, as I have argued before. Nature tends to employ both strategies at different levels. One example of a predominantly lattice based strategy is the Venus flower basket sea sponge shown in Fig 1, trabecular bone is another example.
- Volumetric – Shell:
- Closed Cell Foam: Closed cell foams are like open-cell foams, but with each cell enclosed by a membrane-like structure that may differ in thickness from the strut-like edges. Plant sections often reveal a closed cell foam, such as the Douglas fir wood structure shown in Fig 1.
- Periodic Surface: Periodic surfaces are fascinating mathematical structures that often have multiple orders of symmetry similar to crystalline groups (but on a macro-scale) that make them strong candidates for design of stiff engineering structures and for packing high surface areas in a given volume while promoting flow or exchange. In nature, these are less commonly observed, but seen for example in sea urchin skeletal plates.
- Tessellation: Tessellation describes covering a surface with non-overlapping cells (as we do with tiles on a floor). Examples of tessellation in nature include the armored shells of several animals including the extinct glyptodon shown in Fig 1 and the pineapple and turtle shell shown in Fig 2 below.
- Overlapping Surface: Overlapping surfaces are a variation on tessellation where the cells are allowed to overlap (as we do with tiles on a roof). The most obvious example of this in nature is scales – including those of the pangolin shown in Fig 1.
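Of the six categories, the periodic surfaces are the easiest to make mathematically concrete: they are commonly approximated by implicit trigonometric level-set equations. As an illustration (not tied to any specific design tool), here is a minimal Python sketch of the gyroid, a standard triply periodic minimal surface:

```python
import numpy as np

def gyroid(x, y, z):
    """Implicit gyroid equation: the level set gyroid(x, y, z) == 0
    approximates the gyroid, a triply periodic minimal surface.

    The sign of the function partitions space into two interpenetrating
    channel networks, which is why such surfaces pack high surface area
    into a given volume while still permitting flow or exchange.
    """
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

# The surface repeats with period 2*pi along each axis -- the macro-scale,
# crystal-like symmetry referred to above.
```

Sampling this function on a grid and extracting the zero level set (e.g., with a marching-cubes routine) yields a printable periodic surface structure.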
What about Function then?
This separation into 6 categories is driven from a designer’s and an analyst’s perspective – designers tend to think in volumes and surfaces and the analyst investigates how these are modeled (beam- and shell-elements are at the first level of classification used here). However, this is not sufficient since it ignores the function of the cellular design, which both designer and analyst need to also consider. In the case of tessellation on the skin of an alligator, for example, as shown in Fig 3, was it selected for protection, ease of motion, or for controlling temperature and fluid loss?
In a future post, I will attempt to develop an approach to classifying cellular structures that derives not from its structure or mechanics as I have here, but from its function, with the ultimate goal of attempting to reconcile the two approaches. This is not a trivial undertaking since it involves de-confounding multiple functional requirements, accounting for growth (nature’s “design for manufacturing”) and unwrapping what is often termed as “evolutionary baggage,” where the optimum solution may have been sidestepped by natural selection in favor of other, more pressing needs. Despite these challenges, I believe some first-order themes can be discerned that can in turn be of use to the designer in selecting a particular design strategy for a specific application.
This is by no means the first attempt at classifying cellular structures in nature. While the specific 6-part separation proposed in this post is my own, it combines ideas from a lot of previous work; three of the best sources, which I strongly recommend as further reading on this subject, are listed below.
- Gibson, Ashby, Harley (2010), Cellular Materials in Nature and Medicine, Cambridge University Press; 1st edition
- Naleway, Porter, McKittrick, Meyers (2015), Structural Design Elements in Biological Materials: Application to Bioinspiration. Advanced Materials, 27(37), 5455-5476
- Pearce (1980), Structure in Nature is a Strategy for Design, The MIT Press; Reprint edition
As always, I welcome all inputs and comments – if you have an example that does not fit into any of the 6 categories mentioned above, please let me know by messaging me on LinkedIn and I shall include it in the discussion with due credit. Thanks!
Considered the “largest gathering of chip, board, and systems designers in the country,” with over 5,000 attendees this year and over 150 technical presentations and workshops, DesignCon exhibits state of the art trends in high-speed communications and semiconductor communities.
Here are the top 5 trends I noticed while attending DesignCon 2017:
1. Higher data rates and power efficiency.
This is of course a continuing trend and the most obvious. Still, I like to see this trend alive and well because I think this gets a bit trickier every year. Aiming towards 400 Gbps solutions, many vendors and papers were demonstrating 56 Gbps and 112 Gbps channels, with no less than 19 sessions with 56 Gbps or more in the title. While IC manufacturers continue to develop low-power chips, connector manufacturers are offering more vented housings as well as integrated sinks to address thermal challenges.
2. More conductor-based signaling.
PAM4 was everywhere on the exhibition floor and there were 11 sessions with PAM4 in the title. Shielded twinaxial cable was the predominant conductor-based technology, with examples such as Samtec’s Twinax Flyover and Molex’s BiPass.
A touted feature of twinax is the ability to route over components and free up PCB real estate (but there is still concern for enclosing the cabling). My DesignCon 2017 session, titled Replacing High-Speed Bottlenecks with PCB Superhighways, would also fall into this category. Instead of using twinax, I explored the idea of using rectangular waveguides (along with coax feeds), which you can read more about here. I also offered a modular concept that reflects similar routing and real estate advantages.
3. Less optical-based signaling.
Don’t get me wrong, optical-based signaling is still a strong solution for high-speed channels. Many of the twinax solutions are being designed to be compatible with fiber connections and, as Teledyne put it in their QPHY-56G-PAM4 option release at DesignCon, Optical Internetworking Forum (OIF) and IEEE are both rapidly standardizing PAM4-based interfaces. Still, the focus from the vendors was on lower cost conductor-based solutions. So, I think the question of when a full optical transition will be necessary still stands.
With that in mind, this trend is relative to what I saw only a couple years back. At DesignCon 2015, it looked as if the path forward was going to be fully embracing optical-based signaling. This year, I saw only one session on fiber and, as far as I could tell, none on photonic devices. That’s compared to DesignCon 2015 with at least 5 sessions on fiber and photonics, as well as a keynote session on silicon photonics from Intel Fellow Dr. Mario Paniccia.
4. More Physics-based Simulations.
As margins continue to shrink, the demand for accurate simulation grows. Dr. Zoltan Cendes, founder of Ansoft, shared the difficulties of electromagnetic simulation over the past 40+ years and how Ansoft (now ANSYS) has improved accuracy, simplified the simulation process, and significantly reduced simulation time. To my personal delight, he also had a rectangular waveguide in his presentation (and I think we were the only two). Dr. Cendes sees high-speed electrical design at a transition point, where engineers have been or will ultimately need to place physics-based simulations at the forefront of the design process, or as he put it, “turning signal integrity simulation inside out.” A closer look at Dr. Cendes’ keynote presentation can be found in DesignNews.
5. More Detailed IC Models.
This may or may not be a trend yet, but improving IC models (including improved data sheet details) was a popular topic among presenters and attendees alike; so if nothing else it was a trend of camaraderie. There were 12 sessions with IBIS-AMI in the title. In truth, I don’t typically attend these sessions, but since behavioral models (such as IBIS-AMI) impact everyone at DesignCon, this topic came up in several sessions that I did attend even though they weren’t focused on this topic. Perhaps with continued development of simulation solutions like ANSYS’ Chip-Package-System, Dr. Cendes’ prediction will one day make a comprehensive physics-based design (to include IC models) a practical reality. Until then, I would like to share an interesting quote from George E. P. Box that was restated in one of the sessions: “Essentially all models are wrong, but some are useful.” I think this is good advice that I use for clarity in the moment and excitement for the future.
By the way, the visual notes shown above were created by Kelly Kingman from kingmanink.com on the spot during presentations. As an engineer, I was blown away by this. I have a tendency to obsess over details but she somehow captured all of the critical points on the fly with great graphics that clearly relay the message. Amazing!
How To Update The Firmware Of An Intel® Solid-State Drive DC P3600 in four easy steps!
The Dr. says to keep that firmware fresh! So in this how-to blog post I show you how to verify and/or update the firmware on a 1.2TB Intel® Solid-State Drive DC P3600 Series NVMe MLC card.
CUBE Workstation Specifications – The Tester
PADT, Inc. – CUBE w32i Numerical Simulation Workstation
- 2 x 16c @2.6GHz/ea. (INTEL XEON e5-2697A V4 CPU), 40M Cache, 9.6GT, 145 Watt/each
- Dual Socket Super Micro X10DAi motherboard
- 8 x 32GB DDR4-2400MHz ECC REG DIMM
- 1 x NVIDIA QUADRO M2000 – 4GB GDDR5
- 1 x Intel® DC P3600 1.2TB, NVMe PCIe 3.0, MLC AIC 20nm
- Windows 7 Ultimate Edition 64-bit
Step 1: Prepping
Check for and download the latest versions of the software for the Intel® SSD DC P3600 Series here: https://downloadcenter.intel.com/product/81000/Intel-SSD-DC-P3600-Series
You will need the latest versions of the:
- Intel® Solid State Drive Toolbox
- Intel® SSD Data Center Tool
- Intel® SSD Data Center Family for NVMe Drivers
Step 2: Installation
After installing the Intel® Solid State Drive Toolbox and the Intel® SSD Data Center Tool, reboot the workstation and move on to the next step.
Step 3: Trust But Verify
Check the status of the 1.2TB NVMe card by running the Intel® SSD Data Center Tool. I am using Windows 7 Ultimate 64-bit as the operating system, so I run the Data Center Tool from an elevated command line prompt.
Right-Click –> Run As…Administrator
Command Line Text: isdct show -intelssd
As the image below indicates, the firmware for this 1.2TB NVMe card is happy and up to date! Yay!
If you have more than one SSD take note of the Drive Number.
- Pro Tip – In this example the Intel® DC P3600 is drive number zero. You can gather this information from the output syntax: Index : 0
Below is what the command line output text looks like while the firmware process is running.
C:\isdct>isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): y
Updating firmware…
The selected Intel SSD contains current firmware as of this tool release.

isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): n
Canceled.

isdct.exe load -f -intelssd 0
Updating firmware…
The selected Intel SSD contains current firmware as of this tool release.

isdct.exe load -intelssd 0
WARNING! You have selected to update the drives firmware! Proceed with the update? (Y|N): y
Updating firmware…
Firmware update successful.
Step 4: Reboot Workstation
The firmware update process has been completed.
ANSYS Mechanical is great at applying tabular loads that vary with a single independent variable, say time or Z. But what if you want a tabular load that varies in multiple directions and time? You can use the External Data tool to do just that. You can also create a table with a single variable and modify it in the Command Editor.
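To make the multi-variable idea concrete, here is a small Python sketch of how a load table that varies in both Z and time is evaluated between its defined points by bilinear interpolation. The table values and variable names are invented for illustration and are not taken from the presentation:

```python
import numpy as np

# Hypothetical load table for illustration only (values invented):
# rows correspond to Z locations, columns to time points.
z_pts = np.array([0.0, 0.5, 1.0])    # Z coordinate
t_pts = np.array([0.0, 1.0, 2.0])    # time
# pressure[i, j] is the load at z_pts[i] and t_pts[j]
pressure = np.array([[0.0, 100.0, 200.0],
                     [0.0, 150.0, 300.0],
                     [0.0, 200.0, 400.0]])

def pressure_at(z, t):
    """Bilinear interpolation in (Z, time) -- conceptually how a solver
    evaluates a two-variable load table between its defined points."""
    # First interpolate each Z row in time, then interpolate across Z.
    at_t = np.array([np.interp(t, t_pts, row) for row in pressure])
    return float(np.interp(z, z_pts, at_t))
```

For example, halfway between the Z rows the interpolated load is the average of the neighboring rows at that time.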
In the presentation below, I show how to do all of this in a step-by-step description.
You can also download the presentation here.
With each release of ANSYS the customization toolkit continues to evolve and grow. Recently I developed what I would categorize as a decent sized ACT extension. My purpose in this post is to highlight a few of the techniques and best practices that I learned along the way.
Why I Chose C#
Most ACT extensions are written in Python. Python is a wonderfully useful language for quickly prototyping and building applications of all shapes and sizes. Its weaker type system, plethora of libraries, large ecosystem and native support directly within the ACT console make it a natural choice for most ACT work. So, why choose to move to C#?
The primary reasons I chose to use C# instead of Python for my ACT work were the following:
- I prefer the slightly stronger type safety afforded by the more strongly typed language. Having a definitive compilation step forces me to show my code first to a compiler. Only if and when the compiler can generate an assembly for my source do I get to move to the next step of trying to run/debug. Bugs caught at compile time are the cheapest and generally easiest bugs to fix. And, by definition, they are the most likely to be fixed. (You’re stuck until you do…)
- The C# development experience is deeply integrated into the Visual Studio developer tool. This affords not only a great editor in which to write the code, but more importantly perhaps the world’s best debugger to figure out when and how things went wrong. While it is possible to both edit and debug python code in Visual Studio, the C# experience is vastly superior.
The Cost of Doing ACT Business in C#
Unfortunately, writing an ACT extension in C# does incur some development cost in terms of setting up the development environment to support the work. When writing an extension solely in Python you really only need a decent text editor. Once you set up your ACT extension according to the documented directory structure protocol, you can just edit the python script files directly within that directory structure. If you recall, ACT requires an XML file to define the extension and then a directory with the same name that contains all of the assets defining the extension like scripts, images, etc… This “defines” the extension.
When it comes to laying out the requisite ACT extension directory structure on disk, C# complicates things a bit. As mentioned earlier, C# involves a compilation step that produces a DLL. This DLL must then somehow be loaded into Mechanical to be used within the extension. To complicate things a little further, Visual Studio uses a predefined project directory structure that places the build products (DLLs, etc…) within specific directories of the project depending on what type of build you are performing. Therefore the compiled DLL may end up in any number of different directories depending on how you decide to build the project. Finally, I have found that the debugging experience within Visual Studio is best served by leaving the DLL located precisely wherever Visual Studio created it.
Here is a summary list of the requirements/problems I encountered when building an ACT extension using C#
- I need to somehow load the produced DLL into Mechanical so my extension can use it.
- The DLL that is produced during compilation may end up in any number of different directories on disk.
- An ACT extension must conform to a predefined structural layout on the filesystem. This layout does not map cleanly to the Visual Studio project layout.
- The debugging experience in Visual Studio is best served by leaving the produced DLL exactly where Visual Studio left it.
The solution that I came up with to solve these problems was twofold.
First, the issue of loading the proper DLL into Mechanical was solved by using a combination of environment variables on my development machine in conjunction with some Python programming within the ACT main python script. Yes, even though the bulk of the extension is written in C#, there is still a python script to sort of boot-load the extension into Mechanical. More on that below.
Second, I decided to completely rebuild the ACT extension directory structure on my local filesystem every time I built the project in C#. To accomplish this, I created in Visual Studio what are known as post-build events, which allow you to specify an action to occur automatically after the project is successfully built. This action can be quite generic. In my case, the “action” was to locally run a python script and provide it with a few arguments on the command line. More on that below.
Loading the Proper DLL into Mechanical
As I mentioned above, even an ACT extension written in C# requires a bit of Python code to bootstrap it into Mechanical. It is within this bit of Python that I chose to tackle the problem of deciding which dll to actually load. The code I came up with looks like the following:
Essentially what I am doing above is querying for the presence of a particular environment variable that is on my machine. (The assumption is that it wouldn’t randomly show up on an end user’s machine…) If that variable is found and its value is 1, then I determine whether or not to load a debug or release version of the DLL depending on the type of build. I use two additional environment variables to specify where the debug and release directories for my Visual Studio project exist. Finally, if I determine that I’m running on a user’s machine, I simply look for the DLL in the proper location within the extension directory. Setting up my python script in this way enables me to forget about having to edit it once I’m ready to share my extension with someone else. It just works.
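The original screenshot of the boot-strap script is not reproduced here, but a minimal sketch of that logic might look like the following. The environment-variable names, DLL name, and directory layout are hypothetical placeholders, not the actual names from my extension:

```python
import os

# Hypothetical environment-variable names -- present only on the dev machine.
DEV_FLAG = "MY_ACT_DEV"                 # set to "1" on the developer machine
DEBUG_DIR_VAR = "MY_ACT_DEBUG_DIR"      # Visual Studio Debug output directory
RELEASE_DIR_VAR = "MY_ACT_RELEASE_DIR"  # Visual Studio Release output directory

def resolve_dll_path(extension_dir, build_type="Debug", dll_name="MyExtension.dll"):
    """Return the path of the C# DLL the extension should load."""
    if os.environ.get(DEV_FLAG) == "1":
        # Developer machine: load the DLL straight out of the Visual Studio
        # build directory, so debugging works where the DLL was built.
        var = DEBUG_DIR_VAR if build_type == "Debug" else RELEASE_DIR_VAR
        return os.path.join(os.environ.get(var, ""), dll_name)
    # End-user machine: the DLL lives inside the extension directory itself.
    return os.path.join(extension_dir, "bin", dll_name)
```

Because the environment variables only exist on the development machine, the same script works unchanged when the extension is shared.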
Rebuilding the ACT Extension Directory Structure
The final piece of the puzzle involves rebuilding the ACT extension directory structure upon the completion of a successful build. I do this for a few different reasons.
- I always want to have a pristine copy of my extension laid out on disk in a manner that could be easily shared with others.
- I like to store all of the various extension assets, like images, XML files, python files, etc… within the Visual Studio Project. In this way, I can force the project to be out of date and in need of a rebuild if any of these files change. I find this particularly useful for working with the XML definition file for the extension.
- Having all of these files within the Visual Studio project makes tracking things within a version control system like SVN or Git much easier.
As I mentioned before, to accomplish this task I use a combination of local python scripting and post build events in Visual Studio. I won’t show the entire python code, but essentially what it does is programmatically work through my local file system where the C# code is built and extract all of the files needed to form the ACT extension. It then deletes any old extension files that might exist from a previous build and lays down a completely new ACT extension directory structure in the specified location. The definition of the post build event is specified within the project settings in Visual Studio as follows:
As you can see, all I do is call out to the system python interpreter and pass it a script with some arguments. Visual Studio provides a great number of predefined variables that you can use to build up the command line for your script. So, for example, I pass in a string that specifies what type of build I am currently performing, either “Debug” or “Release”. Other strings are passed in to represent directories, etc…
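For illustration, here is a hedged sketch of what such a post-build script might look like; the file names, arguments, and extension layout are hypothetical stand-ins, and the real script handles more assets. The post-build event would invoke it with Visual Studio macros such as $(ConfigurationName) and $(ProjectDir) filling in the arguments:

```python
import os
import shutil
import sys

def rebuild_extension(build_type, project_dir, output_dir, ext_name="MyExtension"):
    """Delete any previous copy of the extension and lay down a fresh ACT
    layout: <output_dir>/<ext_name>.xml plus a directory of the same name
    holding the scripts and the compiled DLL."""
    ext_dir = os.path.join(output_dir, ext_name)
    if os.path.isdir(ext_dir):
        shutil.rmtree(ext_dir)          # start from a pristine copy
    os.makedirs(os.path.join(ext_dir, "bin"))
    # The XML definition file sits next to the directory of the same name.
    shutil.copy(os.path.join(project_dir, ext_name + ".xml"), output_dir)
    # Assets stored in the Visual Studio project: boot-strap script, etc.
    shutil.copy(os.path.join(project_dir, "main.py"), ext_dir)
    # The DLL comes from whichever build directory was just used.
    dll = os.path.join(project_dir, "bin", build_type, ext_name + ".dll")
    shutil.copy(dll, os.path.join(ext_dir, "bin"))

if __name__ == "__main__" and len(sys.argv) == 4:
    # e.g. python rebuild_extension.py Debug <project dir> <extensions dir>
    rebuild_extension(sys.argv[1], sys.argv[2], sys.argv[3])
```

Deleting and recreating the extension directory on every build guarantees the on-disk copy always matches what was just compiled.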
The Synergies of Using Both Approaches
Finally, I will conclude with a note on the synergies you can achieve by using both of the approaches mentioned above. One of the final enhancements I made to my post build script was to allow it to “edit” some of the text based assets that are used to define the ACT extension. A text based asset is something like an XML file or python script. What I came to realize is that certain aspects of the XML file that define the extension need to be different depending upon whether or not I wish to debug the extension locally or release the extension for an end user to consume. Since I didn’t want to have to remember to make those modifications before I “released” the extension for someone else to use, I decided to encode those modifications into my post build script. If the post build script was run after a “debug” build, I coded it to configure the extension for optimal debugging on my local machine. However, if I built a “release” version of the extension, the post build script would slightly alter the XML definition file and the main python file to make it more suitable for running on an end user machine. By automating it in this way, I could easily build for either scenario and confidently know that the resulting extension would be optimally configured for the particular end use.
Now that I have some experience in writing ACT extensions in C# I must honestly say that I prefer it over Python. Much of the “extra plumbing” that one must invest in to get a C# extension up and running can be automated using the techniques described within this post. After the requisite automation is set up, the development process is really straightforward. From that point onward, the increased debugging fidelity, added type safety and the familiarity of a C-based language make the development experience that much better! Also, there are some cool things you can do in C# that I’m not 100% sure you can accomplish in Python alone. More on that in later posts!
If you have ideas for an ACT extension to better serve your business needs and would like to speak with someone who has developed some extensions, please drop us a line. We’d be happy to help out however we can!