Twice in the last week we’ve taken tech support calls in which the users questioned why their stress or strain results were being reported differently in Workbench Mechanical vs. the results from the same results file in /POST1 with ANSYS Mechanical APDL. After answering those questions it was pretty obvious that a Focus blog entry was in order. All we needed was a good, relevant example to demonstrate the issues and the explanation.
First glimpse of approaching dust storm. All photos by the author.
In case you missed it, the big story here in the Phoenix area this week was our monster haboob, or dust storm. If you’re not familiar with the term haboob, Wikipedia explains it here: http://en.wikipedia.org/wiki/Haboob. In order to have a humongous dust storm, you’ve got to have wind.
About 3 minutes after this picture was taken the dust storm arrived.
Wind tends to cause damage, but although our recent dust storm is estimated to have been 100 miles wide and up to 10,000 ft. high, we fortunately did not sustain much significant wind damage. Things that do tend to get mangled, however, are deployed patio umbrellas and portable expanding sunshades. These sunshades typically retract into a compact size and fit in a zip-up carrying case. Many of us have collections of damaged sunshades that still work via creative application of duct tape, wire, etc. These inexpensive shades work great for keeping the sun off of us during birthday parties or other outdoor gatherings, but high winds tend to cause the metal members to bend and break, leaving the shades in need of some field-engineering repair if not a one-way trip to the dumpster.
Here is a solid geometry representation of a typical portion of the frame of a representative shade. It consists of two rectangular hollow members, pinned to each other at the center, with pins at each end that in the full structure would be attached to additional components.
For simplicity, we fixed the pins on the right side to ground, while those on the left side have an upward bearing load applied to the upper pin and a downward bearing load applied to the lower pin. These loads tend to cause the members to bend at the central pin. The bearing loads in our example represent the effect of a strong gust of wind hitting the fabric canopy above the frame, with the load eventually reacting through the frame to stakes that attach the frame to the ground at the bottom. The main thing to note here is that the applied load is large enough to cause significant plastic deformation, not unlike what one might experience in the real world when one of these structures is subjected to a very strong wind.
Workbench Mechanical, Coarse Mesh, Peak von Mises Stress is 79,219 psi
Same Results File in Mechanical APDL /POST1, Peak von Mises Stress is 83,873 psi
The issue here is that for our initial run with a very coarse mesh, when we view the von Mises stress results in the Mechanical editor and then compare them with the von Mises stress results obtained from the same results file in /POST1 in Mechanical APDL, we notice a difference (79.2 ksi vs. 83.9 ksi). Why is that? It has to do with how the stresses are calculated. First let's consider Mechanical APDL and /POST1. The original graphics display system is known as Full graphics. Fifteen or twenty years ago ANSYS, Inc. developed a newer graphics display system for MAPDL known as PowerGraphics. There are several differences between these two display systems which affect results quantities.
ANSYS Mechanical APDL uses PowerGraphics by default, which among other things displays only the exterior surface of the model and averages nodal results using only the elements on that exterior (and does not average across material or geometry discontinuities). Full graphics, on the other hand, averages results at each node using all of the elements that share the node, including interior ones. Since the two schemes average over different sets of elements, they can report different peak values from the same results file.
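To make the averaging difference concrete, here is a toy sketch (not ANSYS code; the element-nodal stress values are invented, and the "exterior only" split is a deliberate simplification of what PowerGraphics actually does):

```python
# At a shared node, each attached element contributes its own element-nodal
# stress; the reported nodal value is the average over whichever elements
# the display scheme includes in the calculation.
def nodal_average(elem_nodal_stresses):
    """Average the element-nodal stress values contributed at one node."""
    return sum(elem_nodal_stresses) / len(elem_nodal_stresses)

all_attached = [83000.0, 81000.0, 78000.0, 75000.0]  # all 4 elements at the node (psi, invented)
exterior_only = all_attached[:2]                     # suppose only 2 lie on the exterior face

print(nodal_average(all_attached))   # 79250.0 -- Full-graphics-style average
print(nodal_average(exterior_only))  # 82000.0 -- PowerGraphics-style average
```

Same results file, same node, different averaged value — which is exactly the kind of gap that shrinks as the mesh is refined.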
A similar effect is seen with the von Mises plastic strain results:
Regarding the mesh densities used, the coarse mesh had an element size of at least 0.05 in. on the member hole at the high stress/strain location, while the fine mesh had an element size of 0.025 in. on the same hole. Another way to look at the mesh refinement is that the coarse model had 20 elements on the hole of interest while the fine mesh had 104 elements on the same hole. Clearly the coarse mesh in this example was far too coarse for engineering purposes, but it was selected for this article to ensure the effect of the different results calculation methods was significant and observable.
So, the bottom line here is that if you see unacceptable differences in stress or strain results using different results calculation methods, it likely means that your mesh, at least in the area of interest, is too coarse. Try adding mesh refinement and check the results again. In Mechanical, you can try adding a Convergence item to a scoped result plot to at least partially automate this process. Just be careful that you don’t include any singularities in your desired convergence region.
If you were expecting a reference to the Kansas song, "Dust in the Wind," well, I guess this is it. Fortunately we don't seem to have many lingering effects of the big dust storm. The parking lot here at PADT has a thin layer of dirt that's gradually disappearing. Once we get a good rain it will all get washed away.
I first started contemplating this article several weeks ago, and I was planning on somehow working in a Jersey Shore reference. But now that I’ve relocated to Colorado and am recovering from a climbing trip that was a bit above my ability, my creative juices are a little low (they were used up trying to improvise my way up an overhanging roof pitch).
Anyway, User Defined Results were first introduced in R12 to grant the user access to element table items.
When you insert a user-defined result, you are required to fill in the ‘expression’ line in the details window.
There’s a much easier way to auto-populate the ‘expression’ line…the Worksheet view! To access this, first click on the ‘Solution’ branch and then select ‘Worksheet’ (tab in R12, button in R12.1 and newer). This will show you a list of all the user defined result expressions. Find one you like, right-mouse-click on it and select ‘Create User Defined Result’.
This will insert a 'User Defined Result'; the only work you need to do is scope it to a body (if necessary). The only 'tricky' part of this process is that you need to solve the model before using the Worksheet view. This is because, before the model has been solved, Mechanical doesn't know what is in the result file. So if the Worksheet view is blank or grayed out, it's because you haven't solved the model.
So what’s the benefit of using the User Defined Results? Say you wanted to look at total strain, kinetic energy, or reaction force contour plots…just to name a few. In order to view any of those, you would either have to open the .rst file in MAPDL or use the User Defined Result.
| ENFO | Element Nodal Reaction Forces |
| NDIR | Nodal Orientation Values |
In order to access the average stress value of the element (not node), we need to use the User Defined Results. We’ll ask to evaluate the expression seqv (von Mises) and set the integration option of using the elemental mean.
Now when we export that result item we get what we want…element vs stress:
Now we just need to export both vectors (volume and stress), then copy/paste/sort/sum and you’re done. Don’t forget the most important step…billing for 4 hours of post-processing work.
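The copy/paste/sort/sum step amounts to computing a volume-weighted average. A minimal sketch, assuming the two exported vectors have been pasted into lists (the numbers are invented):

```python
def volume_weighted_mean(stresses, volumes):
    """Sum(stress_i * vol_i) / Sum(vol_i) over the exported elements."""
    assert len(stresses) == len(volumes)
    return sum(s * v for s, v in zip(stresses, volumes)) / sum(volumes)

elem_stress = [100.0, 200.0, 300.0]  # exported element-mean seqv values (invented)
elem_volume = [1.0, 1.0, 2.0]        # exported element volumes (invented)

print(volume_weighted_mean(elem_stress, elem_volume))  # 225.0
```

Doing it in a short script also means the 4 hours of post-processing billing becomes pure profit.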
Long story short, all you MAPDL users who have been complaining about not being able to access element tables should take a look at User Defined Results.
· ANSYS Release 13.0 must already be installed on your machine
· Time needed to complete update: 15-30 minutes
Begin ANSYS 13.0 SP2 update procedure
1. Login and download the appropriate Linux version update from the ANSYS Customer Portal.
2. Save ANSYS130SP2_LINX54.tar to your Desktop.
3. Extract the files to your Desktop.
4. Open a Terminal window session and change directory to the extracted folder on your Desktop. In my example the folder is ANSYS130SP2_LINX54.tar_FILES.
5. After you have verified that you are in this folder, type ./INSTALL
7. Review the ANSYS Software License agreement and click I AGREE to continue.
8. Verify your installation directory and click Next.
You must install ANSYS RELEASE 13.0 SP2 into the same location as your original ANSYS RELEASE 13.0 installation.
9. Select the components you would like to install. By default, the installation GUI will have the currently installed products selected.
You can choose to install fewer products with the update, but you cannot select products to add with the update package. To add products you would need to install from the main ANSYS RELEASE 13.0 media and then install SP2 on top of that installation.
Please note the amount of disk space required for the update: approximately 7.2 GB. You will need to make sure that your Linux machine has at least 7.3 GB free and available.
10. Verification screens – Dates
11.–13. Installation screens – Various screens will scroll through as the installation manager extracts the package files (the screenshots show Extraction: Package 4, 5, and 14 of 16).
14. Completed update – Click Next to begin the ANSYS 13.0 Licensing Client Installation update
15. ANSYS 13.0 Licensing Client Installation – Begin verification
16. ANSYS 13.0 Licensing Client Installation – Configuration log and successful completion of ANSYS 13.0 Licensing Client update.
17. Click Finish to end the update installation script routine.
As indicated in the IMPORTANT note below, please run the ANSYS, Inc. License Manager SP2 update after completing this procedure. This download is also available through the ANSYS Customer Portal.
Recently, PADT has conducted some parallel benchmarks with our Linux cluster. The model used for the tests is an ANSYS FLUENT *.cas file with 26.5 million cells and 6.5 million nodes. The physics in this model are fairly simple; it models external, steady airflow over an object. Simulations of 10 iterations were conducted using as many as 144 processors over three 48-core systems (the machine names are "cs0", "cs1", and "cube48", and they are summarized in Table 1). The first two machines in Table 1 (cs0 and cs1) are connected together via Infiniband, so they effectively form a 96-core machine. Furthermore, the cs0 and cs1 machines are connected to cube48 via GigE network ports. Each of the machines has two GigE network ports which connect to a Gigabit switch to allow for trunking.
Table 1 – Benchmark machine specifications
2.3 GHz AMD Opteron 6176SE
2.3 GHz AMD Opteron 6176SE
2.2 GHz AMD Opteron 6176SE
Figure 1 – Speedup vs. Processors with 6.5 million node ANSYS FLUENT model
The first result in Figure 1 (blue curve) was obtained using as many as 48 processors on the cs0 machine before installing Service Pack 2 for FLUENT version 13. As illustrated in Figure 1, the speedup values of the blue result fall further below linear as the number of processors increases. The next result (green curve with triangles) was calculated using Service Pack 2 for ANSYS FLUENT version 13 on the cube48 machine using as many as 48 processors. This result on cube48 corresponds almost perfectly with linear speedup, and even displays some values which are "super-linear". This behavior led us to suspect that the improvements in Service Pack 2 for ANSYS FLUENT version 13 were the primary cause of the increase in performance demonstrated by the cube48 speedup result.
However, further testing of the cs0 and cs1 machines with Service Pack 2 for ANSYS FLUENT seems to suggest otherwise. These data are represented by the red squares in Figure 1. Runs were conducted on as many as 144 processors, which involved a distributed run using 48 processors on cs0, 48 processors on cs1, and 48 processors on cube48. The general trend of this result is the same as that recorded on cs0 before installing Service Pack 2 for ANSYS FLUENT version 13, suggesting that some other effect (likely machine-related) is present. Our current hypothesis is that the socket 2 processor (which handles the Infiniband UIO card hardware) is causing the slowdown, primarily because the Infiniband switch is present only on the cs0 + cs1 machine, and not on the cube48 machine. This was suggested by the manufacturer of the Infiniband switch (SuperMicro). Testing to assess this problem is ongoing.
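For reference, the speedup and parallel-efficiency values in a plot like Figure 1 are simple ratios of wall times. A quick sketch (the timings below are hypothetical, not the benchmark data):

```python
def speedup(t_serial, t_parallel):
    """Classic speedup: serial wall time divided by parallel wall time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Fraction of ideal (linear) speedup actually achieved."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings for a 10-iteration run:
t1, t48 = 1000.0, 25.0           # seconds on 1 and 48 processors (invented)
print(speedup(t1, t48))          # 40.0 -> below the linear value of 48
print(efficiency(t1, t48, 48))   # ~0.83
```

"Super-linear" simply means the measured speedup exceeds n_procs (efficiency above 1.0), usually thanks to cache effects as the per-processor working set shrinks.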
We just finished up a tech support call for a customer that wanted a way to get accurate reaction loads from a PSD run in ANSYS Mechanical. Alex Grishin took the call and provided a nice example to the customer, so we thought we would leverage that and share it with all of you. Even if you are not in need of this particular function, it is a great example of using snippets. If you are not familiar with this, check out our recent webinar on the subject.
The reason you have to do this is that an accurate PSD force calculation is not a simple thing. The math is a bit complicated, because PSD responses are probabilities of results that lose their sign. Right now this calculation is only available in Mechanical APDL (MAPDL). This is not a problem, because we can use an APDL command object to get the results from MAPDL and bring them back to ANSYS Mechanical.
Three Simple Steps
There are three very simple steps needed to get this done:
- Identify the geometry you want the reaction loads calculated on
Do this by selecting a face, edge, or corner and create a named component. You will use that named component to grab the nodes that sit on the piece of geometry and do an FSUM in MAPDL. In our example, we call the named selection react_area1.
- Tell the solver to store the required modal information
Since ANSYS Mechanical doesn't do reaction force calculations for PSD runs, it saves disk space by not storing the info needed for such calculations, but we need it. So add a command object in your modal analysis environment that says save all my results (outres) and expand all my modes (mxpand):
- Calculate the reaction force
Now we simply need to add a command object to the post processing branch that:
- gets the PSD deflection results (set,3)
- selects the named selection (cmsel), which is a nodal component in MAPDL,
- selects the elements attached to those nodes (esln)
- calculates the reaction load (fsum)
- stores the results in parameters that we return to ANSYS Mechanical. (*get,my_). Remember that anytime you create a MAPDL parameter in the post processor that starts with my_ it gets returned to ANSYS Mechanical. (well, that is the default, you can change the prefix)
- select everything so that MAPDL can keep post processing like normal
For our example, it looks like this:
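Putting those steps together, the two command objects might look something like the sketch below (the component name react_area1 comes from the example above; the my_fx/my_fy/my_fz parameter names are illustrative):

```
! --- Modal environment command object: store everything we'll need ---
outres,all,all        ! save all results for all substeps
mxpand,,,,yes         ! expand all modes and calculate element results

! --- Post-processing command object: sum reactions on the named selection ---
set,3                 ! read the PSD (1-sigma) displacement result set
cmsel,s,react_area1   ! select the nodal component made from the named selection
esln,s                ! select the elements attached to those nodes
fsum                  ! sum the nodal forces over the selected set
*get,my_fx,fsum,,item,fx   ! store reactions in my_ parameters so they
*get,my_fy,fsum,,item,fy   ! get returned to ANSYS Mechanical
*get,my_fz,fsum,,item,fz
allsel,all            ! reselect everything so normal post processing continues
```

This is a sketch, not a drop-in script — check it against the downloadable example before using it on your own model.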
The following figure shows the model tree for our example, and the returned parameters:
Nothing fancy, simple in fact: Make a component, store the required info in the result files, do an FSUM and bring back the results.
You can download the example here.
That was a short article! And no exciting pictures. So… if you want to you could check out the travels of The PADT Hat around the world.
Over the years I have learned to do more with less. When it comes to the information systems world, you all know the equation is often much more with much less. One of my to-do's over the years, umm, continued to get bumped down the priorities list: the juggling act of making sure that the data center has enough cooling vs. power vs., yes, here in AZ, cooling again. If you are an IT professional or even an engineer, you really don't have time to try to convince someone, anyone, that you need to spend more money, even if you use the effective philosophy of Time, Money and Quality. After dealing with "wish-ware" software vendors this past year, I added a fourth dimension to the above philosophy. It's called Functionality; here is what Merriam-Webster has to say about functionality: "The quality or state of being functional; especially: the set of functions or capabilities associated with computer software or hardware or an electronic device." http://www.merriam-webster.com/dictionary/functionality?show=0&t=1307146105
- Time – Try searching the internet for terms such as data center cooling calculator, data center cooling costs, or how can I save money with our data center cooling? You will suddenly have millions of search results at your fingertips, from video blogs on data center cooling to white papers on optimizing IT strategy for data center cooling. It is endless: 3 million-plus hits on just the one search term data center cooling calculator. Wow. Start researching, my young padawan learner; fill out those lead-generating white paper forms. Keep it simple…
- Money – I would prefer that we used our dollars on buying a server with a couple of Intel Xeon E7-8870 processors. Or how about a quad-based AMD FX-8130P processor server! I do not have any budget for what I am sure would be a fabulous data center cost-benefit crisis analysis.
- Quality – Will I even be able to understand what the end-result white paper says? Will I look like even more of an idiot? This needs to be accurate information. I will have to do it myself or use a third-party data center analysis.
- Functionality – Will our current air conditioners hold up this year? What about when we hit 120 degrees? Oh my do I need to add more cooling power?
Wikipedia on British Thermal Unit (BTU) – http://en.wikipedia.org/wiki/British_thermal_unit
So why do I care about a BTU? Because approximately one "ton of cooling", which is a typical way people talk about cooling devices in the USA, is 12,000 BTU/h. This is the amount of power needed to melt one short ton of ice in 24 hours. Locked away in a climate-controlled vault is one of my data centers. "Said vault may or may not contain the following items on any given day." After all, this is a mobile compute server world these days.
- 13 Servers
- 174 Cores (Mix of Windows/Linux servers)
- 2 – Network Routers
- 3 – Network Switches
- Phone System
- Voice Mail System
- 1 LCD 20” monitor/KVM
Go Green in the Data Center! First, let’s get a “Cool Grip” on your data center…16,484.058 BTU/h
A couple of years ago one of our ANSYS Mechanical simulation engineers, Jason Krantz, told me about a handy little watt meter monitoring device designed by P3 International, the KILL A WATT™. Over the years that little watt meter has become one of my closest friends and allies in IT. Today, I was able to quickly assess (realistically about four hours of time) just how many watts of power each one of our servers, network devices, etc. used. I tried to be as accurate as I could without having to take out a second mortgage, so I made sure and verified that one of our Ph.D. FE or CFD analysts had our servers at or near 100% CPU use.
YOUTUBE VIDEO :: Check out this real-world example of an AMD Opteron 6174, 287-hour electrical cost usage test. The data shown in this video is from a server that has four AMD Opteron 6174 processors installed.
So, what is your magical number? Ours is 4,831 Watts
Do you know how many watts of power your server room is using? Could you even logically guess what that number is? Our magical number for server room #1 turned out to be 4,831 Watts. I do need to state that I was unable to take some of the devices offline; when that was the case, I used data pulled from the technical documents on the device manufacturer's website.
So what is your BTU/h number? Ours is 16,484.058 BTU/h. Oh, and I don't even like math! I know, I know, math was solved and perfected centuries ago. But how do I convert Watts into BTU/h?
I used a 99 cent app that I bought off of the iTunes App store called “Convert Units”.
- Step 1 – Convert Watts into BTU/min.
- Step 2 – Then multiply by 60 to get that value into BTU/h.
- Step 3 – Speak to your Operations Department, send an email, shout it from the rooftops!! We need at least a two-ton air conditioning unit for Server Room #1.
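If you'd rather skip the 99-cent app, the steps above are one multiplication. A minimal sketch (using the standard conversion factor of about 3.412142 BTU/h per watt):

```python
WATTS_TO_BTU_PER_HR = 3.412142       # 1 W is about 3.412142 BTU/h
TON_OF_COOLING_BTU_PER_HR = 12000.0  # one "ton of cooling" = 12,000 BTU/h

def watts_to_btu_per_hr(watts):
    return watts * WATTS_TO_BTU_PER_HR

def tons_of_cooling_needed(watts):
    return watts_to_btu_per_hr(watts) / TON_OF_COOLING_BTU_PER_HR

print(watts_to_btu_per_hr(4831))     # ~16,484 BTU/h, matching the article's number
print(tons_of_cooling_needed(4831))  # ~1.37 tons -> round up to a 2-ton unit
```

The heat load comes out just under 1.4 tons, which is why a two-ton unit covers Server Room #1 with headroom to spare.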
Now, with the precious BTU/h value in hand, I was able to speak the same language as our Director of Operations & Facilities.
I wish you all could have been there when I walked up to Scott and told him the news. The dialogue went something like this:
"Scott, I wanted to talk to you about server room #1's cooling situation…" (pause for dramatic effect). Almost immediately you could see Scott's blood pressure rising, his brain quickly churning through mountains of air conditioning cooling information and data. I quickly calmed his anxiety and said these exact words: "Server Room #1's BTU/h number is approximately 16,484.058 BTU/h." It took Scott just a moment for this bit of information to register. I do believe that I actually heard the hallelujah chorus in the heavens. I could also see the peace that passes all understanding come across Scott's face. It was as if I could read his mind, and he was thinking: how is this non-operations/facilities-type humanoid speaking my language? For Scott knew immediately that he had enough cooling power on hand to cool that data center down all summer long.
DATA CENTER #1 – 274.7343 BTU/min × 60 = 16,484.058 BTU/h
How the heck are you making money today? Step out of the Box, Step Into a Cube Computers for CFD and FEA Simulation. http://www.cube-hvpc.com/
It is a fact: Microsoft Excel is the most used engineering tool in the world. If you are like me, everything you can't do in ANSYS, you do in Excel. And a lot of the time you wish you could talk directly from Excel to ANSYS – back in the day many of us wrote kludgey VB macros that would write APDL scripts and run ANSYS MAPDL in the background. In the past couple of releases our friendly neighborhood ANSYS developers have added a lot of different ways to work with Excel: saving tables to a file, Python scripting to talk to Excel, and an Excel system. I remember reading about these things as they came out, and even wrote about how cool they were, but I never had the opportunity to use them.
Then, last week, I noticed an Excel icon just sitting there in the toolbox, mocking me, taunting me to use it.
So I dragged it out on my project page and tried to use it… and got nowhere. My assumptions were not valid; it didn't work the way I expected because, it turned out, I was not thinking about how it fit into the project correctly. So I backed up, actually read the help (gasp!), and after some experimenting got it to work, or so I thought. Then I talked to some folks at ANSYS, Inc. and they reminded me of what this is: an Excel Solver System. It is not a tool that lets you "drive" your project from Excel. It lets you use Excel to calculate values.
This posting is a summary of what I learned. And as I was working through it I thought it would be good to also cover the python interface to Excel and how to save tabular information to Excel. These will be covered in future articles (hence this being Part 1).
What You Need to Know
As I said above, my problem was that I was thinking about how Excel fit into my project all wrong. The first thing you should do is read the help on the Excel System. The best way to find it is to type excel into search. The item with the most hits will take you to the article for component systems; then click on Microsoft Office Excel. (I wish I could just put a link in… grumble… grumble…)
To use the Excel system you do the following:
- Add the System to your project
- Make a spreadsheet and use range names to identify parameters
- Attach an Excel spreadsheet
- Edit the system and tell the program which parameters are input and which are output
- Go into the parameter manager and hook up any derived parameters you want to pass to Excel and use any of the Excel parameters with other parameters as needed
- Tell ANSYS to run a VB macro (if you want)
- Update your project or Design Points
We will go through the process in detail but first, a few things you should know:
- The system kind of looks and feels like the parameter manager in Workbench, but it is not. You have to think of Excel as a “solver” that feeds parameters from and to the parameter manager.
- I struggled with this because I thought of output parameters as values calculated by Workbench and input parameters as ones that come from Excel, but the opposite is true.
- Excel Input Parameter: A value calculated in Workbench parameter manager
- Excel Output Parameter: A value calculated in Excel
- You need to get your head around this or you will get stuck like I did. The example should help.
- Parameters that come from DesignModeler are dimensionless in the parameter manager.
- This one really held me up for a while. If you assign a parameter from Excel that has a unit to drive your geometry in DesignModeler, you get an error.
- The solution is to make sure that you DO NOT use units on Excel parameters that you get from or pass to DesignModeler
- You will get burned by this if you go to your original excel file, edit it, then try and update your project. Your changes will not show up. That is because it is not linked to the original file, it is linked to a copy stored in that XLS directory.
- Once you have linked a file you should exit Excel then open the file by RMB on the Excel system and choose “Open File in Excel” (see below for more on this whole process)
- I recommend that you start by making your Excel file, save it with the name you want in the C:\Temp directory, attach that file, close Excel, then open it from Workbench.
- Now you have a file to add your stuff to and you don’t have to worry about having an earlier version lurking around.
- An important side effect of this is that if you delete your system, it deletes your Excel file! So make sure you make a copy or do a save-as before you remove the Excel system
- Making a change to the Excel file will put the system out of date. A refresh on the project page or a reload on the "Edit Configuration" page will bring things up to date.
- The easiest way to check and change this is to click on the parameter bar and view its properties. Under Design Point Update Process set Update Option to Run in Foreground.
- Although annoying at first glance, it kind of makes sense. If you feed a value to Excel and then Excel calculates a new value that affects your ANSYS model, you need to update the ANSYS model, which will change the value that gets passed into Excel, which will change the value that gets passed out, which changes your ANSYS model, which… and so it goes in a loop. This is considered a bad thing.
- This goes back to the fact that Excel should be used as a solver, not as a "driver" of your simulation.
- If you do want to drive your analysis from Excel, you’ll need to do some scripting. We’ll cover that in a future article.
I started this article with a really cool valve demo model, then found that it was just too slow and painful to work with for showing how the Excel system works. So I went back to my second favorite type of model, a simple "tower of test" (my favorite is an FPWHII – flat plate with a hole in it). You can download the project here.
Add the System to Your Project
Like every other system in Workbench, you simply drag from the toolbox to the Project Schematic. Notice how the green “drop zones” are all empty spaces. You can’t drop it on an existing cell in a system because there is no dependency between other systems and an Excel system. The Excel system is connected through parameters, which we will see in a bit.
Once you have dropped it onto the schematic, click on the Top cell (C1 in this case) and check out the properties (RMB Properties if the window is not already open). From the properties you can see the system ID (XLS) and you can specify an Analysis Type. You can leave it blank or type in something like “Home Grown Optimization.”
Then click on the Analysis cell (C2 in this case) and look at the properties. They are shown here:
One key thing to note is that the directory where the Excel file will be copied is shown. I did this once already on this project so it made a XLS-1 directory. If I did it again, I’d see XLS-2, etc… In fact, by the time I got done with this article and trying all sorts of things, it ended up in XLS-8.
The most important option under Setup is the "Parameter Key." Any Excel named range that begins with this string will get read into the parameter manager. If you make it blank, all the named ranges will come in.
Make a Spreadsheet and Use Range Names to Identify Parameters
Now you need to create your spreadsheet. You need to plan ahead here a bit. Figure out what parameters you need Excel to get from your models and what parameters you want to send back. Come up with good names because that is what gets passed to Workbench.
What happens when you attach a file is that Workbench goes to the Excel sheet and steps through all the named ranges in the file. If it finds one with a name that starts with the filter value, it grabs the first value in the range as the parameter value and then grabs the second as the units. If your range is bigger, it just ignores the rest.
So this tells us that we need to create a range that has at least one cell, or two if units are important. For our simple example we will be calculating cost and outputting it using the inputs Volume, Length and Width. There is a formula in the cost cell that multiplies those values by preset costs per unit volume, length and width and sums them up to get a cost.
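That cost formula, sketched in Python (the unit costs below are invented for illustration; the real values live in the Excel file, and the WB_cost name is just an example of the WB_ naming convention):

```python
# Assumed unit costs -- placeholders for whatever the spreadsheet actually uses.
COST_PER_UNIT_VOLUME = 0.05  # $/in^3 (assumed)
COST_PER_UNIT_LENGTH = 0.10  # $/in   (assumed)
COST_PER_UNIT_WIDTH  = 0.20  # $/in   (assumed)

def cost(volume, length, width):
    """What a WB_cost cell would compute from the WB_ input ranges."""
    return (volume * COST_PER_UNIT_VOLUME
            + length * COST_PER_UNIT_LENGTH
            + width * COST_PER_UNIT_WIDTH)

print(round(cost(100.0, 10.0, 5.0), 2))  # 5.0 + 1.0 + 1.0 = 7.0
```

In the actual workflow this arithmetic lives entirely in the spreadsheet; Workbench only sees the named ranges.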
So the laziest thing you can do is select a cell and name it.
But it will help you and others if you actually make a table that has a descriptive name, the parameter name (WB_ should be your default), the value for that parameter, and the units, if any. Note that for an input parameter you can just set the value to zero to get started. Here is what the tables look like for our example:
To create a range, select the value and units for a given parameter, click the Right-Mouse-Button (RMB), and select "Define Range"
A cool thing that Excel does is use the value just to the left of the range as the default name of the range, so by creating the table you save yourself some typing. Or, if you don't use a table, just type in the name you want.
Now just click OK and you have a named range. You can repeat this for each range, or you can get fancy and use the fact that your data is in a table, with the parameter name to the left, to quickly generate all the ranges at once.
To do this, select the WB Param, value and unit columns. Then go to Formulas –> Defined Names –> Create from Selection. When the dialog box pops up make sure only "Left column" is checked. Click OK.
In one fell swoop you created all your named ranges. To see, edit, and delete ranges, regardless of how they were created, go to Formulas –> Defined Names –> Name Manager.
Take some time to look at this and understand it. When you are debugging and fixing stuff, you will use this window.
Now you have an Excel file that Workbench will like! Time to attach it. Save it (I recommend saving to C:\Temp so you don't get it confused with the copy that Workbench will make).
Attach an Excel Spreadsheet
This is the easiest step. Simply RMB on the Analysis cell in the system and browse for the file.
Now your Analysis cell has a lightning bolt; update it to have it read the file and find parameters. If you have your parameters set up wrong, such that you don't have any named ranges with the specified prefix, it will generate an error but will still attach the file.
NOTE: If you get some weird errors “Unexpected error…” and “Exception from HRESULT:…” when updating your Excel system, check your Excel file. Odds are you have an open dialog box or the file is somehow locked. The error generates because Workbench can’t get Excel to talk to it.
Edit the System and Tell the Program Which Parameters are Input and Which are Output
Although you have a green check mark, you will notice that your system is still not connected to your parameters, and therefore it is not connected to the rest of your model. The way to fix this is to RMB –> Edit Configuration. Double-clicking on Analysis also does the same thing.
This puts you in Outline Mode. You should be familiar with this mode from the Parameter Manager or Engineering Data.
Take some time to explore this outline. Notice the Setup cell, where you have access to the system properties. Then its child, the Excel file. Click on it to see the properties for the file connection. Under that is the important stuff: the parameters.
If you did everything correctly, you will see all of your parameters in alphabetical order. If you click on one, you will see the properties. Here they are for the cost value:
It shows the range, the value and units (the C column), and the Quantity name. Workbench guesses the quantity type from the units, so PSI comes in as pressure by default. If it is a stress, you need to change it here.
But your main task right now is to tell Workbench which of these parameters you want passed to the Parameter Manager, and what type of parameter, input or output, they are. Here is where I get tripped up, because an input parameter in the Parameter Manager is an output parameter here. Remember, the Excel system is a solver that takes in parameters from the Parameter Manager and sends back values to drive your models. So in our example, all the dimensions and the volume are passed from the Parameter Manager TO Excel, so they are input. The cost is passed from Excel to the Parameter Manager, so it is output.
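To make the direction of the data flow concrete, here is a tiny Python stand-in for what the spreadsheet side is doing; the cost rate is made up purely for illustration:

```python
def excel_as_solver(wb_v, cost_per_volume=0.05):
    """Stand-in for the spreadsheet: Workbench passes the volume in
    (an Excel *input*), and the sheet passes the cost back
    (an Excel *output*). cost_per_volume is a hypothetical rate."""
    return wb_v * cost_per_volume

# Workbench drives WB_V, Excel returns the cost parameter
print(excel_as_solver(1200.0))
```

The point is just the direction: the dimensions flow into the “solver,” and the computed cost flows back out to the Parameter Manager.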
Now you have hooked up your Excel system. Click on the “Return to Project” at the top of the window and you will go back to the project schematic and see that a Parameters cell has been added to the system and it has been attached to the parameter bar.
Go Into the Parameter Manager and Hook Things Up
Although the Geometry and Mechanical systems are connected through the parameters to the Excel system, no relationships exist. We need to assign some values to our Excel parameters.
This is what our test model looks like before we do this:
Our goal is to have the parameters in the first column below to drive those in the second:
|Driving Parameter|Driven Parameter|
|P7: Len|P10: WB_L|
|P5: W1|P12: WB_W|
|P8: Solid Volume|P11: WB_V|
I tried to just click on the value in the Value column (C) and change it from the number to the parameter name it should be, but that does not work because the parameter is set as a constant. Instead, you need to click anywhere on the row for the parameter you want to set, then go down to the Properties window and change the Expression to the Parameter ID you want. This changes the Expression to an equation and the Expression Type to Derived:
That is it. You now have Excel in your project as a solver. Update your project and the cost will be calculated and presented as a parameter for optimization, DOE studies or whatever you want.
Tell ANSYS to run a VB macro (if you want)
One really cool feature is that you can tell the program to run a VB macro on an update. Go to your system, click on Analysis, then RMB-> Edit Configuration. Then click on the file cell (A3). The property area now shows info on your file and has a Use Macro row at the bottom. Click on the checkbox and a Macro Name row will pop up. Enter the name of a macro in your spreadsheet and you are off.
Here is a silly example where I use a macro to calculate a value. For the example I put in the well known equation for deriving the Kafizle of a system:
- Create a new row in my table for the Kafizle value to go in
- Create a name WB_KF for the value
- Write my macro (don’t laugh):
Range("E7") = Rnd(1) + Cos(Range("E5").Value)
- Save my sheet and KABOOM. I now need to save it as an .xlsm, not an .xlsx! I didn’t think about that!
- This means my Excel connection won’t work, so you have to delete your system and start again with your macro file. Plan ahead! I’m glad I hit this on a silly example rather than running into it on a real problem.
Now you can run your project, and every time you do, the program will calculate a new cost and Kafizle value. This of course begs the question: what are the proper units of Kafizle? Here is the Design Point table:
Thoughts and Conclusions
I started this effort thinking I would drive my model from Excel, basically replacing the Parameter Manager with Excel. But that does not work because Excel doesn’t know enough about your project to handle the dependencies that can really cause problems if you don’t solve in the correct order. So once I figured that out I found some pretty good uses. Here are some other ideas for how to use the Excel System:
- Do additional post processing on result values
- Use formulas or lookup tables to calculate loads.
- Just make sure that the values you take from your ANSYS model into Excel (inputs) are also input parameters in the parameter manager.
- A good example would be a “family of parts” application where you put in a part number and Excel does a vlookup() on a table that has all the input parameters listed by part number.
- You still have to force the update on the ANSYS side, which is not the ideal way to run a system model, but it may be easier than writing scripts and hooking it up that way.
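The “family of parts” idea is easy to picture outside of Excel. Here is a minimal Python sketch of what the vlookup() would be doing; the part numbers and dimensions below are made up for illustration:

```python
# Hypothetical part-number table, mimicking the Excel sheet a vlookup()
# would search: each part number maps to its driving input parameters.
part_table = {
    "PN-1001": {"WB_L": 10.0, "WB_W": 2.0},
    "PN-1002": {"WB_L": 12.5, "WB_W": 2.5},
}

def lookup_part(part_number):
    """Return the full set of input parameters for one part number,
    like the row a vlookup() would pull from the table."""
    return part_table[part_number]

print(lookup_part("PN-1002"))  # {'WB_L': 12.5, 'WB_W': 2.5}
```

In the real workflow, the part number would be the one value the user types in, and the looked-up dimensions would drive the Workbench parameters.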
This is a new feature at R13 and it can be a bit “touchy,” especially if you are rooting around in it like a javelina rooting around in your flower bed (Arizona reference). If you do something really crazy it can lose its way and start generating errors. I found the best solution at that point was to save a copy of my Excel file, delete the system, and start over.
This took a lot longer than I thought to write, but the Excel System does a lot more than I thought. I think as we all start thinking about how to use this tool, people will come up with some pretty cool applications.
Getting ANSYS Workbench up and running on Linux at R13 is pretty simple. You just have to make sure that a few things are in place and some packages are loaded. Then it works great. Here is a quick HOW-TO on getting things going:
- Install CentOS 5.3 or greater or RHEL5
- The GNOME Desktop Environment is required for optimum use.
- Next, using the Linux package manager, select the Development main group and then select all the additional libraries needed (see images below).
- Select Optional packages and then select the additional MESA libraries (see below).
- Next, select the Base System main group, then X Window System and Legacy Software Support. With Legacy Software Support still selected, click Optional Packages, select the additional package, openmotif22, and click Close.
- Restart the system
Post ANSYS Install Setup Tasks
- Within your terminal session, type ProductConfig.sh
- Click Configure Products, then select the products to configure or reconfigure
- Pro/E Configuration GUI
- Unigraphics NX install Configuration GUI
- Click Continue and the product configuration script will run.
- Click Finish
How to Launch ANSYS Release 13.0 Workbench
- Open a Linux terminal session
- Change your path to include /ansys_inc/v130/Framework/bin/Linux64
- Launch the program by typing ./runwb2 and pressing Enter
- Basic opening up of a Design Modeler project
Here it is: ANSYS 13 Workbench on CentOS 5.5 64-bit Linux
I’m sure the question comes up for a lot of us from time to time, whether from one of our own offspring, another relative, or an acquaintance. “Just what is it that you do, anyway?” Typical answers might be something short and sweet, such as, “I’m an engineer.” A more detailed response might be, “I use a technique called finite element simulation which is a computer tool we use to simulate the behavior of parts or systems in their real world environment.”
You’ll probably find that people’s eyes glaze over and they start looking for someone else to talk to by the time you get to the end of that second quote above. In fact, I find that my extended family is much more interested in my brother-in-law’s surgery stories from the operating room than they are in my own triumphs and challenges in the engineering simulation world. Maybe you’ve had that same sort of reaction. You have probably noticed that there are a whole lot more medical dramas on TV at any one time than there are engineering dramas. They’ve got many characters from Marcus Welby on up to Dr. Ross on ER, Jack on Lost, to Dr. Grey on Grey’s Anatomy, with more than I can count in between.
We’ve got, well, Scotty. And even then I think Dr. McCoy got more air time.
So when my kids ask me what I do at work, I recall a scene from that late 1980’s to early 1990’s TV show The Wonder Years. In the episode “My Father’s Office,” Kevin asks his dad what he does for a living. His father responds in an angry tone, “You know what I do! I work at NORCOM.” As if that were a sufficient explanation. I suppose it was his way of saying, “It’s complicated. It can be high pressure. You might find it boring. It puts food on the table and a roof over our heads, though.”
Rather than reply that way, I’ve tried to come up with what is hopefully a better response. In fact, this concept constitutes the first portion of our Engineering with FEA training class, written by Keith DiRienz of FEA Technologies with contributions by yours truly.
I can’t guarantee that your audience’s eyes won’t glaze over by the end, nor that you’ll become the hit of the party, but this is free and you get what you pay for. This explanation can obviously be adjusted based on the audience, but it goes something like this:
–We have equations to solve for stresses and deflections in simply-shaped parts such as cantilevered beams.
–No such equations exist for complex shaped objects subject to arbitrary loads.
–So, using finite elements, we break up a complex part into solvable chunks, leading to a finite set of equations with finite unknowns.
–We solve the equations for the chunks, and that ends up giving us the results for the whole part.
If we want more details, we can use this: As an example, here is a simple beam, fixed at one end with a tip load P at the other end. We have an equation to calculate the tip deflection u for simple cases:
In the above equation E is the Young’s Modulus, a property of the material being used and I is the moment of inertia, a property of the shape of the beam cross section.
For more complex shapes and loading conditions, we don’t have simple equations like that, but we can use the concept by dividing up our complex shape into a bunch of simpler shapes. Those shapes are called elements.
A useful equation for us is the linear spring equation, F=Kx, where F is the force exerted on the spring, K is the stiffness of the spring, and x is the deflection of one end of the spring relative to the other. If we extend that concept into 3D, we can have a spring representation in 3D space, meaning the X, Y, and Z directions. In fact, the tip deflection equation shown above for the beam fixed at one end can be considered a special case of our linear spring equation, solved for deflection with a known applied force.
By assembling our complex structure out of these 3D springs, or elements, we can model the full set of geometry for complex shapes. The process of making the elements is called meshing, because a picture or plot of the elements looks like a mesh.
Using linear algebra and some calculus (stay in school, kids!) we can set up a big series of equations that takes into account all the little springs in the structure, as well as any fixed (unable to move) locations and any loads on the structure. The equations are too big for normal people to solve by hand, so computers are used to do this.
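The “big series of equations” idea can be sketched in a few lines. This toy example (not what ANSYS actually runs, just the concept) assembles two 1D springs in series, fixes one end, and solves K·u = F for the displacements; the stiffness and load values are made up:

```python
import numpy as np

# Two 1D springs in series: node 0 -- k1 -- node 1 -- k2 -- node 2.
# Node 0 is fixed; a force F pulls on node 2.
k1, k2 = 1000.0, 2000.0   # assumed spring stiffnesses (force/length)
F = 10.0                  # assumed applied force at the free end

# Assemble the 3x3 global stiffness matrix from the element matrices
# [[k, -k], [-k, k]], adding each spring's contribution in place.
K = np.zeros((3, 3))
for k, (i, j) in [(k1, (0, 1)), (k2, (1, 2))]:
    K[i, i] += k; K[j, j] += k
    K[i, j] -= k; K[j, i] -= k

# Fix node 0 by deleting its row and column, then solve K u = F
# for the displacements of the two remaining nodes.
u = np.linalg.solve(K[1:, 1:], np.array([0.0, F]))
print(u)  # displacements of node 1 and node 2
```

The tip displacement matches the hand calculation for springs in series (F/k1 + F/k2), which is exactly the kind of cross-check a simple model lets you do.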
When the computer is done solving we end up with deflection results in each direction for the corner points (called nodes) in each element. Some elements have extra nodes too.
From those deflection results, the computer can calculate other quantities of interest, such as stresses and strains. Further, other types of analyses can be solved in similar fashion, such as temperature calculations and fluid flow.
Here is an example using a familiar object that practically everyone can relate to. This plot shows the mesh:
This is fixed in the blue region at the bottom and has an upward force on the left end. The idea here is that someone is holding it tightly on the blue surface and is pulling up on the red surface.
After solving the simulation, we get deflection results like this:
The picture above shows that the left end of the paper clip has deflected upward, which is what we would expect based on common experience with bending paper clips. Using our finite element method, we can predict the permanent deflection resulting from bending the paper clip beyond its ‘yield’ point, resulting in what we call plastic deformation.
Clearly there is a lot more to it than these few sentences describe, but hopefully this is enough to get the point across.
In sum, not as exciting as my brother-in-law’s medical stories involving nail guns or other gruesome injuries, but hopefully this makes the world of engineering simulation a little more accessible to our friends and family.
In the Wonder Years episode, Kevin ends up going to work with his father to see for himself what he does. I won’t spoil the episode, but hopefully you’ll get the chance to show your family and friends what it is that you do from time to time.
During this week’s webinar on Using APDL Snippets in ANSYS Mechanical, a question came up about coordinate systems. I actually don’t remember the original question, but in answering it another question came into my mind: how do you get access to the ID of coordinate systems that you create in ANSYS Mechanical?
For a lot of items you can add to the ANSYS Mechanical model tree, you can attach a Command Object (snippet) and ANSYS Mechanical passes a parameter with the ID of the thing you want access to (material, contact pair, spring, joint, etc…). But there is no way to add a Command Object to a coordinate system.
So I dug into it and found something I didn’t know. The problem with discovering something like this and sharing it is that you either just uncovered something that can help a lot of users or you are going to embarrass yourself over something that everyone already knows. The idea of a blog is to be casual and informal, so let’s see which I did.
If you click on a user-created coordinate system in ANSYS Mechanical, the Details view lists two things in the first grouping, “Definition”: Type and Coordinate System ID. The default for the ID is “Program Controlled.” I’d never clicked on it to see what the other options are. It turns out you can change it to “Manual.”
Once you do that, a second “Coordinate System ID” line appears where you can put in whatever number you want.
Problem solved. Just give your coordinate system whatever number you want and use that number in your macro. Couldn’t be easier.
Hopefully, this was helpful. If so, rate this posting at a 5.
If you already knew this little factoid, rate it as a 1.
The presentation can be found here.
The sample script for plotting mode shapes is here.
You can view a recording of the presentation here on our WebEx site.
PADT’s ANSYS Webinar Series is now off on Summer Break. We will be back in August!
You can always view past recordings by going to http://padtincevents.webex.com and Click on PADT ANSYS Webinar Series to see a listing.
Last week’s PADT ANSYS Webinar Series webinar was on a little-used feature in ANSYS Mechanical called Design Assessment, or DA. If you missed it, you can view a recording at:
And a PDF of the presentation can be found here:
As promised, this week’s The Focus posting will be a follow-up on that webinar, with an in-depth look at the scripts that were used in the example. But first, let us review what DA is for those that don’t remember, fell asleep, or don’t want to sit through 60+ minutes of me mumbling into the telephone.
Review of DA
Design Assessment is a tool in ANSYS Workbench that works with ANSYS Mechanical to take results from multiple time or load steps, perform post processing on those results, and bring the calculated values back into ANSYS Mechanical for viewing as contour plots. It was developed to allow the ANSYS, Inc. developers to add some special post processing tools needed in the offshore industry, but as they were working on it they saw the value of exposing the Application Programming Interface (API) to the user community so anyone can write their own post processing tools.
You use it by adding a Design Assessment system to your project. In its most basic form, the default configuration, it is set up to do load case combinations. That in itself is worth knowing how to use. But if you want to do more, you can point it to a custom set of post processing files and do your own calculations.
A custom DA is defined by two things. First is an XML file that tells ANSYS Mechanical what user input you want to capture, how you want to get results out of mechanical, what you want to do with the results, and how you want them displayed in your model tree. Second is one or more Python scripts that actually do the work of capturing what the user input, getting results, doing the calculation, and sticking the resulting values back in the model. Both are well documented and, once you get your head around the whole thing, pretty simple.
Right now DA works with Static and Transient Structural models. It also only allows access to element stress values. Lots of good enhancements are coming in R14, but R13 is mature enough to use now.
If that review was too quick, review the recording or the PowerPoint presentation.
A Deep Dive Into an Example
For the webinar we had a pretty simple, and a bit silly, example: the custom post processing tool takes the results from a static stress model and truncates the stress values if they are above or below a user-specified value. Not a lot of calculating, but a good example of how the tool works.
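The truncation itself is just a clamp. Stripped of all the DA plumbing, the heart of what the evaluate script does to each element stress looks like this in Python (sample values made up):

```python
def truncate(stress, floor, ceiling):
    """Clamp a stress value to the user-specified floor and ceiling."""
    return max(floor, min(ceiling, stress))

print(truncate(1250.0, -1000.0, 1000.0))   # above the ceiling -> 1000.0
print(truncate(-2500.0, -1000.0, 1000.0))  # below the floor -> -1000.0
print(truncate(42.0, -1000.0, 1000.0))     # in range -> unchanged, 42.0
```

Everything else in the example is about getting stresses out of the result file, getting user settings out of the tree, and putting the clamped values back.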
Note, this posting is going to be long because there is a lot of code pasted in. For each section of code I’ll also include a link to the original file so you can download that yourself to use.
Here is the XML file for the example (Original File):
1: <?xml version="1.0" encoding="utf-8"?>
The first lesson learned was that you have to get all the tags and headers just right.
It is case sensitive, and all the version and other stuff has to be there.
Cutting and pasting from something that works is the best way to go.
4: XML file for ANSYS DA R13
5: Demonstration of how to use DA at R13
6: Goes through results and sets stresses below floor value to floor
7: value and above ceiling value to ceiling value.
9: User adds DA Result to specify Floor and Ceiling
10: Attribute group can be used to specify a comment
12: Calls da_trunc_solve.py and da_trunc_eval.py in c:\temp
14: Eric Miller
Everything is in a DARoot tag.
18: <DARoot ObjId ="1" Type="CAERep" Ver="2">
Attributes tags define items you want to ask the user about.
19: <Attributes ObjId="2" Type="CAERepBase" Ver="2">
This first attribute is a drop-down for the user to choose which stress value they want.
You use <AttributeType> to make it a drop-down, then put the values in <Validation>
20: <DAAttribute ObjId="101" Type="DAAttribute" Ver="2">
21: <AttributeName PropType="string">Stress Value</AttributeName>
22: <AttributeType PropType="string">DropDown</AttributeType>
23: <Application PropType="string">All</Application>
24: <Validation PropType="vector&lt;string>">
Next is the prompt for the stress floor value
28: <DAAttribute ObjId="102" Type="DAAttribute" Ver="2">
29: <AttributeName PropType="string">Stress Floor</AttributeName>
30: <AttributeType PropType="string">Double</AttributeType>
31: <Application PropType="string">All</Application>
32: <Validation PropType="vector&lt;string>">-1000000,10000000</Validation>
Then the Ceiling Value
34: <DAAttribute ObjId="103" Type="DAAttribute" Ver="2">
35: <AttributeName PropType="string">Stress Ceiling</AttributeName>
36: <AttributeType PropType="string">Double</AttributeType>
37: <Application PropType="string">All</Application>
38: <Validation PropType="vector&lt;string>">-1000000,10000000</Validation>
Finally a user comment, just to show how to do a string.
40: <DAAttribute ObjId="201" Type="DAAttribute" Ver="2">
41: <AttributeName PropType="string">User Comments</AttributeName>
42: <AttributeType PropType="string">Text</AttributeType>
43: <Application PropType="string">All</Application>
To expose an attribute, you can put it in an AttributeGroup to get info shared by
all DA result objects. This one just does the Comment
46: <AttributeGroups ObjId ="3" Type="CAERepBase" Ver="2">
47: <DAAttributeGroup ObjId="100002" Type="DAAttributeGroup" Ver="2">
48: <GroupType PropType="string">User Comments</GroupType>
49: <GroupSubtype PropType="string">Failure Criteria</GroupSubtype>
50: <AttributeIDs PropType="vector&lt;unsigned int>">201</AttributeIDs>
<DAScripts> is the most important part of the file. It defines the scripts to run for a
solve and for an evaluate. At R13 you do need to specify the whole path to your Python files
53: <DAScripts ObjId="4" Type="DAScripts" Ver="2">
54: <Solve PropType="string">c:\temp\da_trunc_solve.py</Solve>
55: <Evaluate PropType="string">c:\temp\da_trunc_eval.py</Evaluate>
56: <DAData PropType="int">1</DAData>
57: <CombResults PropType="int">1</CombResults>
58: <SelectionExtra PropType="vector&lt;string>">DeltaMin, DeltaMax</SelectionExtra>
The other way to get user input is to put attributes into a <Results> object.
Here we are putting the choice of stresses (101),and the floor and ceiling (102,103)
into an object
60: <Results ObjId="5" Type="CAERepBase" Ver="2">
61: <DAResult ObjId ="110000" Type="DAResult" Ver="2">
62: <GroupType PropType="string">Ceiling and Floor Values</GroupType>
63: <AttributeIDs PropType="vector&lt;unsigned int>">101,102,103</AttributeIDs>
64: <DisplayType PropType="string">ElemCont</DisplayType>
I always have to go over XML files a few times to figure them out. There is a lot of information, but only a small amount that you need to pay attention to. After a while you figure out which is which.
Now on to the fun part, the Python scripts. The first one gets executed when the user chooses to solve.
da_trunc_solve.py is shown below and you can get the original here. The comments inside pretty much explain it all. It basically does two things: creates an ANSYS MAPDL macro that extracts all the element stresses and puts them in a text file, then it runs MAPDL with that macro.
1: import subprocess
2: import os
3: import shutil
7: # ------------------------------------------------------------ PADT
8: # www.PADTINC.com
10: # da_trunc_solve.py
11: # Demonstration python script for Design Assessment in
12: # ANSYS R13
13: # Called on solve from ANSYS Mechanical
14: # Bulk of code copied and modified from TsaiWu example
15: # provided by ANSYS, Inc.
17: # E. Miller
18: # 5/18/2011
20: def trunc_solve(DesignAssessment) :
22: # Get number of elements in model
25: # Change directory to current workspace for DA
26: originaldir = os.getcwd()
29: #Get the path to the results file name and the location of the temp directory
30: rstFname = DesignAssessment.Selection(0).Solution(0).getResult().ResultFilePath()
31: rstFname = rstFname.rstrip('.rst')
32: apath = DesignAssessment.getHelper().getResultPath()
34: print "Result File:", rstFname
35: print "Apath:", apath
37: # Write an ANSYS APDL macro to start ANSYS, resume the result file, grab the stress
38: # results and write them to a file
40: macfile = open(DesignAssessment.getHelper().getResultPath()+"\\runda1.inp", "w")
58: macfile.write("*vget,evls(1, 1),elem,1,etab, esx\n")
59: macfile.write("*vget,evls(1, 2),elem,1,etab, esy\n")
60: macfile.write("*vget,evls(1, 3),elem,1,etab, esz\n")
61: macfile.write("*vget,evls(1, 4),elem,1,etab, esxy\n")
62: macfile.write("*vget,evls(1, 5),elem,1,etab, esyz\n")
63: macfile.write("*vget,evls(1, 6),elem,1,etab, esxz\n")
64: macfile.write("*vget,evls(1, 7),elem,1,etab, es1\n")
65: macfile.write("*vget,evls(1, 8),elem,1,etab, es2\n")
66: macfile.write("*vget,evls(1, 9),elem,1,etab, es3\n")
71: macfile.write("(G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9, X, G16.9)\n")
78: # Set up execution of ANSYS MAPDL.
79: # Note: Right now you need to grab a different license than what Workbench is using
80: exelocation = "C:\\Program Files\\ANSYS Inc\\v130\\ansys\\bin\\winx64\\ansys130.exe"
81: commandlinestring = " -p ansys -b nolist -i runda1.inp -o runda1.out /minimise"
83: #Execute MAPDL and wait for it to finish
84: proc = subprocess.Popen(commandlinestring,shell=True,executable=exelocation)
85: rc = proc.wait()
87: # Read the output from the run and echo it to the DA log file
88: File = open("runda1.out","r")
92: # Go back to the original directory
Some key things you should note about this script:
- You have to use MAPDL to get your stress values. Right now there is no method in the API to get the values directly.
- You need a second MAPDL license in order to run this script. It does not share the license you are using for ANSYS Mechanical at R13. This should be addressed in R14.
- One workaround right now is to use an APDL code snippet in the ANSYS Mechanical run that makes the text file when the original problem is solved. The SOLVE script is then no longer needed and you can just have an evaluate script. Not a great solution, but it will work if you only have one seat of ANSYS available.
- Note the directory changes and the retrieval of result file paths. This is important. Mechanical puts files all over the place, not in just one directory.
- Make sure the MAPDL execution stuff is correct for your installation.
Once the solve is done and our text file, darsts.txt, is written, we can start truncating with the evaluate script (Original File). This script is a little more sophisticated. First it simply reads the darsts.txt file into a python array. It then has to go through a list of DA Results objects that the user added to their model and extract the stress value wanted as well as the floor and ceiling to truncate to. For each result object requested, it then loops through all the elements truncating as needed. Then it stores the truncated values.
1: import subprocess
2: import os
3: import shutil
4: import sys
8: # ------------------------------------------------------------ PADT
9: # www.PADTINC.com
11: # da_trunc_eval.py
12: # Demonstration python script for Design Assessment in
13: # ANSYS R13
14: # Called on eval from ANSYS Mechanical
15: # Bulk of code copied and modified from TsaiWu example
16: # provided by ANSYS, Inc.
18: # NOTE: Right now it just does SX. XML and script need to be modified to allow user
19: # to specify component to use (X, Y, or Z)
21: # E. Miller
22: # 5/18/2011
24: def trunc_eval(DesignAssessment) :
26: # Change directory to current workspace for DA
27: originaldir = os.getcwd()
30: # Find number of elements in DA
31: Mesh = DesignAssessment.GeometryMeshData()
32: Elements = Mesh.Elements()
33: Ecount = len(Elements)
34: print "DA number of elements is ",Ecount
35: print "Number of Result Groups:",DesignAssessment.NoOfResultGroups()
37: # get User comment from Attribute Group
38: # Note: Assumes one. Need to use a loop for multiple
39: attg = DesignAssessment.AttributeGroups()
40: atts = attg.Attributes()
41: usercomment = atts.Value().GetAsString()
42: print "User Comment = ", usercomment
44: # create arrays for SX/Y/Z values
45: sx = []
46: sy = []
47: sz = []
48: sxy = []
49: syz = []
50: sxz = []
51: s1 = []
52: s2 = []
53: s3 = []
54: seqv = []
55: # read file written during solve phase
56: # append stress values to SX/Y/Z arrays
57: File = open("darsts.txt","r")
58: for line in File:
59: words = line.split()
72: # Loop over DA Results created by user
73: for ResultGroupIter in range(DesignAssessment.NoOfResultGroups()):
74: # Get the Result Group
75: ResultGroup = DesignAssessment.ResultGroup(ResultGroupIter)
76: # Extract Ceiling and Floor for the Group
77: strscmp = ResultGroup.Attribute(0).Value().GetAsString()
78: strFloor = float(ResultGroup.Attribute(1).Value().GetAsString())
79: strCeil = float(ResultGroup.Attribute(2).Value().GetAsString())
80: print "DA Result", ResultGroupIter+1, ":", strFloor, strCeil
82: #Add a set of results to store values in
83: ResultStructure = ResultGroup.AddStepResult()
85: print "strscmp", strscmp
87: # Loop on elements
88: for ElementIter in range(Ecount):
89: #Add a place to put the results for this element
90: ResultValue = ResultStructure.AddElementResultValue()
91: # Get the element number and then grab SX values that
92: # was read from file
93: Element = Mesh.Element(ElementIter).ID()
95: if strscmp == "SX":
96: sss = sx[Element-1]
97: elif strscmp == "SY":
98: sss = sy[Element-1]
99: elif strscmp == "SZ":
100: sss = sz[Element-1]
101: elif strscmp == "SXY":
102: sss = sxy[Element-1]
103: elif strscmp == "SYZ":
104: sss = syz[Element-1]
105: elif strscmp == "SXZ":
106: sss = sxz[Element-1]
107: elif strscmp == "S1":
108: sss = s1[Element-1]
109: elif strscmp == "S2":
110: sss = s2[Element-1]
111: elif strscmp == "S3":
112: sss = s3[Element-1]
113: elif strscmp == "SEQV":
114: sss = seqv[Element-1]
116: # Compare to Ceiling and Floor and truncate if needed
117: if sss > strCeil:
118: sss = strCeil
119: if sss < strFloor:
120: sss = strFloor
121: # Store the stress value
123: # Go back to the original directory
Some things to note about this script are:
- The same directory issues hold here; make sure you follow them.
- Always loop on ResultGroups. You can assume the number of attributes is constant but you never know how many results the user has asked for.
- In this example it is assumed that the stress label is the first attribute and that the floor and ceiling are the second and third. This is probably lazy on my part and it should be more general.
- The way to make it more general is to loop on the attributes in a group and grab their label, then use the label to determine which value it represents.
- Before you can store values, you have to create the result object and then each result value in that structure:
- ResultStructure = ResultGroup.AddStepResult() for each result object the user adds to the tree
- ResultValue = ResultStructure.AddElementResultValue() for each element
- ResultValue.setValue(sss) to set the actual value
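The label-based generalization mentioned above can be sketched like this. Note that the name-getter is a placeholder: I am not showing the actual DA API call for reading an attribute’s label (check the Design Assessment documentation for your release), so it is passed in as a function here, and the stub classes in a real run would be the DA objects themselves:

```python
# Sketch of label-based (rather than positional) attribute lookup.
# get_name is a stand-in for whatever DA API call returns an attribute's
# label; Attribute()/Value()/GetAsString() mirror the calls used above.
def group_attributes(result_group, n_attributes, get_name):
    """Return {attribute label: string value} for one result group."""
    values = {}
    for i in range(n_attributes):
        attr = result_group.Attribute(i)
        values[get_name(attr)] = attr.Value().GetAsString()
    return values

# The evaluate loop could then look things up by name, e.g.:
#   vals = group_attributes(ResultGroup, 3, get_name)
#   strFloor = float(vals["Stress Floor"])
```

That way the script no longer cares what order the attributes were declared in the XML file.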
And that is our example. It should work with any model; just make sure you get the paths right for where the files are.
Be a Good Citizen and Share!
If you have the gumption to go and try this tool out, we do ask that you share what you come up with. A good place is www.ANSYS.NET. That is the repository for most things ANSYS. If you have questions or need help, try xansys.org or your ANSYS support provider.
Song quotes Peter Gabriel, “I Have the Touch”
The time I like is the rush hour, cos I like the rush
The pushing of the people – I like it all so much
Such a mass of motion – do not know where it goes
I move with the movement and … I have the touch
Looking back I can see a defining moment in my life when about a month after high school graduation two good friends and I drove four hours from home to see Peter Gabriel in concert. It’s not that the concert was great, which it was, but it was the trip itself. It was a first foray after high school, a sort of toe dipping into the freedom of adulthood while in a strange pause between graduating in a small town in the same school system with the same kids and starting engineering school in a big city in the Fall.
I’m wanting contact
I’m wanting contact with you
Shake those hands, shake those hands
What does all that have to do with ANSYS, you ask? Primarily, it’s hard to get Peter Gabriel’s “I Have the Touch” out of my head whenever I’m working with contact elements. Someone once said that we are a product of the music of our youth. As I’ve gotten older and hopefully wiser, I’d like to think we are made up of much more than some songs we listened to, but I do find it true that certain songs from years ago tend to stick in my head. So, while Mr. Gabriel plays in my head, let’s discuss checking our contact status in Workbench Mechanical.
For those of us familiar with Mechanical APDL, the CNCHECK command has been a good friend for a lot of years now. This command can be used to interrogate our contact pairs prior to solving to report back which pairs are closed, what the gap distance is for pairs that are near touching, etc. More recently, this type of capability has become available in Workbench Mechanical by inserting a Contact Tool under the Connections branch.
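In Mechanical APDL, that check looks something like the following, issued in the solution processor before solving (CNCHECK is the real command; the exact options you use will depend on your model, so take this as a hedged sketch rather than a recipe):

```
/SOLU
CNCHECK,DETAIL   ! report initial status, gap, penetration, etc. per contact pair
```

The output is organized by contact pair, which is what makes the real constant set numbers reported in Workbench Mechanical (discussed below) so handy for cross-referencing.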
Let’s take a look at that in version 13.0. Here we have inserted a Contact Tool under the Connections branch. It automatically includes the Initial Information sub-branch, with a yellow lightning bolt meaning no initial information has yet been calculated.
By right clicking on the Initial Information sub-branch, we can select Generate Initial Contact Results. The resulting worksheet view provides significant information on all of the defined contact regions. By default the information displayed for each contact region includes the name, contact/target side, type (frictionless, no separation, etc.), status, number of elements contacting, initial penetration, initial gap, geometric penetration, geometric gap, pinball radius, and real constant set number. That last value is useful when reviewing the solver output, which lists contact info by the real constant set number of each contact pair (contact region).
Further, by right clicking on that table we have the option to display some additional data, or remove fields of data. The additional fields that can be added are contact depth, normal stiffness, and tangential stiffness. We can also sort the table by clicking on any of the headings.
The colors in the table indicate four possible status values:
Red = open status for bonded and no-separation contact types
Yellow = open status for types other than bonded and no separation
Orange = closed, but with a large amount of gap or penetration
Gray = inactive, due to MPC or Normal Lagrange formulation or auto asymmetric contact
If we left click on one or more of the contact regions in the table, we can then right click and “Go to Selected Items in Tree.” This is a convenient way to view a particular set of contact regions in the graphics view.
Any social occasion, it’s hello, how do you do
All those introductions, I never miss my cue
So before a question, so before a doubt
My hand moves out and … I have the touch
So, what do we do with this information? Ideally it will prevent us from launching a solution that cranks away for a few hours only to fail due to an improper contact setup. For example, by viewing the initial status for each pair we can verify that regions that should be initially touching are in fact touching as far as ANSYS is concerned. If there is an unexpected initial gap or penetration, corrective action can be taken before initiating the solution by adjusting the contact settings, or even the geometry if needed.
I’m wanting contact
I’m wanting contact with you
Shake those hands, shake those hands
The Contact Tool > Initial Information is another tool we can use to help us obtain accurate solutions in a timely manner. If you haven’t had the opportunity to use it, please try it out. I can’t guarantee that it will trigger fond memories, but maybe you’ll have an enjoyable song playing in your head.
Ted Harris was working on a tech support problem and got this convergence graph. Ouch!
He calls it “Murphy’s Law of Convergence”
Jason called it “Asymptotic Sisyphean Convergence”
“It’s like a pinball machine. Just shake the monitor a little, but not too much; you don’t want to trip the tilt sensor.”
I call it annoying. Share your names or comments for this type of convergence below.