Numerical simulation has been the bulk of my career for 30 years now. I love simulation. It has had a huge positive impact on product development as well as many other industries. In “What is numerical simulation? And why should I care?” I evangelize a bit about my professional passion.
Overcoming convergence difficulties in nonlinear structural problems can be a challenge. I’ve written a couple of times previously about tools that can help us overcome those difficulties:
- Overcoming Convergence Difficulties in ANSYS Workbench Mechanical, Part I: Using Newton-Raphson Residual Information
- Overcoming Convergence Difficulties in ANSYS Workbench Mechanical, Part II: Quick Usage of Mechanical APDL to Plot Distorted Elements
I’m pleased to announce a new tool in the ANSYS Mechanical tool belt in version 17.0.
With version 17.0 of ANSYS we get a new meshing option for structural simulations: Nonlinear Mechanical Shape Checking. This option has been added to the previously available Standard Mechanical Shape Checking and Aggressive Mechanical Shape Checking. For a nonlinear solution in which elements can become significantly distorted, starting with better-shaped elements means they can undergo larger deformations without triggering errors in element formulation, so we may encounter fewer difficulties as the nodes deflect and the elements become distorted. The Nonlinear Mechanical setting is more restrictive on element shapes than the other two settings.
We’ve been recommending the aggressive mechanical setting for nonlinear solutions for quite a while. The new nonlinear mechanical setting is looking even better. Anecdotally, I have one highly nonlinear customer model that reached 95% of the applied load before a convergence failure in version 16.2. That was with the aggressive mechanical shape checking. With 17.0, it reached 99% simply by remeshing with the same aggressive setting and solving. That tells you that work has been going on under the hood with the ANSYS meshing and nonlinear technology. By switching to the new nonlinear mechanical shape checking and solving again, the solution now converges for the full 100% of the applied load.
Here are some statistics using just one measure of the ‘goodness’ of our mesh, element quality. You can read about the definition of element quality in the ANSYS Help, but in summary better shaped elements have a quality value close to 1.0, while poorly shaped elements have a value closer to zero. The following stats are for tetrahedral meshes of a simple turbomachinery blade/rotor sector model (this is not a real part, just something made up) comparing two of the options for element shape checking. The table shows that the new nonlinear mechanical setting produces significantly fewer elements with a quality value of 0.5 or less. Keep in mind this is just one way to look at element quality – other methods or a different cutoff might put things in a somewhat different perspective. However, we can conclude that the Nonlinear Mechanical setting is giving us fewer ‘lower quality’ elements in this case.
| Shape Checking Setting | Total Elements | Elements w/Quality < 0.5 | % of Elements w/Quality < 0.5 |
|---|---|---|---|
Here are images of a portion of the two meshes mentioned above. This is the mesh with the Aggressive Mechanical Shape Checking option set:

And this is the mesh with the Nonlinear Mechanical Shape Checking option set:

The eyeball test on these two meshes confirms fewer elements at the lower quality contour levels.
So, if you are running nonlinear structural models, we urge you to test out the new Nonlinear Mechanical mesh setting. Since it is more restrictive on element shapes, you may see longer meshing times or encounter some difficulties in meshing complex geometry. You may see a benefit in easier to converge nonlinear solutions, however. Give it a try!
The development of small modular nuclear reactors, or SMRs, is a complex task that involves balancing the thermodynamic performance of the entire system. Flownex is the ideal tool for modeling pressure drop [flow] and heat transfer [temperature] for the connected components of a complete system in steady state and transient, and for sizing and optimizing pumps or compressors, pipes, valves, tanks, and heat exchangers.
To highlight this power and capability, PADT and Flownex will be exhibiting at the 2016 SMR conference in Atlanta where we will be available to discuss exciting new Flownex developments in system and subsystem simulations of SMRs. If you are attending this year’s event, please stop by the Flownex booth and say hello to experts from M-Tech and PADT.
If you are not able to make the conference or if you want to know more now, you can view more information from the new Flownex SMR brochure or this video:
Why is Flownex a Great Tool for SMR Design and Simulation?
These developments offer greatly reduced times for performing typical design tasks required for Small Modular Nuclear Reactor (SMR) projects, including sizing of major components, calculating overall plant efficiency, and designing for controllability.
This task involves typical components like the reactor primary loop, intermediate loops, heat exchangers or steam generators and the power generation cycle. Flownex provides for various reactor fuel geometries, various reactor coolant types and various types of power cycles.
Flownex can also be used for determining plant control philosophy. By using a plant simulation model, users can determine the transient response of sensed parameters to changes in input parameters and based on that, set up appropriate pairings for control loops.
For passive safety system design Flownex can be used to optimize the natural circulation loops. The program can calculate the dynamic plant-wide temperatures and pressures in response to various accident scenarios, taking into account decay heat generation, multiple natural circulation loops, transient energy storage and rejection to ambient conditions.
Adding Complexity and Moving
After playing with that block it seems like it may be time to try a more complex geometry. For business banking, I’ve got this key fob that generates a number every thirty seconds that I use for security when I log in. Might as well sort of model that.
So the first thing I do is start up a new model and orient myself on to the sketch plane:
Then I use the line and arc tools to create the basic shape. Play around a bit. I found that a lot of things I had to constrain in other packages are just assumed when you define the geometry. A nice thing is that as you create geometry, it locks to the grid and to other geometry.
I dragged around and typed in values for dimensions to get the shape I wanted. As I was doing it I realized I was in metric. I’m old, I don’t do metric. So I went into File and selected SpaceClaim Options from the bottom of the window. I used the Units screen to set things to Imperial.
This is the shape I ended up with:
I took this and pulled it up and added a couple of radii:
But if I look at the real object, the flat end needs to be round. In another tool, I’d go back to the sketch, modify that line to be an arc, and regen. Well in SpaceClaim you don’t have the sketch, it is gone. Ahhh. Panic. I’ve been doing it that way for 25 some years. OK. Deep breath, just sketch the geometry I need. Click on the three point arc tool, drag over the surface, then click on the first corner, the second, and a third point to define the arc:
Then I used Pull to drag it down, using the Up To icon to lock it to the bottom of the object.
Then I clicked on the edges and pulled some rounds on there:
OK, so the next step in SolidEdge would be to do a thin wall. I don’t see a thin wall right off the top, but Shell looks like what I want, under the Create group on the Design tab. So I spun my model around, clicked on the bottom surface I want to have open, and I have a shell. A thickness of 0.035″ looks good:
My next feature will be the cutout for the view window. What I have not figured out yet is how to lock an object to be symmetrical. Here is why. I sketch my cutout as such, not really paying attention to where it is located. Now I want to move it so that it is centered on the circle.
Instead of specifying constraints, you move the rectangle to be centered. To do that I drag to select the rectangle then click Move. By default it puts the nice Move tool in the middle of the geometry. If I drag on the X direction (Red) you can see it shows the distance from my start.
So I have a couple of options, to center it. The easiest is to use Up To and click the X axis for the model and it will snap right there. The key thing I learned was I had to select the red move arrow or it would also center horizontally where I clicked.
If I want to specify how far away the edge is from the center of the circle, the way I did it is kind of cool. I selected my rectangle, then clicked Move. Then I clicked on the yellow move ball followed by a click on the left line; this snapped the move tool to that line. Next I clicked the little dimension icon to get a ruler, and a small yellow ball showed up. I clicked on this and dragged it to the center of my circle, and now I had a dimension from the circle specified that I could type in.
After playing around a bit, I found a second, maybe more general way to do this. I clicked on the line I want to position. One of the icons over on the left of my screen is the Move Dimension Base Point icon. If you click on that you get another one of those small yellow balls you can move. I dragged it over to the center of the circle and clicked. Then I could specify the distance as 0.75″.
I’ve got the shape I want, so I pull, using the minus icon to subtract, and I get my cutout:
If you look closely, you will notice I put rounds on the corners of the cutout as well; I used Pull again.
The last thing I want to do is create the cutout for where the bank logo goes. It is a concentric circle with an arc on the right side. Sadly, this is the most complex thing I’ve ever sketched in SpaceClaim, so I was a bit afraid. It was actually easy. I made a circle, clicking on the center of the outside arc to make them concentric. The diameter was 1″. Then I made another circle of 2″ centered on the right. To get the shape I wanted, I used the Trim Away command and clicked on the curves I didn’t want. The final image is my cutout.
Now I can do the same thing, subtract it out, put in some rounds, and voilà:
Oh, and I used the built in rendering tool to quickly make this image. I’ll have to dedicate a whole posting to that.
But now that I have my part, it is time to play with move in 3D.
Moving in 3D
Tyler, who is one of our in-house SpaceClaim experts (and younger) pointed out that I need to start thinking about editing the 3D geometry instead of being obsessed with controlling my sketches. So here goes.
If I wanted to change the size of the rectangular cutout in a traditional CAD tool, I’d go edit the sketch. There is no sketch to edit! Fear. Unknown. Change.
So the first thing I’ll do is just move it around. Grab one of the faces and see what happens.
It moves back and forth, pretty simple. The same tools for specifying the start and stop points are available. Now, if I ctrl-click on all four surfaces the whole thing moves. That is pretty cool.
Note: I’m using undo all the time to go back to my un-moved geometry.
Another Note: As you select faces, you have to spin the model around a lot. I use the middle mouse button to do this rather than clicking on the spin Icon and then having to unclick it.
That is enough for this post. More soon.
Learning More About Pulling
As I explored ANSYS SpaceClaim in my first try, it became obvious that a lot of capabilities that live in multiple operations in most CAD systems are all combined into Pull in SpaceClaim. In this posting I want to make sure I really understand all the things Pull can do.
Start with the Manual
Not very exciting or adventurous. But there is so much in this operation that I feel like I will miss something critical if I don’t read up first. It states:
“Use the Pull tool to offset, extrude, revolve, sweep, and draft faces; use it to round, chamfer, extrude, copy, or pivot edges. You can also drag a point with the Pull tool to draw a line on a sketch plane.”
Let’s think about that for a second. What it is basically saying is if I pull on an object of a given dimension, it creates an object that is one higher dimension. Point pulls to a curve, a curve pulls to a face, and a face pulls to a solid. Kind of cool. The big surprise for me is that there is no round or fillet command. To make a round you pull on an edge. This is change.
Pull some Stuff
I started by reading my block with a hole back in.
This fillet pull thing scares me so I thought I’d confront it first. So I selected Pull, then selected an edge:
Then I dragged it away from the block. Nothing. You can’t create a surface that way. Then I dragged in towards the center. A round was created.
If anything, too simple. Back in my day, adding a round to an edge took skill and experience!
So next I think I want to try and change the size of something. Maybe the diameter of the hole. So I select the cylinder’s face. It shows the current radius. I could just change that value:
Instead I drag, and while I do that I noticed that there are two numbers, the current radius and the change to the radius! Kind of cool. No, really useful.
You use Tab to go between them. So I hit Tab once, typed 3, then Tab again (or Return), and I get an 8 mm diameter. I like the visual feedback as well as the ability to enter a specific change value.
Next thing that I felt like doing was rounding a corner. Put a 5mm round on the corner facing out:
So I grabbed the point and dragged, and got a line.
Remember, it only goes up one entity type – point to curve. Not point to surface. So I ctrl-clicked (that is how you select multiple entities) on the three curves that intersect at the corner:
Then I dragged and got my round.
Pulling Along or Around Something
These have all been sort of dragging straight. After looking at the manual text it seems I can revolve and sweep as well with the Pull operation. Cool. But what do I revolve or sweep around and along? Looking at the manual (and, it turns out, the prompt on the screen), I use Alt-clicking to define these control curves. Let’s try it out by revolving something about that line I mistakenly made.
I clicked on one of the curves on the round, then Alt-clicked the line – it turns blue. So there is a nice visual clue that it is different from the source curve. Now I’ve also got spinny icons around the curve rather than pull icons.
So I drag and… a funky revolved surface shows up. I had to spin the model to see it clearly:
Let me stop and share something special about this. In most other CAD tools, this would have involved multiple clicks, maybe even multiple windows. In SpaceClaim, it was Click, Alt-Click, Drag. Nice.
Using the Pop-up Icons
As you play with the model you may start seeing some popup icons near the mouse when you select geometry while using pull. The compound round on the block is complicated, so I spun it around and grabbed just one edge and pulled it in to be a round. Then I clicked on it and got this:
Not only can I put a value in there, I can drop ones I use a lot. I can also change my round to a chamfer, or I can change it to a variable radius. This is worth noting. In most other CAD tools you pick what type of thing you want to do to the edge. Here we start by dragging a round, then specify if it is a chamfer or a variable.
The variable radius is worth digging more in to. I clicked on it and it was not intuitive as to what I should do. Let’s try help. Search on Variable Radius… duh. Click on the arrow that shows up and drag that. There are three arrows. The one in the middle scales both ends the same, the one on either end, well it sets the radius for either end.
Clicking on a control point and hitting Delete gets rid of it.
That’s just one icon that pops up. Playing some more it seems the other icons control how it handles corners and multiple fillets merging… something to look at as I do more complex parts.
The other popup I want to look at is the Up To one. It looks like an arrow on a surface. In other tools I extrude, cut, and revolve to some other piece of geometry all the time. This is the way to do it in SpaceClaim. Let’s say I want to pull a feature to the middle of my hole. First I sketch the outline on a face:
That is enough for pulling and for today. In the next session it may be time to explore the Move command.
This post is a table of contents to a series about ANSYS SpaceClaim. After over 31 years of CAD use, it has become difficult for me to learn new tools. In this series I will share my experience as I explore and learn how to use this fantastic tool.
Have you heard? It’s Pi Day! This post, “5 reasons why nerds celebrate Pi Day” shares the reasons why those of us in the know like Pi day so much.
Thirty-one. That is the number of years that I have been using CAD software. CADAM was the tool, 1985 was the year. As some of our engineers like to point out, they were not even born then.
Twenty-one. That is the number of years that I have been using SolidEdge. This classifies me as an old dog, a very old dog. As PADT has grown, the amount of CAD I do has gone way down, but every once in a while I need to get in there and make some geometry happen. I’m usually in a hurry, so I just pop into SolidEdge and, without really thinking, get things done.
Then ANSYS, Inc. had to go and buy SpaceClaim. It rocks. It is not just another solid modeler, it is a better way to create, repair, and modify CAD. I watch our engineers and customers do some amazing things with it. I’m still faster in SolidEdge because I have more years of practice than they have been adults. But this voice in my head has been whispering “think how fast you would be in SpaceClaim if you took the time to learn it.” Then that other voice (I have several) would say “you’re too old to learn something new, stick with what you know. You might break your hip.”
I had used SpaceClaim a bit when they created a version that worked with ANSYS Mechanical four or five years ago, but nothing serious. Last month I attended some webinars on R17 and saw how great the tool is, and had to accept that it was time. That other voice be damned – this old dog needs to get comfortable and learn this tool. And while I’m at it, it seemed like a good idea to bring some others along with me.
These posts will be a tutorial for others who want to learn SpaceClaim. Unlike those older tools, it does not require five days of structured training with workshops. The program comes with teaching material and tutorials. The goal is to guide the reader through the process, pointing out things I learned along the way, as I learn them.
A link to the table of contents is here.
The product I’m learning is ANSYS SpaceClaim Direct Modeler, a version of SpaceClaim that is built into the ANSYS simulation product suite. There is a standalone SpaceClaim product, but since most of our readers are ANSYS users, I’m going to stick with this version of the tool.
This is what you see when you start it up:
I’ve been using the same basic layout for 20 years, so this is a bit daunting for me. I like to start on a new program by getting to know what different areas of the user interface do. The “Welcome to ANSYS SCDM” kind of anticipates that and gives me some options.
Under “Getting Started” you will see a Quick Reference Card, Introduction, and Tutorials. Open up the Quick Reference and print it out. Don’t bother with it right now, but it will come in handy, especially if you are not going to use SpaceClaim every day.
The Introduction button is a video that gets you oriented with the GUI. Just what we need. It is a lot of information presented fast, so you are not going to learn everything the first viewing, but it will get you familiar with things.
Here I am watching the video. Notice how attentive I am.
Once that is done you should sort of know the basic lay of the land. Kind of like walking into a room and looking around. You know where the couch is, the window, and the shelf on one wall. Now it is time to explore the room.
It is kind of old school, but I like user guides. You can open the SpaceClaim User Guide from the Help line in the “Welcome” window. I leave it open and use it as a reference.
The best place to learn where things are in the interface is to look at the interface section in the manual. It has this great graphic:
The top bit is pretty standard, MS Office-like. You have your application menu, quick access toolbar, and Ribbon Bar. The Ribbon Bar is where all the operations sit. We used to call these commands, but in an object-oriented world they are more properly referred to as operations – you do something to objects, you operate on them. I’ll come back and explore those later. Over on the left there are panels, the thing we need to explore first because they are a view into our model just like the graphics window.
The Structure Panel is key. This is where your model is shown in tree form, just like in most ANSYS products. In SpaceClaim your model is a collection of objects, and they are shown in the tree in the order you added them. You can turn visibility on and off, select objects, and act on objects (using the right mouse button) using the tree. At this point I just had one solid, so pretty boring. I’m sure it will do more later.
Take a look at the bottom of the Structure Panel and you will find some tabs. These give access to Layers, Selection, Groups, and Views. All handy ways to organize and interact with your model. I felt like I needed to come back to these later when I had something to interact with.
TIP: If you are like me, you probably tried to drag these panels around and hosed up your interface. Go to File > SpaceClaim Options (button at the bottom) > Appearance and click the “Reset Docking Layout” button in the upper right of the window. Back to normal.
The options panel changes dynamically as you choose things from the ribbon. If you click on the Design > Line you get this:
And if you click on Pull you get this:
Keeps the clutter down and makes the commands much more capable.
Below that is the Properties Panel. If the Options panel is how you control an operation, then the Properties panel is how you view and control an object in your model. No point in exploring that till we have objects to play with. It does have an appearance tab as well, and this controls your graphics window.
At the bottom is the Status Bar. Now I’m a big believer in status bars, and SpaceClaim uses theirs well. It tells you what is going on and/or what to do next. It also has info on what you have selected and short cut icons for selection and graphics tools. Force yourself to read and use the status bar, big time saver.
The last area of the interface is the graphics window. It of course shows you your geometry, your model. In addition there are floating tools that show up in the graphics window based upon what you are doing. Grrr. #olddogproblem_1. I’m not a fan of these, cluttering up my graphics. But almost all modern interfaces work this way now and I will have to overcome my anger and learn to deal.
For most of the 30+ years that I’ve been doing this CAD thing, I’ve always started with the same object: A block with a hole in it. So that is what we will do next. I have to admit I’m a little nervous.
I’m nervous because I’m a history based guy. If you have used most CAD tools like SolidWorks or ANSYS DesignModeler you know what history based modeling is like. You make a sketch then you add or subtract material and it keeps track of your operations. SpaceClaim is not history based. You operate on objects and it doesn’t track the steps, it just modifies your objects. SolidEdge has done this for over ten years, but I never got up the nerve to learn how to use it. So here goes, new territory.
Things start the same way. But instead of a sketch you make some curves. The screen looks like this when you start:
The default plane is good enough, so I’ll make my curves on that. Under Design > Sketch, click on the Rectangle icon then move your mouse onto the grid. You will notice it snaps to the grid. Click in the upper left and the lower right to make a rectangle, then enter 25mm into each text box, making a 25 x 25 square:
Next we want to make our block. In most tools you would find an extrude operation. But in SpaceClaim they have combined the huge multitude of operations into a few operation types, and then use context or options to give you the functionality you want. That is why the next thing we want to do is click on Pull on the Edit group.
But first, notice something important. If you look at the model tree you will notice that you have only one object in your design, Curves. When you click Pull it gets out of sketch mode and into 3D mode. It also automatically turns your curves into a surface. Look at the tree again.
This is typical of SpaceClaim and why it can be so efficient. It knows what you need to do and does it for you.
Move your mouse over your newly created surface and notice that it will show arrows. Move around and put it over a line; it shows what object will be selected if you click. Go to the inside of your surface and click. It selects the surface and shows you some options right there.
Drag your mouse over the popup menu and you can see that you can set options like add material, subtract material, turn off merging (it will make a separate solid instead of combining with any existing ones), pull both directions, get a ruler, or specify that you are going to pull up to something. For now, we are just going to take the default and pull up.
As you do this the program tells you how far you are pulling. You can type in a value if you want. I decided to be boring and I put in 25 mm. Geometry has been created, no one has been hurt, and I have not lost feeling in any limbs. Yay.
On the status bar, click on the little menu next to the magnifying glass and choose Zoom Extents. That centers the block. Whew. That makes me feel better.
Now for the hole. It is the same process except simpler than in most tools. Click on the circle tool in Sketch. The grid comes back and you can use that to sketch, or you can just click on the top of the block. Let’s do that. The grid snaps up there. To make the circle click in the middle of the grid and drag it out. Put 10 in for the diameter. A circle is born.
Now choose Pull from the Edit section. There is only a Solid now?
SpaceClaim went ahead and split that top surface into two surfaces. Saving a step again.
Click on the circle surface and drag it up and down. If you go up, it adds a cylinder, if you go down, it automatically subtracts. Go ahead and pull it down and through the block and let go. Done. Standard first part created. Use the File>Save command to save your awesome geometry.
That is it for the getting started part. In the next post we will use this geometry to explore SpaceClaim more, now that we have an object to work on. As you were building this you probably saw lots of options and input and maybe even played with some of it. This is just a first look at the power inside SpaceClaim.
Click here for Post 2 where the Pull command is explored.
Hey, did you know that you can access predefined views in both ANSYS Mechanical and DesignModeler using your numeric keypad? You can! Assuming the front view is looking down the +Z-axis at the X-Y plane, here are the various views you can access via your numeric keypad.
For this to work, make sure you’ve clicked within the graphics window itself—not on the top window bar, or one of the tool bars, but right in the region where the model is displayed. You may need to turn off Num Lock, though it works for me on both my laptop and desktop with Num Lock on or off.
With that out of the way, here are the views:
0) Isometric view, a bit more zoomed in than the standard auto-fit isometric view. This is my preferred level of zoom while still being able to see the whole model, to be honest.
1) Front view (looking down the +Z-axis)
2) Bottom view (looking down the -Y-axis)
3) Right view (looking down the +X-axis)
4) Back up to the previous view
5) Isometric view, standard autofit (I don’t like the standard auto-fit—too much empty space. I prefer the keypad 0 level of zoom.)
6) Go forward to the next view in the cache
7) Left view (looking down the -X-axis)
8) Top view (looking down the +Y-axis)
9) Back view (looking down the -Z-axis)
Here’s a handy-dandy chart you can print out to refer to when using the numeric keypad to change views in Mechanical or DesignModeler. Share it with your friends.
In the last post of this series I illustrated how I handled the nested call structure of the procedural interface to ANSYS’ BinLib routines. If you recall, any time you need to extract some information from an ANSYS result file, you have to bracket the function call that extracts the information with a *Begin and *End set of function calls. These two functions setup and tear down internal structures needed by the FORTRAN library. I showed how I used RAII principles in C++ along with a stack data structure to manage this pairing implicitly. However, I have yet to actually read anything truly useful off of the result file! This post centers on the design of a set of C++ iterators that are responsible for actually reading data off of the file. By taking the time to write iterators, we expose the ANSYS RST library to many of the algorithms available within the standard template library (STL), and we also make our own job of writing custom algorithms that consume result file data much easier. So, I think the investment is worthwhile.
If you’ve programmed in C++ within the last 10 years, you’ve undoubtedly been exposed to the standard template library. The design of the library is really rather profound. This image represents the high level design of the library in a pictorial fashion:
On one hand, the library provides a set of generic container objects that offer a robust implementation of many of the classic data structures of computer science. The collection includes arbitrarily sized contiguous arrays (vectors), linked lists, associative arrays (implemented as either binary trees or hash containers), and many more. The set of containers alone makes the STL quite valuable for most programmers.
On the other hand, the library provides a set of generic algorithms that encompass a whole suite of functionality defined in classic computer science. Sorting, searching, rearranging, merging, etc. are just a handful of the algorithms provided. Furthermore, extreme care has been taken in the implementation of these algorithms, such that an average programmer would be hard pressed to produce something safer and more efficient on their own.
However, the real gem of the standard library is its iterators. Iterators bridge the gap between the generic containers on one side and the generic algorithms on the other. Need to sort a vector of integers, or a double-ended queue of strings? If so, you just call the same sort function and pass it a set of iterators. These iterators “know” how to traverse their parent container. (Remember, containers are the data structures.)
So, what if we could write a series of iterators to access data from within an ANSYS result file? What would that buy us? Well, depending upon which concepts our iterators model, having them available would open up access to at least some of the STL suite of algorithms. That’s good. Furthermore, having iterators defined would open up the possibility of providing range objects. If we can provide range objects, then all of the sudden range based for loops are possible. These types of loops are more than just syntactic sugar. By encapsulating the bounds of iteration within a range, and by using iterators in general to perform the iteration, the burden of a correct implementation is placed on the iterators themselves. If you spend the time to get the iterator implementation correct, then any loop you write after that using either the iterators or better yet the range object will implicitly be correct from a loop construct standpoint. Range based for loops also make your code cleaner and easier to reason about locally.
Now for the downside… Iterators are kind of hard to write. The price for the flexibility they offer is paid in the amount of code it takes to write them. Again, though, the idea is that you (or, better yet, somebody else) write these iterators once, and then you have them available to use from that point forward.
Because of their flexibility, standard conformant iterators come in a number of different flavors. In fact, they are very much like an ice cream sundae where you can pick and choose what features to mix in or add on top. This is great, but again it makes implementing them a bit of a chore. Here are some of the design decisions you have to make when implementing an iterator:
| Decision | Options | Choice for RST Reader Iterators |
| --- | --- | --- |
| Dereference data type | Anything you want | Special structs for each type of iterator |
| Iteration category | 1. Forward iterator<br>2. Single pass iterator<br>3. Bidirectional iterator<br>4. Random access iterator | Forward, single pass |
Iterators syntactically function much like pointers in C or C++. That is, like a pointer you can increment an iterator with the ++ operator, you can dereference an iterator with the * operator and you can compare two iterators for equality. We will talk more about incrementing and comparisons in a minute, but first let’s focus on dereferencing. One thing we have to decide is what type of data the client of our iterator will receive when they dereference it. My choice is to return a simple C++ structure with data members for each piece of data. For example, when we are iterating over the nodal geometry, the RST file contains the node number, the nodal coordinates and the nodal rotations. To represent this data, I create a structure like this:
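A minimal sketch of what such a structure might look like (the field names here are assumptions on my part, not the actual library API):

```cpp
#include <cstdint>

// Hypothetical sketch: one record of nodal geometry data as read off the
// RST file -- the node number, coordinates, and rotations described above.
// Field names are assumptions, not the original code's declarations.
struct NodalCoordinateData {
    int32_t node_num;          // node number stored on the file
    double  x, y, z;           // nodal coordinates
    double  thxy, thyz, thzx;  // nodal rotations
};
```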
I think this is pretty self-explanatory. Likewise, if we are iterating over the element geometry section of an RST file, there is quite a bit of useful information for each element. The structure I use in that case looks like this:
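In sketch form, such an element record might carry the element's attribute ids and its nodal connectivity (again, the field names are my assumptions rather than the original declarations):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: one record of element geometry data from the RST
// file. Field names and the exact attribute set are assumptions.
struct ElementData {
    int32_t element_num;          // element number
    int32_t material_id;          // material attribute
    int32_t element_type_id;      // element type attribute
    int32_t real_constant_id;     // real constant set
    int32_t coordinate_system_id; // element coordinate system
    std::vector<int32_t> node_ids; // nodal connectivity
};
```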
Again, pretty self-explanatory. So, when I’m building a node geometry iterator, I’m going to choose the NodalCoordinateData structure as my dereference type.
The next decision I have to make is what “kind” of iterator I’m going to create. That is, what types of “iteration” will it support? The C++ standard defines a variety of iterator categories. You may be wondering why anyone would ever care about an “iteration category.” Well, the reason is fundamental to the design of the STL. Remember that the primary reason iterators exist is to provide a bridge between generic containers and generic algorithms. However, any one particular algorithm may impose certain requirements on the underlying iterator for the algorithm to function appropriately.
Take the algorithm “sort” for example. There are, in fact, lots of different “sort” algorithms. The most efficient versions of the “sort” algorithm require that an iterator be able to jump around randomly in constant time within the container. If the iterator supports jumping around (a.k.a. random access) then you can use it within the most efficient sort algorithm. However, there are certain kinds of iterators that don’t support jumping around. Take a linked list container as an example. You cannot randomly jump around in a linked list in constant time. To get to item B from item A you have to follow the links, which means you have to jump from link to link to link, where each jump takes some amount of processing time. This means, for example, there is no easy way to cut a linked list exactly in half even if you know how many items in total are in the list. To cut it in half you have to start at the beginning and follow the links until you’ve followed size/2 number of links. At that point you are at the “center” of the list. However, with an array, you simply choose an index equal to size/2 and you automatically get to the center of the array in one step. Many sort algorithms, as an example, obtain their efficiency by effectively chopping the container into two equal pieces and recursively sorting the two pieces. You lose all that efficiency if you have to walk out to the center.
If you look at the “types” of iterators in the table above you will see that they build upon one another. That is, at the lowest level, I have to answer the question, can I just move forward one step? If I can’t even do that, then I’m not an iterator at all. After that, assuming I can move forward one step, can I only go through the range once, or can I go over the range multiple times? If I can only go over the range once, I’m a single pass iterator. Truthfully, the forward iterator concept and the single pass iterator concept form level 1A and 1B of the iterator hierarchy. The next higher level of functionality is a bidirectional iterator. This type of iterator can go forward and backwards one step in constant time. Think of a doubly linked list. With forward and backward links, I can go either direction one step in constant time. Finally, the most flexible iterator is the random access iterator. These are iterators that really are like raw pointers. With a pointer you can perform pointer arithmetic such that you can add an arbitrary offset to a base pointer and end up at some random location in a range. It’s up to you to make sure that you stay within bounds. Certain classes of iterators provide this level of functionality, namely those associated with vectors and deques.
So, the question is what type of iterator should we support? Perusing the FORTRAN code shipped with ANSYS, there doesn’t appear to be an inherent limitation within the functions themselves that would preclude random access. But, my assumption is that the routines were designed to access the data sequentially. (At least, if I were the author of the functions that is what I would expect clients to do.) So, I don’t know how well they would be tested regarding jumping around. Furthermore, disk controllers and disk subsystems are most likely going to buffer the data as it is read, and they very likely perform best if the data access is sequential. So, even if it is possible to randomly jump around on the result file, I’m not sold on it being a good idea from a performance standpoint. Lastly, I explicitly want to keep all of the data on the disk, and not buffer large chunks of it into RAM within my library. So, I settled on expressing my iterators as single pass, forward iterators. These are fairly restrictive in nature, but I think they will serve the purpose of reading data off of the file quite nicely.
Regarding my choice to not buffer the data, let me pause for a minute and explain why I want to keep the data on the disk. First, in order to buffer the data from disk into RAM you have to read the data off of the disk one time to fill the buffer. So, that process automatically incurs one disk read. Therefore, if you only ever need to iterate over the data once, perhaps to load it into a more specialized data structure, buffering it first into an intermediate RAM storage will actually slow you down, and consume more RAM. The reason for this is that you would first iterate over the data stored on the disk and read it into an intermediate buffer. Then, you would let your client know the data is ready and they would iterate over your internal buffer to access the data. They might as well just read the data off the disk themselves! If the end user really wants to buffer the data, that’s totally fine. They can choose to do that themselves, but they shouldn’t have to pay for it if they don’t need it.
Finally, we are ready to implement the iterators themselves. To do this I rely on a very high quality open source library called Boost. Boost has within it a library called iterator_facade that takes care of supplying most all of the boilerplate code needed to create a conformant iterator. Using it has proven to be a real time saver. To define the actual iterator, you derive your iterator class from iterator_facade and pass it a few template parameters. One is the category defining the type of iterator you are creating. Here is the definition for the nodal geometry iterator:
You can see that there are a few private functions here that actually do all of the work. The function “increment” is responsible for moving the iterator forward one spot. The function “equal” is responsible for determining if two different iterators are in fact equal. And the function “dereference” is used to return the data associated with the current iteration spot. You will also notice that I locally buffer the single piece of data associated with the current location in the iteration space inside the iterator. This is stored in the pData member variable. I also locally store the current index. Here are the implementations of the functions just mentioned:
First you can see that testing iterator equality is easy. All we do is just look to see if the two iterators are pointing to the same index. If they are, we define them as equal. (Note, an important nuance of this is that we don’t test to see if their buffered data is equal, just their index. This is important later on.) Likewise, increment is easy to understand as well. We just increase the index by one, and then buffer the new data off of disk into our local storage. Finally, dereference is easy too. All we do here is just return a reference to the local data store that holds this one node’s data. The only real work occurs in the readData() function. Inside that function you will see the actual call to the CResRdNode() function. We pass that function our current index and it populates an array of 6 doubles with data and returns the actual node number. After we have that, we simply parse out of that array of 6 doubles the coordinates and rotations, storing them in our local storage. That’s all there is to it. A little more work, but not bad.
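As a dependency-free sketch of those pieces: the real class derives from boost::iterator_facade and readData() calls CResRdNode(), but here a stub stands in for the file read, and all names are assumptions on my part. The three core operations plus readData() look roughly like this:

```cpp
#include <array>

// Hypothetical stand-in for CResRdNode(): fills 6 doubles for node `index`
// and returns the node number. In the real code this reads the RST file.
static int FakeResRdNode(int index, std::array<double, 6>& v) {
    for (int i = 0; i < 6; ++i) v[i] = index * 10.0 + i;
    return index; // pretend node numbers equal indices
}

struct NodalCoordinateData {
    int node_num;
    double x, y, z;
    double thxy, thyz, thzx;
};

// Sketch of the single pass, forward node iterator. The real class derives
// from boost::iterator_facade, which generates the operator overloads from
// these three operations.
class NodeIterator {
public:
    explicit NodeIterator(int index, bool readFirst = false) : m_index(index) {
        if (readFirst) readData();
    }
    void increment() { ++m_index; readData(); }
    bool equal(const NodeIterator& other) const {
        // Only the index participates in equality -- not the buffered data.
        return m_index == other.m_index;
    }
    const NodalCoordinateData& dereference() const { return m_data; }
private:
    void readData() {
        std::array<double, 6> v{};
        m_data.node_num = FakeResRdNode(m_index, v);
        m_data.x = v[0];    m_data.y = v[1];    m_data.z = v[2];
        m_data.thxy = v[3]; m_data.thyz = v[4]; m_data.thzx = v[5];
    }
    int m_index;                // current position in the iteration space
    NodalCoordinateData m_data; // locally buffered record for this position
};
```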
With this handful of operations, the boost iterator_facade class can actually build up a fully conformant iterator with all the proper operator overloads, etc… It just works. Now that we have iterators, we need to provide a “begin” and “end” function just like the standard containers do. These functions should return iterators that point to the beginning of our data set and to one past the end of our data set. You may be thinking to yourself: wait a minute, how do we provide an “end” iterator without reading the whole set of nodes? The reality is, we just need to provide an iterator that “equality tests” as equal to the end of our iteration space. What does that mean? Well, it means we just need to provide an iterator that, when compared to another iterator which has walked all the way to the end, passes the “equal” test. Look at the “equal” function above. What do two iterators need to have in common to be considered equal? They need to have the same index. So, the “end” iterator simply has an index equal to one past the end of the iteration space. We know how big our iteration space is because that is one of the pieces of metadata supplied by those ResRd*Begin functions. So, here are our begin/end functions to give us a pair of conformant iterators.
Notice that the nodes_end() function creates a NodeIterator and passes it an index that is one past the maximum number of nodes that have coordinate data stored on file. You will also notice that it does not have a second Boolean argument associated with it. I use that second argument to determine if I should immediately read data off of the disk when the iterator is constructed. For the begin iterator, I need to do that. For the end iterator, I don’t actually need to cache any data. In fact, no data for that node is defined on disk. I just need a sentinel iterator that is one past the iteration space.
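In sketch form (with assumed names and 1-based indexing, and with the file reads elided), the begin/end pair amounts to little more than this:

```cpp
// Sketch of the begin/end pair. The end iterator is a pure sentinel: it
// holds index maxNodes + 1 and never reads data (no second argument).
class NodeIterator {
public:
    explicit NodeIterator(int index, bool readFirst = false) : m_index(index) {
        if (readFirst) { /* buffer the first record from disk here */ }
    }
    NodeIterator& operator++() { ++m_index; /* read next record */ return *this; }
    bool operator==(const NodeIterator& o) const { return m_index == o.m_index; }
    bool operator!=(const NodeIterator& o) const { return !(*this == o); }
private:
    int m_index;
};

class ResultFileReader {
public:
    explicit ResultFileReader(int numNodes) : m_numNodes(numNodes) {}
    NodeIterator nodes_begin() { return NodeIterator(1, true); }
    NodeIterator nodes_end()   { return NodeIterator(m_numNodes + 1); }
private:
    int m_numNodes; // metadata supplied by the ResRd*Begin functions
};
```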
So, there you have it. Iterators are defined that implicitly walk over the RST file, pulling data off as needed and locally buffering one record at a time. These iterators are standard conformant and thus can be used with any STL algorithm that accepts a single pass, read only, forward iterator. They are efficient in time and storage. There is, though, one last thing that would be nice: a range object, so that we can have our cake and eat it too. That is, so we can write these C++11 range based for loops. Like this:
To do that we need a bit of template magic. Consider this template and type definition:
There is a bit of machinery that is going on here, but the concept is simple. I just want the compiler to stamp out a new type that has a “begin()” and “end()” member function that actually call my “nodes_begin()” and “nodes_end()” functions. That is what this template will do. I can also create a type that will call my “elements_begin()” and “elements_end()” function. Once I have those types, creating range objects suitable for C++11 range based for loops is a snap. You just make a function like this:
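A sketch of that machinery, using a toy reader whose “nodes” are just ints (all names here are my assumptions; the real code iterates RST records, not ints):

```cpp
// A generic range type whose begin()/end() forward to a pair of member
// functions on the reader. The compiler stamps out one such type per
// begin/end pair (nodes, elements, ...).
template <typename Reader,
          typename Iterator,
          Iterator (Reader::*BeginFn)(),
          Iterator (Reader::*EndFn)()>
class ReaderRange {
public:
    explicit ReaderRange(Reader* reader) : m_reader(reader) {}
    Iterator begin() const { return (m_reader->*BeginFn)(); }
    Iterator end()   const { return (m_reader->*EndFn)(); }
private:
    Reader* m_reader;
};

// Toy reader standing in for the RST reader.
class ToyReader {
public:
    const int* nodes_begin() { return m_nodes; }
    const int* nodes_end()   { return m_nodes + 3; }
private:
    int m_nodes[3] = {10, 20, 30};
};

using NodeRange = ReaderRange<ToyReader, const int*,
                              &ToyReader::nodes_begin,
                              &ToyReader::nodes_end>;

// Mirrors the original's nodes() function: build a range over a reader.
NodeRange nodes(ToyReader& reader) { return NodeRange(&reader); }
```

With the range in hand, `for (int n : nodes(reader)) { … }` compiles directly, since the range type exposes the begin()/end() pair that range based for loops require.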
This function creates one of these special range types and passes in a pointer to our RST reader. When the compiler then sees this code:
It sees a range object as the return type of the “nodes()” function. That range object is compatible with the semantics of range based for loops, and therefore the compiler happily generates code for this construction.
Now, after all of this work, the client of the RST reader library can open a result file, select something of interest, and start looping over the items in that category; all in three lines of code. There is no need to understand the nuances of the binlib functions. But best of all, there is no appreciable performance hit for this abstraction. At the end of the day, we’re not computationally doing anything more than what a raw use of the binlib functions would perform. But, we’re adding a great deal of type safety, and, in my opinion, readability to the code. But, then again, I’m only slightly partial to C++. Your mileage may vary…
In the last post in this series I illustrated how you can interface C code with FORTRAN code so that it is possible to use the ANSYS, Inc. BinLib routines to read data off of an ANSYS result file within a C or C++ program. If you recall, the routines in BinLib are written in FORTRAN, and my solution was to use the interoperability features of the Intel FORTRAN compiler to create a shim library that sits between my C++ code and the BinLib code. In essence, I replicated all of the functions in the original BinLib library, but gave them a C interface. I call this library CBinLib.
You may remember from the last post that I wanted to add a more C++ friendly interface on top of the CBinLib functions. In particular, I showed this piece of code, and alluded to an explanation of how I made this happen. This post covers the first half of what it takes to make the code below possible.
What you see in the code above is the use of C++11 range based “for loops” to iterate over the nodes and elements stored on the result file. To accomplish this we need to create conformant STL style iterators and ranges that iterate over some space. I’ll describe the creation of those in a subsequent post. In this post, however, we have to tackle a different problem. The solution to the problem is hidden behind the “select” function call shown in the code above. In order to provide some context for the problem, let me first show you the calling sequence for the procedural interface to BinLib. This is how you would use BinLib if you were programming in FORTRAN or if you were directly using the CBinLib library described in the previous post. Here is the recommended calling sequence:
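The skeleton below sketches that sequence in C++ against simplified stand-ins for the CBinLib routines (the real functions take many more arguments and return metadata; here each stub just records that it was called, so the nesting is visible):

```cpp
#include <string>
#include <vector>

// Simplified stand-ins for the CBinLib routines, so the skeleton is
// self-contained. Signatures are NOT the real ones.
static std::vector<std::string> g_calls;
static void CResRdBegin()     { g_calls.push_back("ResRdBegin"); }
static void CResRdGeomBegin() { g_calls.push_back("ResRdGeomBegin"); }
static void CResRdNodeBegin() { g_calls.push_back("ResRdNodeBegin"); }
static void CResRdNode(int)   { g_calls.push_back("ResRdNode"); }
static void CResRdNodeEnd()   { g_calls.push_back("ResRdNodeEnd"); }
static void CResRdGeomEnd()   { g_calls.push_back("ResRdGeomEnd"); }
static void CResRdEnd()       { g_calls.push_back("ResRdEnd"); }

// The recommended calling sequence: every *Begin is paired with its *End,
// and the pairs nest.
static void readAllNodes(int numNodes) {
    CResRdBegin();          // open the file, get general metadata
    CResRdGeomBegin();      //   enter the geometry section
    CResRdNodeBegin();      //     enter the nodal data section
    for (int i = 1; i <= numNodes; ++i)
        CResRdNode(i);      //       read one node's data
    CResRdNodeEnd();        //     leave the nodal data section
    CResRdGeomEnd();        //   leave the geometry section
    CResRdEnd();            // close the file
}
```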
You can see the design pattern clearly in this skeleton code. You start by calling ResRdBegin, which gives you a bunch of useful data about the contents of the file in general. Then, if you want to read geometric data, you need to call ResRdGeomBegin, which gives you a little more useful metadata. After that, if you want to read the nodal coordinate data you call ResRdNodeBegin followed by a loop over nodes calling ResRdNode, which gives you the actual data about individual nodes, and then finally call ResRdNodeEnd. If at that point you are done with reading geometric data, you then call ResRdGeomEnd. And, if you are done with the file you call ResRdEnd.
Now, one thing jumps off the page immediately. It looks like it is important to pair up the *Begin and *End calls. In fact, if you peruse the ResRd.F FORTRAN file included with the ANSYS distribution you will see that in many of the *End functions, they release resources that were allocated and set up in the corresponding *Begin function.
So, if you forget to call the appropriate *End, you might leak resources. And, if you forget to call the appropriate *Begin, things might not be setup properly for you to iterate. Therefore, in either case, if you fail to call the right function, things are going to go badly for you.
This type of design pattern where you “construct” some scaffolding, do some work, and then “destruct” the scaffolding is ripe for abstracting away in a C++ type. In fact, one of the design principles of C++ known as RAII (Resource Acquisition Is Initialization) maps directly to this problem. Imagine that we create a class in which in the constructor of the class we call the corresponding *Begin function. Likewise, in the destructor of the class we call the corresponding *End function. Now, as long as we create an instance of the class before we begin iterating, and then hang onto that instance until we are done iterating, we will properly match up the *Begin, *End calls. All we have to do is create classes for each of these function pairs and then create an instance of that class before we start iterating. As long as that instance is still alive until we are finished iterating, we are good.
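As a minimal sketch of the idea (stub functions stand in for the real paired CBinLib calls, and the class name is my own):

```cpp
// Stubs standing in for a real *Begin/*End pair; the counter lets us
// observe that the calls always balance.
static int g_geomDepth = 0;
static void CResRdGeomBegin() { ++g_geomDepth; }
static void CResRdGeomEnd()   { --g_geomDepth; }

// RAII wrapper: constructing it calls the *Begin, destroying it calls the
// matching *End. As long as the object outlives the iteration, the pair
// is guaranteed to match up -- even if an exception is thrown.
class GeometrySection {
public:
    GeometrySection()  { CResRdGeomBegin(); }
    ~GeometrySection() { CResRdGeomEnd(); }
    GeometrySection(const GeometrySection&) = delete;
    GeometrySection& operator=(const GeometrySection&) = delete;
};
```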
Ok, so abstracting the *Begin and *End function pairs away into classes is nice, but how does that actually help us? You would still have to create an instance of the class, hold onto it while you are iterating, and then destroy it when you are done. That sounds like more work than just calling the *Begin, *End directly. Well, at first glance it is, but let’s see if we can use the paradigm more intelligently. For the rest of this article, I’ll refer to these types of classes as BeginEnd classes, though I call them something different in the code.
First, what we really want is to fire and forget with these classes. That is, we want to eliminate the need to manually manage the lifetime of these BeginEnd classes. If I don’t accomplish this, then I’ve failed to reduce the complexity of the *Begin and *End requirements. So, what I would like to do is to create the appropriate BeginEnd class when I’m ready to extract a certain type of data off of the file, and then later on have it magically delete itself (and thus call the appropriate *End function) at the right time. Now, one more complication. You will notice that these *Begin and *End function pairs are nested. That is, I need to call ResRdGeomBegin before I call ResRdNodeBegin. So, not only do I want a solution that allows me to fire and forget, but I want a solution that manages this nesting.
Whenever you see nesting, you should think of a stack data structure. To increase the nesting level, you push an item onto the stack. To decrease the nesting level, you pop an item off of the stack. So, we’re going to maintain a stack of these BeginEnd classes. As an added benefit, when we introduce a container into the design space, we’ve introduced something that will control object lifetime for us. So, this stack is going to serve two functions for us. It’s going to ensure we call the *Begin’s and *End’s in the right nested order, and second, it’s going to maintain the BeginEnd object lifetimes for us implicitly.
To show some code, here is the prototype for my pure virtual class that serves as a base class for all of the BeginEnd classes. (In my code, I call these classes FileSection classes)
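A sketch of that base class might look like the following (the names follow the description in the text; the exact original declaration may differ):

```cpp
// Enum over file section types -- used instead of RTTI to identify the
// nesting level a section object represents. The enumerator list here is
// an assumption; the real code likely has more levels.
enum class ResultFileSectionLevel { Begin, Geometry, Nodes, Elements };

// Pure virtual base class for all BeginEnd (FileSection) classes. Each
// derived class calls its *End function in its destructor.
class ResultFileSection {
public:
    virtual ~ResultFileSection() = default;
    virtual ResultFileSectionLevel getLevel() const = 0;
};
```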
You can see that it is an interface class by noting the pure virtual function getLevel. You will also notice that this function returns a ResultFileSectionLevel. This is just an enum over file section types. I like to use an enum as opposed to relying on RTTI. Now, for each BeginEnd pair, I create an appropriate derived class from this base ResultFileSection class. Within the destructor of each of the derived classes I call the appropriate *End function. Finally, here is my stack data structure definition:
You can see that it is just a std::stack holding objects of the type SectionPtrT. A SectionPtrT is a std::unique_ptr for objects convertible to my base section class. This will enable the stack to hold polymorphic data, and the std::unique_ptr will manage the lifetime of the objects appropriately. That is, when we pop an object off of the stack, the std::unique_ptr will make sure to call delete, which will call the destructor. The destructor calls the appropriate *End function, as we’ve mentioned before.
At this point, we’ve reduced our problem to managing items on a stack. We’re getting closer to making our lives easier, trust me! Let’s look at a couple of different functions to show how we pull these pieces together. The first function is called reduceToLevelOrBegin(level). See the code below:
The operation of this function is fairly straightforward, yet it serves an integral role in our BeginEnd management infrastructure. What this function does is it iteratively pops items off of our section stack until it either reaches the specified level, or it reaches the topmost ResRdBegin level. Again, through the magic of C++ polymorphism, when an item gets popped off of the stack, eventually its destructor is called and that in turn calls the appropriate *End function. So, what this function accomplishes is it puts us at a known level in the nested section hierarchy and, while doing so, ensures that any necessary *End functions are called appropriately to free resources on the FORTRAN side of things. Notice that all of that happens automatically because of the type system in C++. By popping items off of the stack, I implicitly clean up after myself.
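Putting those pieces together, a self-contained sketch of the stack and the reduce function might look like this (names are assumptions; the destructors record which *End call they would make, so the unwinding order is observable):

```cpp
#include <memory>
#include <stack>
#include <string>
#include <vector>

enum class ResultFileSectionLevel { Begin, Geometry, Nodes };

static std::vector<std::string> g_ends; // records which *End calls fired

class ResultFileSection {
public:
    virtual ~ResultFileSection() = default;
    virtual ResultFileSectionLevel getLevel() const = 0;
};

class GeometrySection : public ResultFileSection {
public:
    ~GeometrySection() override { g_ends.push_back("ResRdGeomEnd"); }
    ResultFileSectionLevel getLevel() const override {
        return ResultFileSectionLevel::Geometry;
    }
};

class NodeSection : public ResultFileSection {
public:
    ~NodeSection() override { g_ends.push_back("ResRdNodeEnd"); }
    ResultFileSectionLevel getLevel() const override {
        return ResultFileSectionLevel::Nodes;
    }
};

using SectionPtrT   = std::unique_ptr<ResultFileSection>;
using SectionStackT = std::stack<SectionPtrT>;

// Pop sections until we reach the requested level, or empty the stack
// (which corresponds to the topmost ResRdBegin level). Each pop destroys
// a section, which calls the matching *End function.
static void reduceToLevelOrBegin(SectionStackT& s, ResultFileSectionLevel lvl) {
    while (!s.empty() && s.top()->getLevel() != lvl)
        s.pop();
}
```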
The second function to consider is one of a family of similar functions. We will look at the function that prepares the result file reader to read element geometry data off of the file. Here it is:
You will notice that we start by reducing the nested level to either the “Geometry” level or the “Begin” level. Effectively, what this does is unwind any nesting we have done previously. This is the machinery that makes “fire and forget” possible. That is, whenever we previously requested something to be read off of the result file, we pushed onto the stack a series of objects representing the nesting level needed to read the data in question. Now that we wish to read something else, we unwind any previously existing nested Begin calls before doing so. That is, we clean up after ourselves only when we ask to read a different set of data. By waiting until we ask to read some new set of data to unwind the stack, we implicitly allow the lifetime of our BeginEnd classes to live beyond iteration.
At this point we have the stack in a known state; either it is at the Begin level or the Geometry level. So, we simply call the appropriate *Begin functions depending on what level we are at, and push the appropriate type of BeginEnd objects onto the stack to record our traversal for later cleanup. At this point, we are ready to begin iterating. I’ll describe the process of creating iterators in the next post. Clearly, there are lots of different select*** functions within my class. I have chosen to make all of them private and have a single select function that takes an enum descriptor of what to select and some additional information for specifying result data.
One last thing to note with this design. Closing the result file is easy. All that is required is that we simply fully unwind the stack. That will ensure all of the appropriate FORTRAN cleanup code is called in the right order. Here is the close function:
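In sketch form, the close function is just a full unwind of the section stack (a live-object counter stands in here for the real *End calls):

```cpp
#include <memory>
#include <stack>

// Counter standing in for the FORTRAN-side cleanup, so we can observe that
// every section is destroyed.
static int g_liveSections = 0;

struct Section {
    Section()  { ++g_liveSections; }
    ~Section() { --g_liveSections; } // real code: call the matching *End here
};

using StackT = std::stack<std::unique_ptr<Section>>;

// Sketch of close(): popping every section runs each destructor, innermost
// first, which calls the appropriate *End functions in the right order.
static void closeFile(StackT& sections) {
    while (!sections.empty())
        sections.pop();
}
```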
As you can see, cleanup is handled automatically. So, to summarize, we use a stack of polymorphic data to manage the BeginEnd function calls that are necessary in the FORTRAN interface. By doing this we ensure a level of safety in our class design. Also, this moves us one step closer to this code:
In the next post I will show how I created iterators and range objects to enable clean for loops like the ones shown above.
What is xRDP? Taken directly from the xRDP website:
“Xrdp is the main server accepting connections from RDP clients. Xrdp contains the RDP, security, MCS, ISO, and TCP layers, a simple window manager and a few controls. It’s a multi threaded single process server. It is in this process where the central management of the sessions is maintained. Central management includes shadowing a session and administrating pop ups to users. Xrdp is controlled by the configuration file xrdp.ini.
RDP has 3 security levels between the RDP server and RDP client: low, medium and high. Low is 40 bit, data from the client to server is encrypted; medium is 40 bit encryption both ways; and high is 128 bit encryption both ways. Xrdp currently supports all 3 encryption levels via the xrdp.ini file. RSA key exchange is used with both client and server randoms to establish the RC4 keys before the client connects.
Modules are loaded at runtime to provide the real functionality. Many different modules can be created to present one of many different desktops to the user. The modules are loadable to conserve memory and support both GPL and non GPL modules.
Multi threaded to provide optimal user performance. One client can’t slow them all down. One multi threaded process is also required for session shadowing with any module. The module doesn’t have to consider shadowing, the xrdp server does it. For example, you could shadow a VNC, RDP or a custom module session all from the same shadowing tool.
Built-in window manager for sending pop ups to any user running any module. Can also be used to provide connection errors or prompts.
Libvnc, a VNC module for xrdp. Libvnc provides a connection to VNC servers. It’s a simple client supporting only a few VNC encodings (raw, cursor, copyrect). Emphasis on being small and fast. Normally, the xrdp server and the Xvnc server are on the same machine, so bitmap compression encodings would only slow down the session.
Librdp, an RDP module for xrdp. Librdp provides a connection to RDP servers. It only supports RDP4 connections currently.
Sesman, the session manager. Sesman is xrdp’s session manager. Xrdp connects to sesman to verify the user name / password, and sesman also starts the user session if the credentials are ok. This is a multi process / Linux only session manager. Sessions can be started or viewed from the command line via sesrun.”
STEP 1 – Setup xRDP on your CUBE Linux Compute Server:
- Add the following repository for the needed extra packages for enterprise linux
- rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
I am using the platform CentOS release 6.7 – 64 Bit for this installation of xRDP
- Install xRDP
- yum install xrdp tigervnc-server -y
- Start xRDP
- service xrdp start
- Enter the following commands to ensure that the xRDP services restart on a reboot
- chkconfig xrdp on
- chkconfig vncserver on
- Add the ANSYS linux users into the following groups:
- users & video
- Now try it out!
STEP 2.0.0 (optional) – How To Setup xRDP Same Session Remote Desktop on your CUBE Linux Compute Server:
2.0.1) Login as root via SSH
2.0.2) cd /etc/xrdp/
2.0.3) As the root user, open and edit the xrdp.ini file. For same session sharing, locate and modify the last line of the xrdp.ini configuration file.
- Change from port=-1 to port=ask-1
2.0.4) Save the xrdp.ini and restart the xrdp service (command is below)
- service xrdp restart
2.0.5) Next, for MPI local or distributed users, edit the following files:
- cd /etc/pam.d/
- edit the file xrdp-sesman
- add –> session required pam_limits.so
2.0.6) For users of xRDP same session management:
- cd /etc/pam.d/
- edit the common-session file
- add –> session required pam_limits.so
STEP 2.1.0 – Open the Microsoft Remote Desktop client on your Windows Machine.
- Try logging in from two machines or two sessions of Microsoft Remote Desktop
- Enter the hostname or IP address of your CUBE Linux Compute Server
(see screen capture below)
STEP 2.2.0 – Pay Attention to a few things while logging in.
- For you the originator of the RDP desktop session:
- As you are logging into the Linux machine, note the port number used for your login as the login window script executes.
- PORT 5910
(see screen capture below)
- Login! The new xRDP console session has been created on the Linux machine.
- This session is the remote desktop session that you created so that you can share the same desktop with another user.
STEP 2.3.0 – Login process for you the secondary RDP user:
- As you begin the remote desktop login process, enter the port that was provided or created for the primary user. Our primary user noted and informed you that the port number for his RDP session was 5910.
- Enter this port number into your session window when entering your login information via RDP:
(see screen capture below)
- Click OK to login to the desktop
- Success! You are now both logged into the same RDP session. Both users will see the same screen and the cursor moving as it is controlled by one user or the other.
(see screen capture below)
Final Thoughts pertaining to xRDP/remote desktop connections and screen sharing on 64-bit Linux.
Other/secondary users who do not need to log in to an already running remote desktop session: do not enter a port number, leave the port setting as -1. Logging in this way ensures you will have a unique, NONSHARED remote desktop experience.
For the primary user, the originator of the xRDP session: do not use SYSTEM –> LOGOUT to close the RDP session. Simply minimize the session or click the X to close your window.
(see screen capture below)
Are you aware that ANSYS recently released ANSYS 17.0? For you ANSYS CFD users, check out the beautiful ANSYS FLUENT 17.0 GUI for Linux. If you look closely enough at the screen capture you will notice that I was running one of the ANSYS FLUENT benchmarks.
The External Flow Over a Truck Body with a Polyhedral Mesh (truck_poly_14m), an ANSYS FLUENT benchmark.
(see screen capture below)
References/Notes/Performance Tuning for xRDP:
xRDP website – xRDP
Verify that you have the latest NVIDIA graphics card driver, especially if you are having OpenGL issues:
- Not sure what Nvidia graphics card you have?
- Try running this command –> lspci -k | grep -A 2 -E “(VGA|3D)”
- If you already have the NVIDIA graphics card driver installed but are unsure which driver version is currently installed:
- Try running this command –> nvidia-smi
- Direct rendering –> Yes or No
- glxinfo|head -n 25
- glxinfo | grep OpenGL
Uh oh! If the output of these commands looks something like what you see below:
$ glxinfo|head -n 25
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
Error: couldn’t find RGB GLX visual or fbconfig
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
Xlib: extension “GLX” missing on display “:11.0”.
name of display: :11.0
Then please add this information to the end of the xorg.conf file and reboot the server.
- The xorg.conf file is located in: /etc/X11
- Section “Module”
Other Features of the NVIDIA Installer
Without options, the .run file executes the installer after unpacking it. The installer can be run as a separate step in the process, or can be run at a later time to get updates, etc. Some of the more important command-line options of nvidia-installer are:
- --uninstall: During installation, the installer will make backups of any conflicting files and record the installation of new files. The uninstall option undoes an install, restoring the system to its pre-install state.
- --latest: Connect to NVIDIA's FTP site, and report the latest driver version and the URL to the latest driver file.
- --update: Connect to NVIDIA's FTP site, download the most recent driver file, and install it.
- --no-ncurses-ui: The installer uses an ncurses-based user interface if it is able to locate the correct ncurses library. Otherwise, it will fall back to a simple command-line user interface. This option disables the use of the ncurses library.
This xRDP how-to was performed on a CentOS release 6.7 (Final) Linux installation, using PADT, Inc. – CUBE engineering simulation compute servers.
Recently, I’ve encountered the need to read the contents of ANSYS Mechanical result files (e.g. file.rst, file.rth) into a C++ application that I am writing for a client. Obviously, these files are stored in a proprietary binary format owned by ANSYS, Inc. Even if the format were published, it would be daunting to write a parser to handle it. Fortunately, however, ANSYS supplies a series of routines that are stored in a library called BinLib which allow a programmer to access the contents of a result file in a procedural way. That’s great! But, the catch is the routines are written in FORTRAN… I’ve been programming for a long time now, and I’ll be honest, I can’t quite stomach FORTRAN. Yes, the punch card days were before my time, but seriously, doesn’t a compiler have something better to do than gripe about what column I’m typing on… (Editor’s note: Matt does not understand the pure elegance of FORTRAN’s majestic simplicity. Any and all FORTRAN bashing is the personal opinion of Mr. Sutton and in no way reflects the opinion of PADT as a company or its owners. – EM)
So, the problem shifts from how to read an ANSYS result file to how to interface between C/C++ and FORTRAN. It turns out this is more complicated than it really should be, and that is almost exclusively because of the abomination known as CHARACTER(*) arrays. Ah, FORTRAN… You see, if it weren’t for the shoddy character of CHARACTER(*) arrays, the mapping between the basic data types in FORTRAN and C would be virtually one for one. And thus, the mechanics of function calls could fairly easily be made to be identical between the two languages. If the function call semantics were made identical, then sharing code between the two languages would be quite straightforward. Alas, because a CHARACTER array has a kind of implicit length associated with it, the compiler has to do some kind of magic within any function signature that passes one or more of these arrays. Some compilers hide parameters for the length and then tack them on to the end of the function call. Some stuff the hidden parameters right after the CHARACTER array in the call sequence. Some create a kind of structure that combines the length with the actual data into a special type. And then some compilers do who knows what… The point is, there is no convention among FORTRAN compilers for handling the function call semantics, so there is no clean interoperability between C and FORTRAN.
Fortunately, the Intel FORTRAN compiler has created this markup language for FORTRAN that functions as an interoperability framework that enables FORTRAN to speak C and vice versa. There are some limitations, however, which I won’t go into detail on here. If you are interested you can read about it in the Intel FORTRAN compiler manual. What I want to do is highlight an example of what this looks like and then describe how I used it to solve my problem. First, an example:
What you see in this image is code for the first function you would call if you want to read an ANSYS result file. There are a lot of arguments to this function, but in essence what you do is pass in the file name of the result file you wish to open (Fname), and if everything goes well, this function sends back a whole bunch of data about the contents of the file. Now, this function represents code that I have written, but it is a mirror image of the ANSYS routine stored in the binlib library.
I’ve highlighted some aspects of the code that constitute part of the interoperability markup. First of all, you’ll notice the markup BIND highlighted in red. This markup for the FORTRAN function tells the compiler that I want it to generate code that can be called from C, and I want the name of the C function to be “CResRdBegin”. This is the first step towards making this function callable from C. Next, you will see highlighted in blue something that looks like a comment. This, however, instructs the compiler to generate a stub in the exports library for this routine if you choose to compile it into a DLL. Without this attribute, you won’t get a .lib file when compiling this as a .dll. Finally, you see the ISO_C_BINDING and the definition of the type of character data we can make interoperable. That is, instead of a CHARACTER(261) data type, we use an array of single CHARACTER(1) data. This more closely matches the layout of C, and allows the FORTRAN compiler to generate compatible code. There is a catch here, though, and that is in the Title parameter. ANSYS, Inc. defines this as an array of CHARACTER(80) data types. Unfortunately, the interoperability support from Intel doesn’t handle arrays of CHARACTER(*) data types. So, we flatten it here into a single string. More on that in a minute.
You will notice too, that there are markups like (c_int), etc… that the compiler uses to explicitly map the FORTRAN data type to a C data type. This is just so that everything is explicitly called out, and there is no guesswork when it comes to calling the routine. Now, consider this bit of code:
First, I direct your attention to the big red circle. Here you see that all I am doing is collecting up a bunch of arguments and passing them on to the ANSYS, Inc. routine stored in BinLib.lib. You also should notice the naming convention. My FORTRAN function is named CResRdBegin, whereas the ANSYS, Inc. function is named ResRdBegin. I continue this pattern for all of the functions defined in the BinLib library. So, this function is nothing more than a wrapper around the corresponding binlib routine, but it is annotated and constrained to be interoperable with the C programming language. Once I compile this function with the FORTRAN compiler, the resulting code will be callable directly from C.
Now, there are a few more items that have to be straightened up. I direct your attention to the black arrow. Here, what I am doing is converting the passed in array of CHARACTER(1) data into a CHARACTER(*) data type. This is because the ANSYS, Inc. version of this function expects that data type. Also, the ANSYS, Inc. version needs to know the length of the file path string. This is stored in the variable ncFname. You can see that I compute this value using some intrinsics available within the compiler by searching for the C NULL character. (Remember that all C strings are null terminated and the intent is to call this function from C and pass in a C string.)
Finally, after the call to the base function is made, the strings representing the JobName and Title must be converted back to a form compatible with C. For the jobname, that is a fairly straightforward process. The only thing to note is how I append the C_NULL_CHAR to the end of the string so that it is a properly terminated C string.
For the Title variable, I have to do something different. Here I need to take the array of title strings and somehow represent that array as one string. My choice is to delimit each title string with a newline character in the final output string. So, there is a nested loop structure to build up the output string appropriately.
After all of this, we have a C function that we can call directly. Here is a function prototype for this particular function.
So, with this technique in place, it’s just a matter of wrapping the remaining 50 functions in binlib appropriately! Now, I was pleased with my return to the land of C, but I really wanted more. The architecture of the binlib routines is quite easy to follow and very well laid out; however, it is very, very procedural for my tastes. I’m writing my program in C++, so I would really like to hide as much of this procedural stuff as I can. Let’s say I want to read the nodes and elements off of a result file. Wouldn’t it be nice if my loops could look like this:
That is, I just do the following:
- Ask to open a file (First arrow)
- Tell the library I want to read nodal geometric data (Second arrow)
- Loop over all of the nodes on the RST file using C++11 range based for loops
- Repeat for elements
Isn’t that a lot cleaner? What if we could do it without buffering data and have it compile down to something very close to the original FORTRAN code in size and speed? Wouldn’t that be nice? I’ll show you how I approached it in Part 2.
So we have known for a long time that we can parameterize material properties in the Engineering Data screen. That works great if we want to adjust the modulus of a material to account for material irregularities. But what if you want to change the entire material of a part from steel to aluminum? Or if you have 5 different types of aluminum to choose from, on several different parts, and you want to run a Design Study to see which combination of materials is best? Well, then you do this. The process includes some extra bodies, some Named Selections, and a single command snippet.
The first thing to do is to add a small body to your model for each different material that you want to swap in and out, and assign your needed material to them. You’ll have to add the materials to your Engineering Data prior to this. For my example I added three cubes and just put Frictionless supports on three sides of each cube. This assures that they are constrained but not going to cause any stresses from thermal loads if you forget and import a thermal profile for “All Bodies”.
Next, you make a Named Selection for each cube, named Holder1, Holder2, etc. This allows us to later grab the correct material based on the number of the Holder.
You also make a Named selection for each group of bodies for which you want to swap the materials. Name these selections as MatSwap1, MatSwap2, etc.
The command snippet goes in the Environment Branch. (ex. Static Structural, Steady-State Thermal, etc.)
!###############################################################################
! MATSWAP.MAC
! Created by Joe Woodward at PADT, Inc.
! Created on 2/12/2016
!
! Usage: Create Named Selections, Holder1, Holder2, etc., for BODIES using
!        the materials that you want to use.
!        Create Named Selections called MatSwap1, MatSwap2, etc. for the
!        groups of BODIES for which you want to swap materials.
!        Set ARG1 equal to the Holder number that has the material to give to MatSwap1.
!        Set ARG2 equal to the Holder number that has the material to give to MatSwap2.
!        And so on....
!        A value of 0 will not swap materials for that given group.
!
! Use as is. No modification to this command snippet is necessary.
!###############################################################################
/prep7
*CREATE,MATSWAP,MAC
*if,arg1,NE,0,then
   *get,isthere,COMP,holder%arg1%,TYPE
   *get,swapgood,COMP,matswap%arg2%,TYPE
   *if,isthere,eq,2,then
      esel,s,,,holder%arg1%
      *get,newmat,elem,ELNEXT(0),ATTR,MAT   ! material ID from the holder body
      *if,swapgood,eq,2,then
         esel,s,,,matswap%arg2%
         emodif,all,mat,newmat              ! swap the material for this group
      *else
         /COM,The Named Selection MatSwap%arg2% is not set to one or more bodies
      *endif
   *else
      /COM,The Named Selection Holder%arg1% is not set to one or more bodies
   *endif
*endif
*END
MATSWAP,ARG1,1   ! Use material from Holder ARG1 for MatSwap1
MATSWAP,ARG2,2   ! Use material from Holder ARG2 for MatSwap2
MATSWAP,ARG3,3   ! Use material from Holder ARG3 for MatSwap3
MATSWAP,ARG4,4   ! Use material from Holder ARG4 for MatSwap4
MATSWAP,ARG5,5   ! Use material from Holder ARG5 for MatSwap5
MATSWAP,ARG6,6   ! Use material from Holder ARG6 for MatSwap6
MATSWAP,ARG7,7   ! Use material from Holder ARG7 for MatSwap7
MATSWAP,ARG8,8   ! Use material from Holder ARG8 for MatSwap8
MATSWAP,ARG9,9   ! Use material from Holder ARG9 for MatSwap9
alls
/solu
Now, each of the Arguments in the Command Snippet Details corresponds to the ‘MatSwap’ Named Selection of the same number. ARG1 controls the material assignment for all the bodies in the MatSwap1 Named Selection. The value of the argument is the number of the ‘Holder’ body with the material that you want to use. A value of zero leaves the material assignment alone and does not change the original material assignment for the bodies of that particular ‘MatSwap’ Named Selection. There is no limit on the number of ‘Holder’ bodies and materials that you can use, but there is a limit of nine ‘MatSwap’ groups that you can modify, because there are only nine ARG variables that you can parameterize in the Command Snippet details.
You can see how the deflection changes for the different material combinations. These three steps, holder bodies, Named Selections, and the command snippet above, will give you design study options that were not available before. Hopefully I’ll have an even simpler way in the future. Stay tuned.
This how-to describes how to install PuTTY and Xming, and then hook the two together to provide you, the end user, with an X Window System display server, a set of traditional sample X applications and tools, and a set of fonts. These two products will help to eliminate many of your frustrations! Xming features support for several languages that many of our ANSYS analysts use here at PADT, Inc. We truly enjoy and use these two products. One reason you should be interested: when Xming and PuTTY are combined for numerical simulation work, the Mesa 3D, OpenGL, and GLX 3D graphics extension capabilities work amazingly well! Kudos to the programmers, we love you!
Server: CUBE Linux 64-bit Server
Client: Windows 7 Professional 64-bit
Step 1 – Install PuTTY first (accept defaults)
Double-check that the Normal PuTTY link with SSH client is checked
Step 3 – After the PuTTY installation has completed, install Xming.
Step 4 – Install the Xming fonts that you had downloaded earlier.
Verify that Xming has started. You will notice a new running task in your taskbar. If you hover over the X icon in your taskbar, it should say something like “Xming Server:0.0”.
Now let us hook them together. It is X and PuTTY time!
Step 5 – Open your PuTTY application.
- Enter the hostname or IP address.
- Enter a session name.
- On the left sidebar within PuTTY, locate –> Connection, and then expand –> SSH –> X11
- Check –> Enable X11 forwarding
Save the new session: on the left panel of your PuTTY window (you may need to scroll up a little bit), click the text –> Session, and then save the new session.
Yay! Now open your newly saved session and log in to a CUBE Linux server to test and verify.
I always forget to tell people this tip, but for multi-display setups: start Xming in -multiwindow mode.
How? From a Command Prompt (the Windows cmd console), or create a desktop shortcut:
“C:\Program Files\Xming\Xming.exe” -multiwindow -clipboard
Have a Happy Valentine’s Day weekend, and do not forget to show the penguin some love too. This penguin looks lonely and maybe needs a date?