How To Install And Configure xRDP and Same Session xRDP on CentOS 6.7 / RHEL 6.7


What is xRDP? Taken directly from the xRDP website.

“Xrdp is the main server accepting connections from RDP clients. Xrdp contains the RDP, security, MCS, ISO, and TCP layers, a simple window manager and a few controls. It’s a multi-threaded single-process server, and it is in this process that the central management of the sessions is maintained. Central management includes shadowing a session and administrating pop-ups to users. Xrdp is controlled by the configuration file xrdp.ini.

RDP has 3 security levels between the RDP server and RDP client: low, medium and high. Low is 40-bit, where only data from the client to the server is encrypted; medium is 40-bit encryption both ways; and high is 128-bit encryption both ways. Xrdp currently supports all 3 encryption levels via the xrdp.ini file. RSA key exchange is used with both client and server randoms to establish the RC4 keys before the client connects.

Modules are loaded at runtime to provide the real functionality. Many different modules can be created to present one of many different desktops to the user. The modules are loadable to conserve memory and support both GPL and non GPL modules.

Multi-threaded to provide optimal user performance. One client can’t slow them all down. One multi-threaded process is also required for session shadowing with any module. The module doesn’t have to consider shadowing; the xrdp server does it. For example, you could shadow a VNC, RDP or a custom module session all from the same shadowing tool.

Built-in window manager for sending pop-ups to any user running any module. It can also be used to provide connection errors or prompts.


Libvnc, a VNC module for xrdp. Libvnc provides a connection to VNC servers. It’s a simple client supporting only a few VNC encodings (raw, cursor, copyrect), with an emphasis on being small and fast. Normally, the xrdp server and the Xvnc server are on the same machine, so bitmap compression encodings would only slow down the session.


Librdp, an RDP module for xrdp. Librdp provides a connection to RDP servers. It only supports RDP4 connections currently.


Sesman, the session manager. Sesman is xrdp’s session manager. Xrdp connects to sesman to verify the username/password, and sesman also starts the user session if the credentials are OK. This is a multi-process, Linux-only session manager. Sessions can be started or viewed from the command line via sesrun.”

STEP 1 – Set up xRDP on your CUBE Linux Compute Server:

  1. Add the repository for the Extra Packages for Enterprise Linux (EPEL)
    • rpm -Uvh
      I am using CentOS release 6.7 – 64-bit for this installation of xRDP
  2. Install xRDP
    • yum install xrdp tigervnc-server -y
  3. Start xRDP
    • service xrdp start
  4. Enter the following commands to ensure that the xRDP services restart on a reboot
    • chkconfig xrdp on
    • chkconfig vncserver on
  5. Add the ANSYS Linux users into the following groups:
    • users & video
  6. Now try it out!
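The whole of Step 1 can be sketched as one root shell session. This is a hedged sketch: the EPEL rpm URL and the user name are placeholders you must substitute, and the usermod command for step 5 is my suggestion, since the step above only names the groups.

```shell
# Sketch of Step 1 on CentOS/RHEL 6.x (run as root).
# <epel-release-rpm-url> and <ansys-user> are placeholders, not real values.
CMDS=0
DRYRUN=1   # delete this line to actually execute the commands
run() { echo "+ $*"; CMDS=$((CMDS+1)); [ -n "$DRYRUN" ] || "$@"; }

run rpm -Uvh "<epel-release-rpm-url>"       # 1. add the EPEL repository
run yum install -y xrdp tigervnc-server     # 2. install xRDP and a VNC server
run service xrdp start                      # 3. start xRDP
run chkconfig xrdp on                       # 4. restart services on reboot
run chkconfig vncserver on
run usermod -aG users,video "<ansys-user>"  # 5. add the user to users & video
echo "issued $CMDS commands"
```

With DRYRUN set, the script only prints the commands so you can review them before running for real.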


STEP 2.0.0 (optional) – How To Setup xRDP Same Session Remote Desktop on your CUBE Linux Compute Server:

2.0.1) Login as root via SSH

2.0.2) cd /etc/xrdp/

2.0.3) As the root user, open and edit the xrdp.ini file. For same-session sharing, locate and modify the last line of the xrdp.ini configuration file.

  • Change from port=-1 to port=ask-1

vi /etc/xrdp/xrdp.ini


2.0.4) Save xrdp.ini and restart the xrdp service (the command is below)

  • service xrdp restart
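A minimal sketch of the edit itself, performed here on a scratch copy so nothing on a live server is touched; on the server you would edit /etc/xrdp/xrdp.ini as root and then restart the xrdp service.

```shell
# Work on a scratch copy; fall back to a tiny sample [xrdp1] section if the
# real file is not present on this machine.
cp /etc/xrdp/xrdp.ini /tmp/xrdp.ini 2>/dev/null || \
  printf '[xrdp1]\nname=sesman-Xvnc\nlib=libvnc.so\nport=-1\n' > /tmp/xrdp.ini
# Same-session sharing: change the last-line setting port=-1 to port=ask-1
sed -i 's/^port=-1$/port=ask-1/' /tmp/xrdp.ini
tail -n 1 /tmp/xrdp.ini
```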

2.0.5) Next, for you MPI local or distributed users, edit the following file

  • cd /etc/pam.d/
  • edit the file xrdp-sesman
  • add –> session required

2.0.6) For users of xRDP same-session management:

  • cd /etc/pam.d/
  • edit the common-session file
  • add –> session required

STEP 2.1.0 – Open the Microsoft Remote Desktop client on your Windows Machine.

  • Try logging in from two machines or two sessions of Microsoft Remote Desktop
  • Enter the hostname or IP address of your CUBE Linux Compute Server
  • Login

(see screen capture below)


STEP 2.2.0 – Pay Attention to a few things while logging in.

  • For you, the originator of the RDP desktop session:
  • As you are logging into the Linux machine, note the port number used for your login as the login window script executes.
  • PORT 5910

(see screen capture below)


  •  Login! The new xRDP console session has been created on the Linux machine.
    • This session is the remote desktop session that you created so that you can share the same desktop with another user.

STEP 2.3.0 – Login process for you the secondary RDP user:

  • As you begin the remote desktop login process, enter the port that was created for the primary user. Our primary user noted that the port number for his RDP session was 5910 and passed it along to you.
  • Enter this port number into your session window when entering your login information via RDP:

(see screen capture below)


  • Click OK to log in to the desktop.
  • Success! You are now both logged into the same RDP session. Both users will see the same screen and the same cursor, which can be controlled by either user.

(see screen capture below)


Final Thoughts pertaining to xRDP/remote desktop connections and screen sharing on 64-bit Linux.

Other/secondary users who do not need to log in to an already running remote desktop session: do not enter a port number for the server; leave the port setting as -1. Logging in this way ensures you will have a unique, non-shared remote desktop experience.

For the primary user, the originator of the xRDP session: do not use SYSTEM –> LOGOUT to close the RDP session. Simply minimize the session or click the X to close your window.

(see screen capture below)


Are you aware that ANSYS recently released ANSYS 17.0? For you ANSYS CFD users, check out the beautiful ANSYS FLUENT 17.0 GUI for Linux. If you look closely at the screen capture, you will notice that I was running one of the ANSYS FLUENT benchmarks.

The External Flow Over a Truck Body with a Polyhedral Mesh (truck_poly_14m), an ANSYS FLUENT benchmark.

(see screen capture below)


References/Notes/Performance Tuning for xRDP:

xRDP website – xRDP

Nvidia’s website reference: NVIDIA Graphics Cards: NVIDIA How To

Performance Tuning:

Verify that you have the latest NVIDIA graphics card driver, especially if you are having OpenGL issues:

  1. Not sure which NVIDIA graphics card you have?
    1. Try running this command –> lspci -k | grep -A 2 -E "(VGA|3D)"
  2. Already have the NVIDIA graphics card driver installed, but unsure which driver version is currently in use?
    1. Try running this command –> nvidia-smi
  3. Direct rendering –> Yes or No
    1. glxinfo|head -n 25
    2. glxinfo | grep OpenGL
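The three checks above can be wrapped into one small diagnostic script. The command -v guards are my addition so the script degrades gracefully on a machine that is missing one of the tools:

```shell
# Collect GPU / OpenGL diagnostics into a single file, skipping any
# tool that is not installed rather than failing.
{
  command -v lspci      >/dev/null && lspci -k | grep -A 2 -E "(VGA|3D)"
  command -v nvidia-smi >/dev/null && nvidia-smi
  command -v glxinfo    >/dev/null && glxinfo | grep -iE "direct rendering|OpenGL"
  true   # do not let a missing tool make the group fail
} > /tmp/gpu-diagnostics.txt 2>&1
cat /tmp/gpu-diagnostics.txt
```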

Uh oh! If the output of these commands looks something like what you see below:

$ glxinfo|head -n 25
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
Error: couldn't find RGB GLX visual or fbconfig
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
Xlib: extension "GLX" missing on display ":11.0".
name of display: :11.0

Then please add this information to the end of the xorg.conf file and reboot the server.

  • The xorg.conf file is located in: /etc/X11
  • Section "Module"
    Load "extmod"
    Load "dbe"
    Load "type1"
    Load "freetype"
    Load "glx"
    EndSection
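A sketch of making that edit non-interactively, shown against a scratch file (on the real server the target is /etc/X11/xorg.conf; back it up first and reboot afterwards). Note that every Section in xorg.conf must be closed with a matching EndSection:

```shell
XORG=/tmp/xorg.conf.demo     # on a real server: /etc/X11/xorg.conf
: > "$XORG"                  # scratch file for this demonstration
cat >> "$XORG" <<'EOF'
Section "Module"
    Load "extmod"
    Load "dbe"
    Load "type1"
    Load "freetype"
    Load "glx"
EndSection
EOF
grep -c 'Load' "$XORG"
```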

Other Features of the NVIDIA Installer

Without options, the .run file executes the installer after unpacking it. The installer can also be run as a separate step in the process, or at a later time to get updates, etc. Some of the more important command-line options of nvidia-installer are:

  • Uninstall: during installation, the installer makes backups of any conflicting files and records the installation of new files; the uninstall option undoes an install, restoring the system to its pre-install state.
  • Latest: connect to NVIDIA’s FTP site, and report the latest driver version and the URL to the latest driver file.
  • Update: connect to NVIDIA’s FTP site, download the most recent driver file, and install it.
  • User interface: the installer uses an ncurses-based user interface if it is able to locate the correct ncurses library; otherwise, it falls back to a simple command-line user interface. An option is available to disable the use of the ncurses library.

This xRDP how-to was performed on CentOS release 6.7 (Final) Linux using PADT, Inc. – CUBE engineering simulation compute servers.

Reading ANSYS Mechanical Result Files (.RST) from C/C++ (Part 1)

Recently, I’ve encountered the need to read the contents of ANSYS Mechanical result files (e.g. file.rst, file.rth) into a C++ application that I am writing for a client. Obviously, these files are stored in a proprietary binary format owned by ANSYS, Inc.  Even if the format were published, it would be daunting to write a parser to handle it.  Fortunately, however, ANSYS supplies a series of routines that are stored in a library called BinLib which allow a programmer to access the contents of a result file in a procedural way.  That’s great!  But, the catch is the routines are written in FORTRAN… I’ve been programming for a long time now, and I’ll be honest, I can’t quite stomach FORTRAN.  Yes, the punch card days were before my time, but seriously, doesn’t a compiler have something better to do than gripe about what column I’m typing on… (Editor’s note: Matt does not understand the pure elegance of FORTRAN’s majestic simplicity. Any and all FORTRAN bashing is the personal opinion of Mr. Sutton and in no way reflects the opinion of PADT as a company or its owners. – EM)

So, the problem shifts from how to read an ANSYS result file to how to interface between C/C++ and FORTRAN.  It turns out this is more complicated than it really should be, and that is almost exclusively because of the abomination known as CHARACTER(*) arrays.  Ah, FORTRAN… You see, if it weren’t for the shoddy character of CHARACTER(*) arrays the mapping between the basic data types in FORTRAN and C would be virtually one for one. And thus, the mechanics of function calls could fairly easily be made to be identical between the two languages.   If the function call semantics were made identical, then sharing code between the two languages would be quite straightforward.  Alas, because a CHARACTER array has a kind of implicit length associated with it, the compiler has to do some kind of magic within any function signature that passes one or more of these arrays.  Some compilers hide parameters for the length and then tack them on to the end of the function call.  Some stuff the hidden parameters right after the CHARACTER array in the call sequence.  Some create a kind of structure that combines the length with the actual data into a special type. And then some compilers do who knows what…  The point is, there is no convention among FORTRAN compilers for handling the function call semantics, so there is no clean interoperability between C and FORTRAN.

Fortunately, the Intel FORTRAN compiler provides a markup language for FORTRAN that functions as an interoperability framework, enabling FORTRAN to speak C and vice versa.  There are some limitations, however, which I won’t go into detail on here.  If you are interested, you can read about it in the Intel FORTRAN compiler manual.  What I want to do is highlight an example of what this looks like and then describe how I used it to solve my problem.  First, an example:


What you see in this image is code for the first function you would call if you want to read an ANSYS result file.  There are a lot of arguments to this function, but in essence what you do is pass in the file name of the result file you wish to open (Fname), and if everything goes well, this function sends back a whole bunch of data about the contents of the file.  Now, this function represents code that I have written, but it is a mirror image of the ANSYS routine stored in the binlib library.

I’ve highlighted some aspects of the code that constitute part of the interoperability markup.  First of all, you’ll notice the markup BIND highlighted in red.  This markup for the FORTRAN function tells the compiler that I want it to generate code that can be called from C and I want the name of the C function to be “CResRdBegin”.  This is the first step towards making this function callable from C.  Next, you will see highlighted in blue something that looks like a comment.  This however, instructs the compiler to generate a stub in the exports library for this routine if you choose to compile it into a DLL.  You won’t get a .lib file when compiling this as a .dll without this attribute.  Finally, you see the ISO_C_BINDING and definition of the type of character data we can make interoperable.  That is, instead of a CHARACTER(261) data type, we use an array of single CHARACTER(1) data.  This more closely matches the layout of C, and allows the FORTRAN compiler to generate compatible code.  There is a catch here, though, and that is in the Title parameter.  ANSYS, Inc. defines this as an array of CHARACTER(80) data types.  Unfortunately, the interoperability stuff from Intel doesn’t support arrays of CHARACTER(*) data types.  So, we flatten it here into a single string.  More on that in a minute.

You will notice too, that there are markups like (c_int), etc… that the compiler uses to explicitly map the FORTRAN data type to a C data type.  This is just so that everything is explicitly called out, and there is no guesswork when it comes to calling the routine.  Now, consider this bit of code:


First, I direct your attention to the big red circle. Here you see that all I am doing is collecting up a bunch of arguments and passing them on to the ANSYS, Inc. routine stored in BinLib.lib.  You also should notice the naming convention.  My FORTRAN function is named CResRdBegin, whereas the ANSYS, Inc. function is named ResRdBegin.  I continue this pattern for all of the functions defined in the BinLib library.  So, this function is nothing more than a wrapper around the corresponding binlib routine, but it is annotated and constrained to be interoperable with the C programming language.  Once I compile this function with the FORTRAN compiler, the resulting code will be callable directly from C.

Now, there are a few more items that have to be straightened up.  I direct your attention to the black arrow.  Here, what I am doing is converting the passed in array of CHARACTER(1) data into a CHARACTER(*) data type.  This is because the ANSYS, Inc. version of this function expects that data type.  Also, the ANSYS, Inc. version needs to know the length of the file path string.  This is stored in the variable ncFname.  You can see that I compute this value using some intrinsics available within the compiler by searching for the C NULL character.  (Remember that all C strings are null terminated and the intent is to call this function from C and pass in a C string.)

Finally, after the call to the base function is made, the strings representing the JobName and Title must be converted back to a form compatible with C.  For the jobname, that is a fairly straightforward process.  The only thing to note is how I append the C_NULL_CHAR to the end of the string so that it is a properly terminated C string.

For the Title variable, I have to do something different.  Here I need to take the array of title strings and somehow represent that array as one string.  My choice is to delimit each title string with a newline character in the final output string.  So, there is a nested loop structure to build up the output string appropriately.

After all of this, we have a C function that we can call directly.  Here is a function prototype for this particular function.


So, with this technique in place, it’s just a matter of wrapping the remaining 50 functions in binlib appropriately!  Now, I was pleased with my return to the land of C, but I really wanted more.  The architecture of the binlib routines is quite easy to follow and very well laid out; however, it is very, very procedural for my tastes.  I’m writing my program in C++, so I would really like to hide as much of this procedural stuff as I can.   Let’s say I want to read the nodes and elements off of a result file.  Wouldn’t it be nice if my loops could look like this:


That is, I just do the following:

  1. Ask to open a file (First arrow)
  2. Tell the library I want to read nodal geometric data (Second arrow)
  3. Loop over all of the nodes on the RST file using C++11 range based for loops
  4. Repeat for elements

Isn’t that a lot cleaner?  What if we could do it without buffering data and have it compile down to something very close to the original FORTRAN code in size and speed?  Wouldn’t that be nice?  I’ll show you how I approached it in Part 2.

Can I parameterize ANSYS Mechanical material assignments?

So we have known for a long time that we can parameterize material properties in the Engineering Data screen. That works great if we want to adjust the modulus of a material to account for material irregularities. But what if you want to change the entire material of a part from steel to aluminum? Or if you have 5 different types of aluminum to choose from, on several different parts, and you want to run a Design Study to see which combination of materials is the best? Well, then you do this. The process includes some extra bodies, some Named Selections, and a single command snippet.
The first thing to do is to add a small body to your model for each different material that you want to swap in and out, and assign your needed material to them. You’ll have to add the materials to your Engineering Data prior to this. For my example I added three cubes and just put Frictionless supports on three sides of each cube. This assures that they are constrained but not going to cause any stresses from thermal loads if you forget and import a thermal profile for “All Bodies”.


Next, you make a Named Selection for each cube, named Holder1, Holder2, etc. This allows us to later grab the correct material based on the number of the Holder.


You also make a Named selection for each group of bodies for which you want to swap the materials. Name these selections as MatSwap1, MatSwap2, etc.


The command snippet goes in the Environment Branch. (ex. Static Structural, Steady-State Thermal, etc.)


! Created by Joe Woodward at PADT,Inc.
! Created on 2/12/2016
! Usage: Create Named Selections, Holder1, Holder2, etc.,for BODIES using the materials that you want to use.
! Create Named Selections called MatSwap1, MatSwap2, etc. for the groups of BODIES for which you want to swap materials.
! Set ARG1 equal to the Holder number that has the material to give to MatSwap1.
! Set ARG2 equal to the Holder number that has the material to give to MatSwap2.
! And so on....
! A value of 0 will not swap materials for that given group.
! Use as is. No Modification to this command snippet is necessary.
!swap material for each body group
/COM,The Named Selection - MatSwap%ARG2% is not set to one or more bodies
/COM,The Named Selection Holder%ARG1% is not set to one or more bodies
MATSWAP,ARG1,1 !Use material from the holder given by ARG1 for Swap1
MATSWAP,ARG2,2 !Use material from the holder given by ARG2 for Swap2
MATSWAP,ARG3,3 !Use material from the holder given by ARG3 for Swap3
MATSWAP,ARG4,4 !Use material from the holder given by ARG4 for Swap4
MATSWAP,ARG5,5 !Use material from the holder given by ARG5 for Swap5
MATSWAP,ARG6,6 !Use material from the holder given by ARG6 for Swap6
MATSWAP,ARG7,7 !Use material from the holder given by ARG7 for Swap7
MATSWAP,ARG8,8 !Use material from the holder given by ARG8 for Swap8
MATSWAP,ARG9,9 !Use material from the holder given by ARG9 for Swap9


Now, each of the Arguments in the Command Snippet Details corresponds to the ‘MatSwap’ Named Selection of the same number. ARG1 controls the material assignment for all the bodies in the MatSwap1 Named Selection. The value of the argument is the number of the ‘Holder’ body with the material that you want to use. A value of zero leaves the original material assignment unchanged for the bodies of that particular ‘MatSwap’ Named Selection. There is no limit on the number of ‘Holder’ bodies and materials that you can use, but there is a limit of nine ‘MatSwap’ groups that you can modify, because there are only nine ARG variables that you can parameterize in the Command Snippet details.


You can see how the deflection changes for the different material combinations. These three steps (holder bodies, Named Selections, and the command snippet above) will give you design study options that were not available before. Hopefully I’ll have an even simpler way in the future. Stay tuned.

Phoenix Business Journal: When was the last time you thanked an engineer?

Have you ever thanked an engineer?  In this week’s TechFlash post I explore how we live in a world that has been transformed for the better (mostly) by engineers.  We are simple creatures who avoid the spotlight… but a thank you would be nice. When was the last time you thanked an engineer?

The 3D Printing Value Proposition

At a recent Lunch-n-Learn organized by the Arizona Technology Council, I had the opportunity to speak for 10 minutes on 3D printing. I decided to focus my talk on trying to answer one question: how can I determine if 3D printing can benefit my business? In this blog post, I attempt to expand on the ideas I presented there.

While a full analysis of the Return-On-Investment would require a more rigorous and quantitative approach, I believe there are 5 key drivers that determine the value proposition for a company to invest in 3D printing, be it in the form of outsourced services or capital expenditure. If these drivers resonate with opportunities and challenges you see in your business, it is likely that 3D printing can benefit you.

1. Accelerating Product Development

3D printing has its origins in technologies that enabled Rapid Prototyping (RP), a field that continues to have a significant impact in product development and is one most people are familiar with. As shown in Figure 1, PADT’s own product development process involves using prototypes for alpha and beta development and for testing. RP is a cost- and time-effective way of iterating upon design ideas to find ones that work, without investing in expensive tooling and long lead times. If you work in product development you are very likely already using RP in your design cycle. Some of the considerations then become:

  • Are you leveraging the complete range of materials including high temperature polymers (such as ULTEM), Nylons and metals for your prototyping work? Many of these materials can be used in functional tests and not just form and fit assessments.
  • Should you outsource your RP work to a service bureau or purchase the equipment to do it in-house? This will be determined by your RP needs and one possibility is to purchase lower-cost equipment for your most basic RP jobs (using ABS, for example) and outsource only those jobs requiring specialized materials like the ones mentioned above.
PADT's Product Development process showing the role of prototypes (3D printed most of the time)
Figure 1. PADT’s Product Development process showing the role of prototypes (most often 3D printed)

The video below contains several examples of prototypes made by PADT using a range of technologies over the past two decades.

2. Exploiting Design Freedom

Due to its additive nature, 3D printing allows for the manufacturing of intricate part geometries that are prohibitively expensive (or in some cases impossible) to manufacture with traditional means. If you work with parts and designs that have complex geometries, or are finding your designs constrained by the requirements of manufacturing, 3D printing can help. This design freedom can be leveraged for several different benefits, four of which I list below:

2.1 Internal Features

As a result of its layer-by-layer approach to manufacturing a part, 3D printing enables complex internal geometries that are cost-prohibitive or even impossible to manufacture with traditional means. The exhaust gas probe in Fig. 2, developed by RSC Engineering in partnership with Concept Laser, has 6 internal pipes surrounded by cooling channels and was printed as one part.

3D Printed Exhaust Gas Probe (RSC Engineering and Concept Laser Inc.)
Fig 2. 3D Printed Exhaust Gas Probe with intricate internal features (RSC Engineering and Concept Laser Inc.)

2.2 Strength-to-Weight Optimization

One of the reasons the aerospace industry has been a leader in the application of 3D printing is the fact that you are now able to manufacture complex geometries that emerge from a topology optimization solution and reduce component weight, as shown in the bracket manufactured by Airbus in Figure 3.

Titanium Airbus bracket made by Concept Laser on board the A350
Fig 3. Titanium Airbus bracket made by Concept Laser on board the A350

2.3 Assembly Consolidation

The ability to work in a significantly less constrained design space also allows the designer to integrate parts in an assembly thereby reducing assembly costs and sourcing headaches. The part below (also from Airbus) is a fuel assembly that integrated 10 parts into 1 printed part.

Airbus Fuel Assembly 3D printed out of metal (Airbus / Concept Laser)
Fig 4. Airbus Fuel Assembly 3D printed out of metal (Airbus / Concept Laser)

2.4 Bio-inspiration

Nature provides several design cues, optimized through the process of evolution over millennia. Some of these include lattices and hierarchical structures. 3D printing makes it possible to translate more of these design concepts into engineering structures and parts for benefits of material usage minimization and property optimization. The titanium implant shown in Figure 5 exploits lattice designs to optimize the effective modulus in different locations to more closely represent the properties of an individual’s bone in that region.

Titanium implant leveraging lattice designs (Concept Laser)
Fig 5. Titanium implant leveraging lattice designs (Concept Laser)

3. Simplifying the Supply Chain, Reducing Lead Times

One of the most significant impacts 3D printing has is on lead time reduction, and this is the reason why it is the preferred technology for “rapid” prototyping. Most users of 3D printing for end-part manufacturing identify a 70-90% reduction in lead time, primarily as a result of not requiring the manufacturing of tooling, reducing the need to identify one or more suppliers. Additionally, businesses can reduce their supplier management burden by in-sourcing the manufacturing of these parts. Finally, because of the reduced lead times, inventory levels can be significantly reduced. The US Air Force sees 3D printing as a key technology in improving their sustainability efforts to reduce the downtime associated with aircraft awaiting parts. Airbus recently also used 3D printing to print seat belt holders for their A310: the original supplier was out of business, and the cost and lead time to identify and re-tool a new supplier were far greater than those of 3D printing the parts.

4. Reducing Costs for High Mix Low Volume Manufacturing

According to the 2015 Wohlers report, about 43% of the revenue generated in 3D printing comes from the manufacturing of functional, or end-use parts. When 3D printing is the process of choice for the actual manufacturing of end-use parts, it adds a direct cost to each unit manufactured (as opposed to an indirect R&D cost associated with developing the product). This cost, when compared to traditional means of manufacturing, is significantly lower for high mix low volume manufacturing (High Mix – LVM), and this is shown in Figure 6 for two extreme cases. At one extreme is mass customization, where each individual part has a unique geometry of construction (e.g. hearing aids, dental aligners) – in these cases, 3D printing is very likely to be the lowest cost manufacturing process. At the other end of the spectrum is High Volume Manufacturing (HVM) (e.g. semiconductor manufacturing, children’s toys), where the use of traditional methods lowers costs. The break-point lies somewhere in between and will vary by the part being produced and the volumes anticipated. A unit cost assessment that includes the cost of labor, materials, equipment depreciation, facilities, floor space, tooling and other costs can aid with this determination.
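As a toy illustration of that break-point, here is a back-of-the-envelope calculation. All numbers are made-up placeholders, not figures from the Wohlers report:

```shell
# Hypothetical costs: molding pays a one-time tooling charge but has a low
# unit cost; 3D printing has no tooling but a higher unit cost.
TOOLING=20000      # mold tooling cost, $
MOLD_UNIT=2        # molded unit cost, $/part
PRINT_UNIT=50      # printed unit cost, $/part
# Below this volume 3D printing is cheaper; above it, molding wins.
BREAK_EVEN=$(( TOOLING / (PRINT_UNIT - MOLD_UNIT) ))
echo "break-even volume: ${BREAK_EVEN} parts"
```

A real assessment would fold in the labor, depreciation, facilities and floor space costs mentioned above, but the shape of the comparison is the same.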

Chart showing how volumes drive unit prices and where 3D Printing can be the cheaper option
Fig 6. Chart showing how volumes drive unit prices and where 3D Printing can be the cheaper option for low volumes and high mix manufacturing

5. Developing New Applications

Perhaps the most exciting aspect of 3D printing is how people all around the world are using it for new applications that go beyond improving upon conventional manufacturing techniques. Dr. Anthony Atala’s 2011 TED talk involved the demonstration of an early stage technique of depositing human kidney cells that could someday aid with kidney transplants (see Figure 7). Rarely does a week go by without some new 3D printing application making the news: space construction, 3D surgical guides, customized medicine, to name a few. The elegant and intuitive method of building something layer-by-layer lends itself wonderfully to the imagination. And the ability to test and iterate rapidly with a 3D printer by your side allows for accelerating innovation at a rate unlike any manufacturing process that has come before it.

Dr. Anthony Atala showing a 3D printed kidney [Image Attr. Steve Jurvetson]
Fig 7. Dr. Anthony Atala showing a 3D printed kidney [Image Attr. Steve Jurvetson, Wikimedia Commons]


As I mentioned in the introduction, if you or your company have challenges and needs in one or more of the 5 areas above, it is less a question of whether 3D printing can benefit you (it will) than of how you should best invest in it for maximum return. Further, it is likely that you will accrue a combination of benefits (such as assembly consolidation and supply chain simplification) across a range of parts, making this technology an attractive long-term investment. At PADT, we offer 3D printing as a service and also sell most of the printers we use on a daily basis, so we are well positioned to help you make this assessment. Contact us!

The very latest install guide for PuTTY and Xming is here!


This how-to describes how to install PuTTY and Xming and then hook the two together to provide you, the end user, with an X Window System display server, a set of traditional sample X applications and tools, and a set of fonts. These two products will help to eliminate many of your frustrations! Xming features support for several languages that many of our ANSYS analysts use here at PADT, Inc. We truly enjoy and use these two products. One reason you should be interested: by combining Xming and PuTTY for use in numerical simulation, the Mesa 3D, OpenGL, and GLX 3D graphics extension capabilities work amazingly well! Kudos to the programmers, we love you!

Program references:



Server: CUBE Linux 64-bit Server
Client: Windows 7 Professional 64-bit

Step 1 – Install PuTTY first (accept defaults)

Step 2 – Install Xming (accept defaults)
o Download and install the program and fonts for XMING files:
 Program:
 Fonts:

Double-check that the Normal PuTTY link with SSH client is checked


Step 3 – Wait for the Xming installation to complete.

Step 4 – Install the Xming fonts that you had downloaded earlier.

Verify that Xming has started. You will notice a new running task in your taskbar. If you hover over the X icon in your taskbar, it should say something like “Xming Server:0.0”.

Now let us hook them together. It is X and PuTTY time!

Step 5 – Open your PuTTY application.


  • Enter the hostname or IP address.
  • Enter in a Session name:
  • On the left sidebar within PuTTY, locate –> Connection, then expand –> SSH –> X11
    o Check –> Enable X11 forwarding


Save the new session: on the left panel of your PuTTY window (you may need to scroll up a bit), click –> Session and then Save.



Yay! Now open your newly saved session and log in to a CUBE Linux server to test and verify.
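Once logged in, a quick way to confirm that forwarding actually works is to check the DISPLAY variable on the server. This is just a sketch; check_x11 is a hypothetical helper name, not part of PuTTY or Xming:

```shell
# Run on the Linux server after logging in through the PuTTY session.
# When X11 forwarding is active, sshd sets DISPLAY (e.g. localhost:10.0).
check_x11() {
    if [ -n "$DISPLAY" ]; then
        echo "X11 forwarding active on $DISPLAY"
    else
        echo "DISPLAY is not set - enable X11 forwarding in PuTTY and reconnect"
    fi
}
check_x11
```

With a working display, launching an X application on the server (xclock, for example) should pop a window up on your Windows desktop via Xming.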

I always forget to tell people this tip, but for multi-display setups: start Xming in -multiwindow mode.

How? From the Command Prompt (the Windows cmd console), or create a desktop shortcut with this target:

“C:\Program Files\Xming\Xming.exe” -multiwindow -clipboard

Have a happy Valentine's Day weekend, and do not forget to show the penguin some love too. This penguin looks lonely and maybe needs a date?


Webinar Content: Answers to your Questions on Metal 3D Printing

Download a recording and the slides from this informative webinar.

Metal Additive Manufacturing, or Metal 3D Printing, is a topic that generates a lot of interest, and even more questions.  So we held a webinar on February 9th, 2016 to try to answer the most common questions we encounter. It was a huge success, with over 150 people logging in to watch live.  But many of you could not make it, so we have posted the slides and a recording of the webinar.  Just go to this link to access the information.

The presentation answered the following common questions:

  • Introductory:
    • Who are PADT and Concept Laser?
    • How does laser-based metal 3D printing work?
    • Are there other ways to 3D print in metal and how do they compare?
  • Technical:
    • What are the different process steps involved?
    • How “good” are 3D printed metal parts?
  • Strategic:
    • What materials and machines do you offer?
    • Who uses this technology today?
    • What is the value proposition of metal 3D printing for me?
    • What can I do after this webinar?

As always, our technical team is available to answer any additional questions you may have. Just shoot an email to or give us a call at 480.813.4884.


Keyless SSH in two easy steps. Wait, What?!

Within the ANSYS community, and more specifically with regard to the various numerical simulation techniques ANSYS users employ to solve their problems, models keep getting larger and more complex.

One of the most powerful approaches to overcoming the limitations of new, complex problems is to take multiple CPUs and link them together over a distributed network of computers. Unpacking this further, one critical piece of parallel processing is a quality high-performance message passing interface (MPI). The latest IBM® Platform™ MPI version 9.1.3 Fix Pack 1 is provided for you with ANSYS 17.0.

When solving a model with distributed parallel algorithms, the protocol that authenticates your credentials and makes this login process seamless is Secure Shell, or SSH. SSH is a cryptographic (encrypted) network protocol that allows remote login and other network services to operate securely over an unsecured network.

Today let us take the mystery and hocus pocus out of setting up your passwordless SSH keys. As you will see, this is a very easy process to complete.

I begin my voyage into keyless freedom by first logging into one of our CUBE Linux servers.


STEP 1 – Create the key

  • Type ssh-keygen -t rsa
  • Press the Enter key three times
    •  (In some instances, as shown in the screen capture below, you may see a prompt asking you to overwrite. In that case, type y)


STEP 2 – Apply the key

  • Type ssh-copy-id -i ~/.ssh/
  • Type ssh-copy-id -i ~/.ssh/
  • Enter the current password that you would use to log in to


All Done!

Now give it a try and verify.
Log in to the first server you set up, in my case CS0.
At the terminal command prompt, type ssh cs1



I find it best practice to also repeat the ssh-copy-id command using the simple (short) host name at the same time on each of the servers.

That command would look like this:
1. After you have completed Step 2, perform the same command locally.
a. ssh-copy-id -i ~/.ssh/ mastel@cs0
b. Enter your password and press Enter.
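Putting the steps together, here is a minimal sketch of the whole flow. The mastel user and cs0/cs1 host names are the examples from this article, and the key file name is a stand-in (ssh-keygen defaults to ~/.ssh/id_rsa); substitute your own:

```shell
# STEP 1 - create the key pair non-interactively (empty passphrase),
# removing any stale demo key first so ssh-keygen does not prompt.
KEYFILE="$HOME/.ssh/id_rsa_cube"
mkdir -p "$HOME/.ssh"
rm -f "$KEYFILE" "$KEYFILE.pub"
ssh-keygen -t rsa -N "" -f "$KEYFILE" -q

# STEP 2 - apply the public key on each server (you are prompted for
# your password once per host):
#   ssh-copy-id -i "$KEYFILE.pub" mastel@cs0
#   ssh-copy-id -i "$KEYFILE.pub" mastel@cs1

# Verify: this should log you in without a password prompt:
#   ssh -i "$KEYFILE" mastel@cs1 hostname
```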

Done, Done!

Additional Links:

Bring the kids for an evening of STEM fun at PADT’s AZ SciTech Festival Open House

Scitech Logo

PADT is excited to open our doors to the community and show you and your families what engineering is all about.  Bring the family down for a tour of PADT's Tempe office and we will show them why engineering rocks. This family-friendly event is a great way for kids to see what engineers really do all day.  Tour our 3D printing lab and check out how “We Make Innovation Work”. Register Here

WHEN: Wednesday, February 24th from 6:00pm to 7:30pm
WHERE: PADT Headquarters
  7755 S. Research Drive, Suite 110
  Tempe, AZ 85284

The Arizona SciTech Festival is a state-wide celebration of science, technology, engineering and math held annually in February and March.  Through a series of over 1,000 expos, workshops, conversations, exhibitions and tours held in diverse neighborhoods throughout the state, the Arizona SciTech Festival excites and informs Arizonans from ages 3 to 103 about how STEM will drive our state for the next 100 years. Spearheaded by the Arizona Commerce Authority, Arizona Science Center, the Arizona Technology Council Foundation, Arizona Board of Regents, the University of Arizona and Arizona State University, the Arizona SciTech Festival is a grassroots collaboration of over 700 organizations in industry, academia, arts, civic, community and K-12.

Phoenix Business Journal: ​Flint’s water problem and the dangers of ignoring science

The first “opinion” piece for the TechFlash blog of the Phoenix Business Journal: my thoughts on how the trend of ignoring science is harmful in “Flint’s water problem and the dangers of ignoring science”.

Phoenix Business Journal: The real reason 3D printing is important


In all the hoopla around 3D Printing, the real reason it is important often gets lost. Check out this article, “The real reason 3D printing is important”, to wrap your head around the long-term impact of this key technology. PB

Phoenix Business Journal: ​How to close business deals faster

After some end-of-year reflection, we hit upon a key factor that consistently lets us close business deals faster. We share the key driver in the PBJ’s blog with the to-the-point title of “How to close business deals faster”.

10x with ANSYS 17.0 – Get an Order of Magnitude Impact

The ANSYS 17.0 release improves the impact of driving design with simulation by a factor of 10.  This 10x jump spans all the physics and delivers real step-change enhancements both in how simulation is done and in the improvements that can be realized in products.

Unless you were disconnected from the simulation world last week, you are aware that ANSYS, Inc. released the latest version of their entire product suite.  We wanted to let the initial announcement get out there and spread the word, then come back and talk a little about the details.  This blog post is the start of what should be a long line of discussions on how you can realize 10x impact from your investment in ANSYS tools.

As you may have noticed, the theme for this release is 10x. A 10x improvement in speed, efficiency, capability, and impact.  Watch this short video to get an idea of what we are talking about.

Where is the Meat?

We are already seeing this type of improvement here at PADT and with our customers. There is some great stuff in this release that delivers real game-changing efficiency and capability.  That is fine and dandy, but how is this 10x achieved?  There are a lot of little changes and enhancements, but they can mostly be summed up in the following four things:

Tighter Integration of Multiphysics

Having the best-in-breed simulation tools is worth a lot, and the ANSYS suite leads in almost every physics.  But real power comes when these products can easily work together.  At ANSYS 17.0, almost all of the various tools that ANSYS, Inc. has written or acquired can be used together. Multiphysics simulation allows you to remove assumptions and approximations and get a more accurate simulation of your products.

And Multiphysics is about more than doing bi-directional simulation, which ANSYS is very good at. It is about being able to transfer loads, properties, and even geometry between different software tools. It is about being able to look at your full design space across multiple physics and getting more accurate answers in less time.  You can take heat loads generated in ANSYS HFSS and use them in ANSYS Mechanical or ANSYS FLUENT.  You can take the temperatures from ANSYS FLUENT and use them with ANSYS SiWave.  And you can run a full bidirectional fluid-solid model with all the bells and whistles and without the hassles of hooking together other packages.

To top it all off, the system level modeler ANSYS Simplorer has been improved and integrated further, allowing for true system level Multiphysics virtual prototyping of your entire system.  One of the changes we are most excited about is full support for Modelica models – allowing you to stay in Simplorer to model your entire system.

Improved Usability

Speed is always good, and we have come to expect 10%-30% increases in productivity at almost every release. A new feature here, a new module there. This time the developers went a lot further and across the product lines.

The closer integration of ANSYS SpaceClaim really delivers a 10x or better speedup for geometry creation and cleanup when compared to other methods. We love SpaceClaim here at PADT and have been using it for some time.  Version 17 is not only integrated more tightly, it also introduces scripting that allows users to bring processes they had automated in older and clunkier interfaces into this new, more powerful tool.

One of our other favorites is the new interface in ANSYS Fluent, just making things faster and easier. More capability in the ANSYS Customization Toolkit (ACT) also allows users to get 10x or better improvements in productivity.  And for those who work with electronics, a host of ECAD geometry import tools are making that whole process an order of magnitude faster.

Industry Specific Workflows

Many of the past releases have been focused on establishing underlying technology, integration, and adding features. This has all paid off and at 17.0 we are starting to see some industry specific workflows that get models done faster and produce more accurate results.

The workflow for semiconductor packaging, the Chip Package System or CPS, is the best example of this. Here is a video showing how power integrity, signal integrity, and thermal modeling come together, with integration across tools:

A similar effort was released in Turbomachinery, with improvements to advanced blade row simulation, meshing, and HPC performance.

Overall Capability Enhancements

A large portion of the improvements at 17.0 are relatively small enhancements that add up to big benefits.  The largest development team in simulation has not been sitting around for a year; they have been hard at work adding and improving functionality.  We will cover many of these in coming posts, but some of our favorites are:

  1. Improvements to distributed solving in ANSYS Mechanical that show good scaling on dozens of cores
  2. Enhancements to ACT allowing for greater automation in ANSYS Mechanical
  3. ACT is now available to automate your CFD processes
  4. Significant improvements in meshing robustness, accuracy and speed (if you are using that other CFD package because of meshing, it's time to look at ANSYS Fluent again)
  5. Fracture mechanics
  6. ECAD import in electromagnetic, fluids, and mechanical products.
  7. A new solver in ANSYS Maxwell that solves more than 10x faster for transient runs
  8. ANSYS AIM just keeps gaining functionality and getting easier to use
  9. A pile of SpaceClaim new and improved features that greatly speed up geometry repair and modification
  10. Improved rigid body dynamics in ANSYS Mechanical

More to Come

And a ton more. It may take us all the time we have before ANSYS 18.0 comes out to go over all of the great new stuff in The Focus.  But we will give it a try in the coming weeks and months. ANSYS, Inc. will be hosting some great webinars as well.

If you see something that interests you or something you would like to see that was not there, shoot us an email at or call 480.813.4884.

Additive Manufacturing – Back to the Future!

Paul Nigh's Back to the Future DeLorean Time Machine
Production of Back to the Future began in 1984; it was recently announced that the DeLorean is going back into production, with new cars rolling out in 2017

Most histories of Additive Manufacturing (3D printing) trace the origins of the technology back to Charles Hull’s 1984 patent, the same year production began on the first of the Back to the Future movies. Which is something of a shock when you see 3D printing dotting the Gartner Hype Cycle like it was invented in the post-Seinfeld era. But that is not what this post is about.

When I started working on Additive Manufacturing (AM), I was amazed at the number of times I was returning to text books and class notes I had used in graduate school a decade ago. This led me to reflect on how AM is helping bring back to the forefront disciplines that had somehow lost their cool factor – either by becoming part of the old normal, or because they contained ideas that were ahead of their time. I present three such areas of research that I state, with only some exaggeration, were waiting for AM to come along.

  • Topology Optimization: I remember many a design class where we would discuss topology optimization, look at fancy designs, and end with one of the more cynical students asking, “All that's fine, but how are you going to make that?” Cue the elegant idea of building up a structure layer by layer. AM is making it possible to manufacture parts with geometries that look like they came right out of a stress contour plot. And firms such as ANSYS, Autodesk, and Altair, as well as universities and labs, are all working to improve their capabilities at the intersection of topology optimization and additive manufacturing.
Topology optimization applied to the design of an automobile upper control-arm done with GENESIS Topology for ANSYS Mechanical (GTAM) from Vanderplaats Research & Development and ANSYS SpaceClaim
And we printed that!
  • Lattice Structures: One of the first books I came across when I joined PADT was a copy of Cellular Solids by Lorna Gibson and M.F. Ashby. Prof. Gibson’s examples of these structures as they occur in nature demonstrate how they provide an economy of material usage for the task at hand. Traditionally, in engineering structures, cellular designs are limited to foams or consistent shapes like sandwich panels where the variation in cell geometry is limited – this is because manufacturing techniques do not normally lend themselves well to building complex, three dimensional structures like those found in nature. With AM technologies however, cell sizes and structures can be varied and densities modified depending on the design of the structure and the imposed loading conditions, making this an exciting area of research.

    Lattice specimens made with the Fused Deposition Modeling (FDM) process
  • Metallurgy: As I read the preface to my “Metallurgy for the Non-Metallurgist” text book, I was surprised to note the author openly bemoan the decline of interest in metallurgy, and subsequently, fewer metallurgists in the field. And I guess it makes sense: materials science is today mostly concerned with much smaller scales than the classical metallurgist trained in. Well, lovers of columnar grain growth and precipitation hardening can now rejoice – metallurgy is at the very heart of AM technology today – most of the projected growth in AM is in metals. The science of powder metallurgy and the microstructure-property-process relationships of the metal AM technologies are vital building blocks to our understanding of metal 3D printing. Luckily for me, I happen to possess a book on powder metallurgy. And it too, is from 1984.
This book was printed in 1984, and is very relevant today

Phoenix Business Journal: Build and bust is so 20th century: How to develop better products with simulation

For this week's contribution to the PBJ's TechFlash blog I cover something that is near and dear to PADT: the replacement of testing with simulation, or virtual prototyping.  Learn why “Build and Bust is so 20th Century”.