HiperLogic

Virtualization, High Performance Computing, Healthcare IT, Enterprise Computing

June 2009


ILO (Integrated Lights-Out) is a management port on HP servers that allows you to power nodes on and off, obtain sensor information, and perform other useful management tasks. You can save a lot of time when configuring the ILO management port on HP cluster nodes by scripting the ILO configuration with HP tools.

At a high level, to automatically configure ILO you feed the HP hponcfg.exe tool XML fragments that set the ILO configuration to your specifications.

I typically like the ILO management port to be configured to use DHCP, set the ILO hostname to <hostname>-ilo, and have SSH enabled. I normally set the username and password to be the same on each cluster blade.
First, download the hponcfg.exe tool from the HP website and put it on a share accessible to the compute nodes.

The first step is to set all compute nodes to have the same ILO username and password. In this example, I will just set them to Administrator/Administrator (obviously you will want to change that!).

To reset the password create a file named reset_pass.xml on a share accessible from the compute nodes with the following contents:

<RIBCL VERSION="2.0">
 <LOGIN USER_LOGIN="Administrator" PASSWORD="Administrator">
  <USER_INFO MODE="write">
   <MOD_USER USER_LOGIN="Administrator">
    <PASSWORD value="Administrator"/>
   </MOD_USER>
  </USER_INFO>
 </LOGIN>
</RIBCL>

Then run this command on all nodes, via clusrun or another mechanism (psexec if you are not on HPC Server 2008), to execute the password reset:

hponcfg /f reset_pass.xml
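
For example, a minimal sketch assuming hponcfg.exe and reset_pass.xml both sit on a share named \\headnode\soft (the share name is a placeholder for your own):

clusrun \\headnode\soft\hponcfg.exe /f \\headnode\soft\reset_pass.xml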

Next, let's set the ILO hostname to <hostname>-ilo and enable DHCP. Create a batch file with the following contents, copy it to the temp directory on each node, and execute it on each node via clusrun (an example clusrun invocation follows the script).

for /f "delims=" %%a in ('hostname') do @set HOSTNAME=%%a
echo ^<RIBCL VERSION="2.1"^> > network.xml
echo ^<LOGIN USER_LOGIN="Administrator" PASSWORD="Administrator"^> >> network.xml
echo   ^<DIR_INFO MODE="write"^> >> network.xml
echo   ^<MOD_DIR_CONFIG^> >> network.xml
echo     ^<DIR_AUTHENTICATION_ENABLED VALUE = "N"/^> >> network.xml
echo     ^<DIR_LOCAL_USER_ACCT VALUE = "Y"/^> >> network.xml
echo     ^<DIR_SERVER_ADDRESS VALUE = ""/^> >> network.xml
echo     ^<DIR_SERVER_PORT VALUE = "636"/^> >> network.xml
echo     ^<DIR_OBJECT_DN VALUE = ""/^> >> network.xml
echo     ^<DIR_OBJECT_PASSWORD VALUE = ""/^> >> network.xml
echo     ^<DIR_USER_CONTEXT_1 VALUE = ""/^> >> network.xml
echo     ^<DIR_USER_CONTEXT_2 VALUE = ""/^> >> network.xml
echo     ^<DIR_USER_CONTEXT_3 VALUE = ""/^> >> network.xml
echo   ^</MOD_DIR_CONFIG^> >> network.xml
echo   ^</DIR_INFO^> >> network.xml
echo   ^<RIB_INFO MODE="write"^> >> network.xml
echo   ^<MOD_NETWORK_SETTINGS^> >> network.xml
echo     ^<SPEED_AUTOSELECT VALUE = "Y"/^> >> network.xml
echo     ^<NIC_SPEED VALUE = "10"/^> >> network.xml
echo     ^<FULL_DUPLEX VALUE = "N"/^> >> network.xml
echo     ^<DHCP_ENABLE VALUE = "Y"/^> >> network.xml
echo     ^<DHCP_GATEWAY VALUE = "Y"/^> >> network.xml
echo     ^<DHCP_DNS_SERVER VALUE = "Y"/^> >> network.xml
echo     ^<DHCP_STATIC_ROUTE VALUE = "Y"/^> >> network.xml
echo     ^<DHCP_WINS_SERVER VALUE = "Y"/^> >> network.xml
echo     ^<REG_WINS_SERVER VALUE = "Y"/^> >> network.xml
echo     ^<DNS_NAME VALUE = "%HOSTNAME%-ilo"/^> >> network.xml
echo     ^<DOMAIN_NAME VALUE = "mydomainname.com"/^> >> network.xml
echo     ^<SEC_WINS_SERVER value = "0.0.0.0"/^> >> network.xml
echo     ^<STATIC_ROUTE_1 DEST = "0.0.0.0" GATEWAY = "0.0.0.0"/^> >> network.xml
echo     ^<STATIC_ROUTE_2 DEST = "0.0.0.0" GATEWAY = "0.0.0.0"/^> >> network.xml
echo     ^<STATIC_ROUTE_3 DEST = "0.0.0.0" GATEWAY = "0.0.0.0"/^> >> network.xml
echo   ^</MOD_NETWORK_SETTINGS^> >> network.xml
echo   ^</RIB_INFO^> >> network.xml
echo  ^</LOGIN^> >> network.xml
echo ^</RIBCL^> >> network.xml

hponcfg /f network.xml
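
For example, a minimal sketch assuming the batch file above is saved as set_ilo_network.bat on \\headnode\soft and that c:\temp exists on every node (both names are placeholders):

clusrun xcopy /y \\headnode\soft\set_ilo_network.bat c:\temp\
clusrun c:\temp\set_ilo_network.bat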

Finally, let's enable SSH for ILO. Put the following contents in a file called ssh.xml, copy it to a share, and run hponcfg /f ssh.xml via clusrun:

<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="Administrator" PASSWORD="Administrator">
  <RIB_INFO MODE="write">
    <MOD_GLOBAL_SETTINGS>
      <SSH_PORT value="22"/>
      <SSH_STATUS value="Yes"/>
    </MOD_GLOBAL_SETTINGS>
  </RIB_INFO>
  </LOGIN>
</RIBCL>
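
Once the settings are applied, you can verify SSH from the head node (the ILO hostname below is a placeholder and assumes your DNS resolves the names set earlier):

ssh Administrator@node01-ilo

If you do not have an ssh client on the head node, PuTTY works as well.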

For more information, read the extensive HP documentation and examples.


Maya is a popular 3D animation and rendering application used for Hollywood movies and a variety of other projects. It can be used on clusters of workstations, as well as on HPC Server 2008. While there is no native HPC Server 2008 integration as of this post, it is fairly straightforward to integrate Maya's mental ray renderer with some scripting.

The scripting language I chose was Perl, though any language you like will work. (You can download Perl for Windows for free from ActiveState.)

Before you begin, do the following (a sketch for the first two items follows this list):

1. Set SPM_HOST on each compute node to the license server for Maya.
2. Set the PATH to include the Maya mental ray renderer, which by default is c:\Program Files\Autodesk\mrstand3.7.1-maya2009\.
3. Make sure the designers export to the ".mi" format, which is the mental ray format.
4. Finally, make sure the designers reference resources using relative paths, not hardcoded paths like c:\myresource.img, which will not exist at render time on the compute nodes.
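
A minimal sketch for the first two items, run from an elevated prompt on each compute node (the license server name licenseserver01 is a placeholder for your own):

setx SPM_HOST licenseserver01 /M
setx PATH "%PATH%;c:\Program Files\Autodesk\mrstand3.7.1-maya2009" /M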

The custom script I created, called "render.pl", takes a few arguments: the job (in .mi format, required for mental ray rendering), the start frame, the end frame, and the number of frames per job.

   render.pl -i "job.mi" -s 1 -e 800 -f 4

Each job (or task) takes a chunk of the frames and renders them to images/; the more nodes you have, the more you can render in parallel to speed up the render job. With the example above (-s 1 -e 800 -f 4), that works out to 200 chunks of 4 frames each.

The script generates the HPC Server 2008 jobs based on the user input, ultimately calling the Maya mental ray renderer with:

 mentalrayrender.cmd -threads 8 -render $start $end $input
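
The exact submission logic lives inside render.pl, but as a rough sketch of the idea, each chunk can be added as a task to a single HPC Server 2008 job from the command line (the job ID 42 below is only an illustration; job new prints the real ID when it creates the job):

job new
job add 42 mentalrayrender.cmd -threads 8 -render 1 4 job.mi
job add 42 mentalrayrender.cmd -threads 8 -render 5 8 job.mi
job submit /id:42

One job add per chunk, continuing up to the last frame.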

The advantage of using HPC Server 2008 for Maya is that the scheduling, job management, and job submission are all built in. In this case the cluster was being used for many other applications, so Maya was able to play nicely with the existing cluster.


A common management task is to set an environment variable of some sort across a compute cluster.

To set an environment variable across the cluster, you can use cluscfg:
  

cluscfg setenvs name=value

For example:
  

cluscfg setenvs LM_LICENSE_FILE=license_server@4000
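
To see what is currently set cluster-wide, assuming your version of HPC Pack includes the listenvs subcommand:

cluscfg listenvs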

Note that this applies only to batch jobs started by the Microsoft HPC scheduler, not to programs running outside of the scheduler.

If you need to set a variable across all nodes, and need it to be referenced outside of user batch jobs, then do the following:

Select the nodes in the Node Management view and run the following command:

setx name value /M

This command adds the environment variable to the system environment of each selected node. For example, to append a shared folder to the PATH on every node using clusrun:

clusrun setx PATH "%PATH%;\\headnode\soft\bin" /M

