8. Notes

8.1. Solaris, IRIX, Tru64

Here is an email from Steve Wagner about the state of ganglia on Solaris, IRIX and Tru64. We have Steve to thank for porting ganglia to Solaris and Tru64; he also helped with the IRIX port.

State of the IRIX port:

*  CPU percentage stuff hasn't improved despite my efforts.  I fear there
   may be a flaw in the way I'm summing counters for all the CPUs.
*  Auto-detection of network interfaces apparently segfaults.
*  Memory and load reporting appear to be running properly.
*  CPU speed is not being reported properly on multi-proc machines.
*  Total/running processes are not reported.
*  gmetad untested.
*  Monitoring core apparently stable in foreground, background being tested
   (had a segfault earlier).

State of the Tru64 port:

*  CPU percentage stuff here works perfectly.
*  Memory and swap usage stats are suspected to be inaccurate.
*  Total/running processes are not reported.
*  gmetad untested.
*  Monitoring core apparently stable in foreground and background.

State of the Solaris port:

*  CPU percentages are slightly off, but correct enough for trending
   purposes.
*  Load, ncpus, CPU speed, breads/writes, lreads/writes, phreads/writes,
   and rcache/wcache are all accurate.
*  Memory/swap statistics are suspiciously flat, but local stats bear
   this out (and they *are* being updated) so I haven't investigated
   further.
*  Total processes are counted, but not running ones.
*  gmetad appears stable.

Anyway, all three ports I've been messing with are usable and fairly
stable.  Although there are areas for improvement, I think we really can't
keep hogging all this good stuff - what I'm looking at is ready for
release.

8.2. Debian Users

Here is an email message from Preston Smith for Debian users:

 Debian packages for Debian 3.0 (woody) are available at
  http://www.physics.purdue.edu/~psmith/ganglia
 (i386, sparc, and powerpc are there presently; more architectures will
  appear when I get them built.)
 Packages for "unstable" (sid) will be available in the main Debian
  archive soon.

 Also, a CVS note: I checked in the debian/ directory used to create
 the Debian packages.
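
As a rough install sketch (the package name "ganglia-monitor" and the file
name below are assumptions, not confirmed by the message above; adjust them
to whatever the actual packages are called):

# Install a downloaded .deb directly (hypothetical file name)
% dpkg -i ganglia-monitor_i386.deb

# Or, once the packages reach the Debian archive
% apt-get update
% apt-get install ganglia-monitor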

8.3. Multihomed Machines

Here is an email I sent to a user having problems with a multi-homed machine.

I need to add a section to the documentation about this since it
seems to be a common question.

When you use...

mcast_if eth1

... in /etc/gmond.conf, it tells gmond to send its data out the "eth1"
network interface, but that doesn't necessarily mean that the source
address of the packets will match the "eth1" interface.  To make sure that
data sent out eth1 has the correct source address, run the following...

% route add -host 239.2.11.71 dev eth1

... before starting gmond.  That should do the trick for you.  (A
consolidated sketch of the whole setup appears after the quoted message
below.)

-matt

> I have seen some post related to some issues
> with gmond + multicast running on a dual nic
> frontend.
> 
> Currently I am experiencing a weird behavior
> 
> I have the following setup:
> 
>   -----------------------
>   | web server + gmetad |
>   -----------------------
>              |
>              |
>              |
>     ----------------------
>     |   eth0 A.B.C.112   |
>     |                    |
>     |  Frontend + gmond  |
>     |                    |
>     | eth1 192.168.100.1 |
>     ----------------------
>              |
>              |
> 
>        26 nodes each
>           gmond
> 
> In the frontend /etc/gmond.conf I have the
> following statement: mcast_if  eth1
> 
> The 26 nodes are correctly reported. 
> 
> However the Frontend is never reported.
> 
> I am running iptables on the Frontend, and I am seeing
> things like:
> 
> INPUT packet died: IN=eth1 OUT= MAC= SRC=A.B.C.112 DST=239.2.11.71 
> LEN=36 TOS=0x00 PREC=0x00 TTL=1 ID=53740 DF PROTO=UDP SPT=41608 DPT=8649
> LEN=16 
> 
> I would have expected the source to be 192.168.100.1 with mcast_if eth1
> 
> Any idea?
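
Putting the answer above together with the setup in the quoted message, a
consolidated sketch for a multihomed frontend looks roughly like this.  The
interface name, the iptables rule, and the exact way gmond is started are
assumptions; adapt them to your own setup.

# /etc/gmond.conf: send gmond's data out the cluster-facing interface
mcast_if  eth1

# Pin the ganglia multicast group to eth1 so packets sent to it carry
# eth1's source address.  This route is not persistent across reboots,
# so add it to your network startup scripts as well.
% route add -host 239.2.11.71 dev eth1

# If iptables is running on the frontend (as in the log above), insert a
# rule accepting the multicast traffic before any DROP rules (assumed
# rule; 8649 is gmond's default port, as seen in the log).
% iptables -I INPUT -d 239.2.11.71 -p udp --dport 8649 -j ACCEPT

# Finally, start (or restart) gmond.
% gmond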

8.4. Cisco Catalyst Switches

Perhaps information regarding gmond on networks set up through Cisco
Catalyst switches should be mentioned in the ganglia documentation. I think
that, by default, multicast traffic on the Catalyst will flood all devices
unless the switch is configured properly. Here is a relevant snippet from a
message forum, with a link to a Cisco document.

--
If what you are trying to do is minimize the impact of a multicast
application on your network, this link may describe what you want to do:
http://www.cisco.com/warp/public/473/38.html

We set up our switches according to this after a consultant came in and
installed an application multicasting several hundred packets per second.
This made the network functional again.
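
As an illustration only (the Cisco document above is the authoritative
reference, and exact commands depend on switch model and software version),
the usual fix is to make sure IGMP snooping or CGMP is enabled so the
switch forwards the ganglia multicast group only to ports that have joined
it, instead of flooding every port:

! IOS-based Catalyst, global configuration (assumed syntax)
ip igmp snooping

! CatOS-based Catalyst (assumed syntax)
set igmp enable

With snooping in place, the switch tracks gmond's joins to 239.2.11.71 and
keeps the monitoring traffic off ports that are not part of the cluster.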