INIT Script for Tivoli Storage Manager

Application:

The Tivoli Storage Manager client acceptor daemon does not ship with an INIT script for SUSE Linux Enterprise, so if you need one you can use the script below.

Explanation:

This INIT script is provided to start the Tivoli Storage Manager client acceptor daemon on SUSE Linux Enterprise.
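Note that the script starts both the scheduler process (dsmc sched) and the client acceptor daemon (dsmcad). Whether dsmcad itself also manages the scheduler and the web client depends on the MANAGEDSERVICES option in the client's dsm.sys (typically in the same directory as dsm.opt). As a rough sketch only, with a placeholder server name and address rather than values from this article, such a stanza can look like this:

SErvername          TSMSERVER
   COMMMethod          TCPip
   TCPServeraddress    tsm.example.com
   MANAGEDServices     schedule webclient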

Script:

Copy the text below into a file, preferably named dsm, or download it here.

#!/bin/sh
### BEGIN INIT INFO
# Provides:          dsm
# Required-Start:    $network $syslog $remote_fs
# Required-Stop:     $network $syslog $remote_fs
# Default-Start:     3 5
# Default-Stop:      0 1 2 6
# Short-Description: Tivoli Storage Manager client acceptor daemon
# Description:       Start the Tivoli Storage Manager scheduler and client acceptor daemon
### END INIT INFO

# Check for existence of Binaries
DSMC_BIN=/opt/tivoli/tsm/client/ba/bin/dsmc
DSMCAD_BIN=/opt/tivoli/tsm/client/ba/bin/dsmcad

test -x $DSMC_BIN || { echo "$DSMC_BIN not installed";
	if [ "$1" = "stop" ]; then exit 0;
	else exit 5; fi; }

test -x $DSMCAD_BIN || { echo "$DSMCAD_BIN not installed";
	if [ "$1" = "stop" ]; then exit 0;
	else exit 5; fi; }

prog1="dsmcad"
prog2="dsmc"

export DSM_DIR=/opt/tivoli/tsm/client/ba/bin
export DSM_CONFIG=/opt/tivoli/tsm/client/ba/bin/dsm.opt
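
# Note: DSM_DIR and DSM_CONFIG above assume the default install path of the
# backup-archive client; adjust them if your client is installed elsewhere.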

. /etc/rc.status

# First reset status of this service
rc_reset

case "$1" in
  start)
    echo -n $"Starting $prog2: "
    startproc $DSMC_BIN sched >/dev/null 2>/dev/null
    rc_status -v
    echo -n $"Starting $prog1: "
    startproc $DSMCAD_BIN  >/dev/null 2>/dev/null
    rc_status -v
    ;;
  stop)
    echo -n $"Stopping $prog2: "
    killproc -TERM $DSMC_BIN
    rc_status -v
    echo -n $"Stopping $prog1: "
    killproc -TERM $DSMCAD_BIN
    rc_status -v
    ;;
  restart)
    $0 stop
    $0 start
    rc_status
    ;;
  status)
    echo -n "Checking for DSMC"
    checkproc $DSMC_BIN
    rc_status -v
    echo -n "Checking for DSMCAD"
    checkproc $DSMCAD_BIN
    rc_status -v
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac
rc_exit

Once you have this shell script created, save it as /etc/init.d/dsm with chmod 755 permissions on it. Then enable it with chkconfig dsm on (or insserv dsm) so that it is inserted into the proper runlevels in the right spot. To check it, run chkconfig dsm and look in the corresponding /etc/init.d/rcN.d directories for the start/stop links it created.
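
For example, assuming you saved the script as a file named dsm in your current directory, the whole sequence might look like this:

cp dsm /etc/init.d/dsm
chmod 755 /etc/init.d/dsm
chkconfig dsm on                      # or: insserv dsm
chkconfig dsm                         # confirm the service is enabled
ls -l /etc/init.d/rc3.d/ | grep dsm   # the S/K links for runlevel 3
/etc/init.d/dsm start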

Enjoy!!


Comments

  • mrjcoopdk says:

    Hello.
    The script displayed on the page and the one linked to are slightly different.

  • cparker says:

    The version of the script displayed on the page above is the one that works on SLES 9, 10 & 11. The downloadable one only works on SLES9. The reason is that the “killproc -p $PID_FILE” command to stop the process doesn’t work on SLES10 & 11. The script displayed on the page just uses “killproc -TERM $DSMC_BIN” when executing the “dsm stop” option. That properly stops both Tivoli daemons on all versions of SLES.


cseader: Senior Innovative Technologist with over 15 years of experience delivering creative, customer-centric value and solutions. Broad experience in many different verticals, architectures, and data center environments. Proven leadership experience ranging from evaluating technology, collaborating across engineering teams and departments, competitive analysis, and strategic planning. Highly-motivated with a track record of success in consistent achievement of projects and goals, and driving business function and management. Skilled problem identifier and troubleshooter, continually learning and adapting, and strong analytical skills. Efficient, organized leader with success in coordinating efforts within internal-external teams to reach and surpass expectations. Expert-level skills in the implementation, analysis, optimization, troubleshooting, and documentation of mode 1 and mode 2 data center systems.