Model9 Documentation 1.7.0

Abstract

Release Notes

V1.7 Online release notes

Major Features and Enhancements

  • You can use the new BACKDSN CLI command to perform on-demand, data-set level backup operations directly from the mainframe to any cloud - on-prem, hybrid or public. This functionality enables the execution of backup operations within scripts and JCLs, and simplifies storage administration.

    • The BACKDSN CLI command also includes NEWNAME, NEWDATE and NEWTIME parameters intended for simplifying migration from legacy backup software by allowing you to specify the same backup attributes as were stored by the original backup software.

    • In addition, the BACKDSN CLI command supports specifying a backup retention period. The life cycle management process is enhanced to delete backups whose retention period has expired.

  • A new DELBACK CLI command is used to delete data set backups created using the BACKDSN CLI command. Moreover, the command enables management of backups within scripts and JCLs, and simplifies storage administration using native Mainframe tools only.

  • The agent is enhanced to manage the number of concurrent system utilities, such as ADRDSSU (DFDSS), active at the same time and queue additional requests when no more instances can be run in parallel due to address space virtual storage constraints. This enhancement enables running multiple server policies in parallel and avoiding errors due to agent resources being unavailable.

  • Obsolete activity logs can now be deleted to free up space on the management server file system and remove unneeded information from the management interface. New server parameters control how long activity logs shall be retained after all resources have expired.

Performance improvements

  • The life cycle management process has been enhanced to parallelize backup and archive deletes providing a performance improvement of up to 90% in processing time for large data sets.

  • The restore and recall actions, whether invoked from CLI or from the management UI, have been enhanced to better utilize network throughput, providing up to 35% reduction in restore or recall times for large data sets.

Changed interfaces

  • The output of the LISTDSN CLI command has been changed to include an additional column indicating whether a data set backup was created using a CLI command or a policy run. Please refer to the CLI User Guide for more information.

Resolved issues

  • The management graphical user interface is updated to correctly handle data set delete actions performed on Write-Once-Read-Many (WORM) protected storage, such as Hitachi HCP WORM or AWS S3 Object Lock, and avoid inconsistencies between the management server database and the object storage. Note that data immutability is only maintained by the object storage platform.

  • Blocked the erroneous ability to restore archived data sets, which led to pending ENQs being left standing in the Model9 agent address space.

  • Fixed the discovery process so that it does not skip storage groups and volume serial names containing the ‘$’ sign.

  • Fixed the discovery process so that it does not skip data sets residing on EAV volumes during an archive policy run, an issue that marked the entire run as ‘Ended Error’.

  • Fixed an issue with listing objects using certain API calls when using Hitachi HCP object storage as the target storage.

  • The Policies page in the user interface has been enhanced to remember the page number, sort order and search filter for the duration of the session, whether editing or creating a new policy.

V1.6 Online release notes

Major Features and Enhancements

Release 1.6 features an enriched set of mainframe-focused operations that eliminate dependence on external systems:

  • Use the new CLI ARCHIVE (Command Line Interface) command to perform on-demand, data-set level archiving operations directly from the mainframe to any cloud - on-prem, hybrid or public. This functionality enables the execution of archive operations within scripts and JCLs, and simplifies storage administration.

  • Run the Model9 management server in a z/OS container extension (zCX), and eliminate dependence on external systems. The Model9 docker container can be deployed on zCX, Linux on Z, or any other docker, while supporting the familiar graphical user interface. To learn more, join the Model9 speaking session at SHARE Virtual 2020, on September 30, 2020, 4:15 PM-5:00 PM.

  • Simplify monitoring by extracting the policy’s run logs from the management server to the SAPI batch job in z/OS.  No need to access the logs via the server.

Performance

  • Convert the “discovery” phase to a zIIP eligible process for improved policy performance and reduced CPU consumption. See the installation guide for further details.

Resolved issues

  • The Archive policy now skips data sets with expiration date in the past, instead of archiving them. The policy run log reports these data sets to notify the user that an action may be required outside of the archive policy in order to delete these data sets.

  • The Archive policy now uses the correct scope of SYSPLEX instead of the previously used SYSTEM when checking whether a data set is eligible for archive. The false scope would cause certain data sets to be marked as eligible for archive in the discovery phase of the policy, but to fail during the archive action itself.

Known issues

  • When backing up a symbolic link to a file, the policy will only back up the symbolic link itself and not the file that the link points to. If the policy's filter matches the actual file, the file will be backed up as well (just not via the link). When backing up a symbolic link to a directory, the policy will back up the directory's contents and the link pointing to the directory, but the directory itself (which might carry permissions) will not be backed up.

  • IBM Java 8 SR6 FP15 is not supported for the z/OS agent.

V1.5.3 Online release notes

Major features and enhancements

  • Added full integration with Cohesity's DataPlatform and marketplace, consolidating onto Cohesity for unified management, simpler protection, and deep security.

  • Added detection and notification on recall and restore of objects that were moved to Amazon S3 Glacier Deep Archive.

  • Added support for archive and recall of GDG Extended format data sets.

Resolved issues

  • Added detection and handling of write inconsistencies in the underlying IBM JVM. The fix detects an incomplete write attempt, reports it to the log and performs a retry. Additionally, a change to the JVM interface was implemented to significantly decrease the probability of this situation occurring.

  • Fixed the Export action to support the SMF file format.

  • Fixed policy execution to prevent it from skipping the rest of a volume or of a storage group when encountering an unexpected error.

  • Reduced automatic recall retry attempts, to avoid long waits and excessive messages to the agent’s log and to the SYSLOG.

  • Fixed policy scheduling to ignore disabled policies.

Command Line Interface

  • Increase the CLI LISTDSN output size to accommodate very large queries.

User interface

  • Provide a configurable UI session timeout.

Installation

  • Support multiple licenses in a single license file to better facilitate Sysplex spanning over different machines and to avoid manual license updates in disaster recovery scenarios.

  • Simplify installation by improving the configuration file structure.

  • The server logs by default now rotate every 24 hours or 500MB.

Problem determination

  • The location of the server logs has been changed and is now under MODEL9_HOME/logs to allow easier retrieval of the logs when needed.

  • Improved jclouds log visibility to allow faster diagnosis of network-related issues.

Known issues

  • Automatic recall does not perform as expected in situations such as IDCAMS DELETE MASK and LISTDSI NORECALL. These situations are documented internally and will be handled in a future release. The suggested workaround is to use the CLI DELARC command to delete archived data sets and the ZM9$NORC DD to exclude automatic recall from specific jobs.

V1.5.2 Online release notes

Major features and enhancements

  • Introduced a new data set import policy type that facilitates copying any cataloged data set to cloud storage directly from tape, without using interim DASD storage. Imported tape data sets stored in cloud storage may be exported back to disk or tape for processing by mainframe applications.

  • Enhanced automatic recall to provide a straightforward installation and early detection and reporting of errors during processing.

Resolved issues

  • Fixed an automatic recall abend during allocation attempt of a JCLLIB on an offline catalog.

  • Removed irrelevant messages from the full-dump activity log.

  • Fixed the Simulate action not to fail for policies with special characters in the policy name.

Compatibility

  • Added support for the SMS Management class attribute “# GDG Elements on Primary”.

User interface

  • Enhanced the activity log to include all relevant parameters, including new ones.

Installation

Automatic recall improvements

  • Simplify installation by reducing the number of dynamic exits from 2 to 1.

  • Improved installation verification by failing the exit activation in case of a version mismatch between the hook and the exit. 

Known issues

  • A policy defined with a data set name selection pattern might fail to list the pattern if one or more data set fields can’t be listed. Expected error message: SEVERE [io.model9.backup.common.mvs.resources.DiscoveryInformation-97] Failed searching catalog. Catalog search message: Catalog search error RC=4 (0x4)

System Prerequisites

Release 1.7.0

Firewall and network connectivity settings

Note: all port numbers may be customized. See installation guide for more information.

Base rules

From

To

TCP port

Description

Management server

z/OS agent

9999

Server to agent communication

Each web-UI user workstation

Management server

80, 443

HTTP/S access to the Web UI

Installing user workstation

Management server

22

SSH Access for installation and maintenance

z/OS agent

Management server

80, 443

Initiate Server API from the MF

Storage access rules when using object storage

From

To

TCP port

Description

Management server

Object storage

80, 443

HTTP/S access to the object storage

z/OS agent

Object storage

80, 443

HTTP/S access to the object storage

Storage access rules when using traditional storage

From

To

TCP port

Description

Management server

Management server

9000

S3 access to traditional storage

z/OS agent

Management server

9000

S3 access to traditional storage

Management Server

Requirement

Supported values

Server

Linux virtual or physical machine, Linux on Intel or Linux on z

Minimum 4 cores

Minimum 8 GBs of memory

Minimum 1 Gb network bandwidth

Linux distribution

RHEL 7 and up

SuSE 12 and up

Ubuntu 16 and up

Note: Local admin privileges are required during installation

Additional software packages

docker

unzip

Java version

JRE 1.8 64-bit

Disk storage

/var/lib/docker Minimum of 4 GBs

/data/model9 Minimum of 4 GBs

Local firewall

Disabled or maintained by customer

Date and Time

Linux UTC time must match z/OS time. The UTC date and time can be displayed on Linux and z/OS USS using the command: date -u

System parameters

The following kernel parameters should be added to the /etc/sysctl.conf file:

net.ipv4.tcp_keepalive_time=600

net.ipv4.tcp_keepalive_intvl=30

net.ipv4.tcp_keepalive_probes=10
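
For example, assuming a standard sysctl setup, the parameters can be appended to /etc/sysctl.conf and applied as follows (requires root or sudo; skip any parameters that are already present in the file):

# Append the keep-alive parameters to the kernel configuration file
echo 'net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10' | sudo tee -a /etc/sysctl.conf
# Load the updated settings
sudo sysctl -p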

Verifying management server prerequisites

Use the verification scripts that were supplied with the installation package to verify the management server is ready for installation and that all prerequisites have been completed.

The scripts verify the following prerequisites:

  • Number of CPUs and amount of memory meet the minimum requirements

  • Availability of network ports

  • Firewall rules permitting access to target storage and z/OS system

The following information must be supplied to run the script:

  • z/OS IP/DNS and port for the Model9 Agent

  • Object storage IP/DNS and port

Running the verification script
  1. Upload the VerificationScripts.zip file to the Linux server.

  2. Unzip the uploaded zip file to a temporary folder.

    unzip VerificationScripts.zip -d /tmp

  3. Execute the script

    cd /tmp/PrereqsScripts

    ./M9VerifyPrereqs

  4. Follow the instructions on the screen as prompted by the script.

z/OS Agent

Software

Requirement

Supported values

z/OS version

z/OS V2R1 and up

Java version

Java 8 64-bit SR5 FP16 and up with JZOS setup complete

Latest Java for z/OS can be downloaded from IBM website here: https://developer.ibm.com/javasdk/support/zos/#v8

Maintenance level

IBM APAR OA52913 - Required for archive using SMS attributes

IBM APAR OA58743 - Optional, to fix possible S878 agent abends

IBM APAR OA58113 - for DFDSS restore of extended format data sets

Hardware

Requirement

Supported values

Memory

1GB per LPAR

DASD space

Minimum 200 cylinders for a Model9 ZFS allocation

Security

Requirement

Supported values

Permissions of user running installation process

OMVS SEGMENT defined

BPX.SUPERUSER

CL(FACILITY)

ACC(READ)

SUPERUSER.FILESYS.MOUNT

CL(UNIXPRIV)

ACC(UPDATE)

BPX.FILEATTR.APF

CL(FACILITY)

ACC(READ)

BPX.FILEATTR.PROGCTL

CL(FACILITY)

ACC(READ)

BPX.FILEATTR.SHARELIB

CL(FACILITY)

ACC(READ)

Network

Requirement

Supported values

TCP/IP MTU size

Minimum 1492. Use the NETSTAT GATE command to check the defined MTU size; it is noted in the “pkt sz” column of the “default” line

Verifying z/OS agent prerequisites

Use the verification scripts that were supplied with the installation package to verify the z/OS system is ready for installation and that all prerequisites have been completed.

The scripts verify the following prerequisites:

  • Java version

  • Status of the RACF class PROGRAM

  • Availability of network ports

  • Firewall rules permitting access to target storage

The following information must be supplied to run the script:

  • z/OS Java home directory

  • Temporary work directory

  • Object storage IP/DNS and port

Running the verification script
  1. Upload the z/OS verification JCL, located under /PrereqsScripts/zOS in VerificationScripts.zip. Use the ASCII transfer type.

  2. Edit the JCL according to the instructions in its comments, submit it, and review the RC and messages in the SYSTSPRT DD.

RC

Explanation

0

All verification tests ended successfully

4

RACF CLASS PROGRAM is active - or -

Object storage connectivity was refused (port might be down)

8

Java version not compatible - or - Path not valid - or - Object storage is unavailable

16

Java version did not print any output

Installation Guide

Release 1.7

Introduction

This Installation Guide provides step-by-step instructions for installing or upgrading the Model9 Cloud Data Manager for Mainframe 1.7. The guide supports upgrades from 1.6.0 to 1.7 only.

The Model9 Cloud Data Manager for Mainframe consists of the following software components:

  1. One or more z/OS based agents and jobs.

  2. A Linux based management server which runs the Model9 docker containers.


The Model9 management server includes:

  1. Web UI – Graphical user interface for management of backup and archive policies.

  2. Management database – maintains metadata for improved system performance.

  3. Optional MinIO container – An open-source proxy for private cloud storage.

    Note

    The Model9 agent supports various actions independent of the Model9 server, including listing, restore, archive, delete, and automatic recall of data sets.

Installing the Model9 management server

Prerequisites

Prepare the environment for installation of the server by following these steps:

License key

Obtain a license key from Model9 by opening a “new license request” in the Model9 service portal: https://model9.atlassian.net/servicedesk/customer/portals

The output of the z/OS command “D M=CPU” is required.

Firewall

Configure the local firewall to allow connections to ports needed by the Model9 containers. For a list of required ports, see the Model9 System Prerequisites 1.7 document.

Modify the firewall settings to allow the above-mentioned port connections, or make sure the local firewall is disabled using the following commands:

#UBUNTU
sudo systemctl stop ufw
sudo systemctl disable ufw
#RHEL
sudo systemctl stop firewalld
sudo systemctl disable firewalld
#SUSE
sudo systemctl stop SuSEfirewall2
sudo systemctl disable SuSEfirewall2

If Docker is already installed, restart the Docker service using the following command:

sudo systemctl restart docker

System parameters

The management server implements a “keep-alive” mechanism to prevent the firewall from dropping connections during long-running requests to the agent while a policy is executing. Add the following kernel parameters to your /etc/sysctl.conf file:

net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10

Apply the changes using the following command:

sudo sysctl -p
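
Optionally, confirm that the new values are in effect by querying them directly:

sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes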

Docker

  1. Verify that the Docker service is enabled and running using the following command:

    sudo systemctl status docker

  2. Verify that the output reports the Docker service as “enabled” and “active (running)”.
  3. If the Docker service is not enabled or active, use the following commands to enable and activate it:

    sudo systemctl enable docker
    sudo systemctl start docker
  4. Make sure that Docker displays the expected output by issuing the following command:

    sudo docker ps

    The command should complete successfully and display the list of running containers (which may be empty at this stage).

Note

The server installation is shipped as a Docker container; see the Docker Security documentation for additional information.

File system

The Model9 files should reside on a separate file system (other than the root file system) with enough free space to accommodate the Model9 management server and database. It is recommended to use the xfs filesystem type. Contact your Linux administrator to allocate adequate space and ensure it is mounted.

This procedure is intended for new and unmounted block devices only. It will overwrite any data that might already exist on the device.
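
As an illustrative sketch only, the following commands create an xfs file system on a new, empty block device and mount it at /data/model9. The device name /dev/sdb is a placeholder that must be replaced with the actual device, and the mkfs step destroys any existing data on that device:

# Create the xfs file system on the new device (destructive)
sudo mkfs.xfs /dev/sdb
# Create the mount point and mount the file system
sudo mkdir -p /data/model9
sudo mount /dev/sdb /data/model9
# Make the mount persistent across reboots
echo '/dev/sdb /data/model9 xfs defaults 0 0' | sudo tee -a /etc/fstab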

Installation files

Upload the model9-server-home zip installation file to the designated server in binary mode. Select one of the two available files according to your environment:

Environment

Installation file

x86

model9-v1.7.0_build_666ff604-server.zip

Linux on z

model9-v1.7.0_build_666ff604-server-s390x.zip
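
For example, the file can be uploaded with scp, which always transfers in binary mode. The command below uses the x86 file name; the user name, host name and target path are placeholders:

scp model9-v1.7.0_build_666ff604-server.zip m9admin@model9-server:/tmp/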

Step 1: Unzip the installation file

Create the filesystem hierarchy using the following commands:

# Change user to root
sudo su -
# Define the path to model9 installation files you uploaded earlier
export M9INSTALL=/<path>
# If you haven’t done so already, set the model9 target installation path
export MODEL9_HOME=/data/model9
# Change the directory to $MODEL9_HOME
cd $MODEL9_HOME
# Unzip the server’s installation file
#on Linux on z issue:
unzip $M9INSTALL/model9-v1.7.0_build_666ff604-server-s390x.zip

#on Linux issue:
unzip $M9INSTALL/model9-v1.7.0_build_666ff604-server.zip

Step 2: Deploy the Model9 management server’s components

  1. Verify that the target storage is available and running. This can be done by running the Model9 pre-installation verification scripts; see the Model9 System Prerequisites 1.7 document. If the target storage to be used by Model9 is not object storage, a MinIO proxy must be installed. Proceed to Appendix A: Install MinIO S3-Proxy to install MinIO.

  2. Deploy the application components using the following commands:

    #on Linux on z issue:
    docker load -i $MODEL9_HOME/model9-v1.7.0_build_666ff604-s390x.docker
    docker load -i $MODEL9_HOME/postgres-s390x-12.3.docker.gz
    
    #on Linux issue:
    docker load -i $MODEL9_HOME/model9-v1.7.0_build_666ff604.docker
    docker load -i $MODEL9_HOME/postgres-x86-12.3.docker.gz

Optional: Replace the default self-signed certificate

The base installation provides a self-signed certificate for encrypting access to the user interface. To replace the default certificate for the WEB UI, see Appendix B: Secure web communication. Communications between the Model9 Server and the Model9 Agent are encrypted by default and further action should only be taken if site certificates are preferred.

Step 3: Update the Model9 management server parameters file

The model9-local.yml file residing in the $MODEL9_HOME/conf/ path contains some of the default parameters. You can update them if necessary. Some of the parameters are explained below:

model9.licenseKey: <license-key>

model9.home: 'MODEL9_HOME'

model9.security.dataInFlight.skipAgentHostNameVerification: true

model9.security.dataInFlight.truststore.fileName: 'MODEL9_HOME/keys/model9-backup-truststore.jks'
model9.security.dataInFlight.truststore.type: "JKS"
model9.security.dataInFlight.truststore.password: "model9"
model9.security.dataInFlight.keystore.fileName: 'MODEL9_HOME/keys/model9-backup-server.p12'
model9.security.dataInFlight.keystore.type: "PKCS12"
model9.security.dataInFlight.keystore.password: "model9"

model9.session.timeout.minutes: 30

model9.master_agent.name: "<ip_address>"
model9.master_agent.port: <port>

# model9.objstore.resources.container.name: model9-data
# model9.objstore.endpoint.api.id: s3
model9.objstore.endpoint.url: http://minio:9000
model9.objstore.endpoint.userid: <object store access key>
model9.objstore.endpoint.password: <object store secret>

model9.runlogs.expirationScanIntervalMinutes: <min>
model9.runlogs.maxRetentionPeriodDays: <days>

dataSource.user: postgres
dataSource.password: model9
dataSource.url: jdbc:postgresql://model9db:5432/model9
  1. License Key – A valid Model9 license key as obtained in the prerequisites section. When using multiple keys for multiple CPCs, specify one of the keys in the server’s yml file. The server-initiated actions are carried out by the agent using its own defined license. The license key specified for the server is used for displaying a message regarding the upcoming expiration of the license.

  2. Session timeout minutes - Specify the number of minutes following which an inactive UI session will end. The default is 30 minutes.

  3. Master Agent – The agent running on z/OS which verifies the UI login credentials, hostname, IP address and port number.

    Note

    Specifying a distributed virtual IP address (Distributed VIPA) can provide high availability by allowing the use of agent groups and multiple agents. See the Administrator and User Guide for more details.

  4. Objstore endpoint – object storage information including:

    Parameter

    Description

    Required

    Value

    model9.objstore.resources.container.name

    Container/bucket name

    no

    default: model9-data

    model9.objstore.endpoint.url

    URL address of local or remote object storage, both HTTP and HTTPS** are supported

    yes

    default: none

    Amazon AWS*: https://s3.amazonaws.com

    Google Cloud Storage: https://storage.googleapis.com

    model9.objstore.endpoint.userid

    Access key to object storage

    yes

    default: none

    model9.objstore.endpoint.password

    Secret key to object storage

    yes

    default: none

    model9.objstore.endpoint.api.id

    The object storage API name

    no

    default: s3

    Amazon AWS*: aws-s3

    Microsoft Azure: azureblob

    api.s3.v4signatures

    When using object storage that uses V4 signatures, set this parameter to ‘true’ in addition to api.id: s3

    no

    default: false; Cohesity: true; HCP-CS: true

    no.verify.ssl

    when using the HTTPS protocol, whether to avoid SSL certificate verifications

    no

    default: true

    * When using Amazon S3, see Appendix C: AWS S3 bucket permissions .

    ** Using HTTPS for the object storage URL parameter enables Data-in-Flight encryption.

  5. Run logs expiration - Setting these parameters will trigger an automatic deletion of run logs from the server. Please note that the deletion is non-recoverable. The automatic deletion will not be executed as long as one of the following parameters is set to (-1):

    Parameter

    Description

    Required

    Value

    model9.runlogs.expirationScanIntervalMinutes

    This parameter determines the frequency of running the deletion process of old run logs.

    no

    default: -1 (never)

    model9.runlogs.maxRetentionPeriodDays

    This parameter determines after how many days a run log will expire and can be deleted by the automatic deletion process.

    no

    default: -1 (never)

  6. DataSource - DB connection information.

Step 4: Start the Model9 management server

  1. Start the Model9 PostgreSQL database container using the following command:

    #on Linux on z issue:
    docker run -p 127.0.0.1:5432:5432 \ 
    -v $MODEL9_HOME/db/data:/var/lib/postgresql/data:z \ 
    --name model9db --restart unless-stopped \  
    -e POSTGRES_PASSWORD=model9 -e POSTGRES_DB=model9 -d s390x/postgres
    
    #on Linux issue:
    docker run -p 127.0.0.1:5432:5432 \
    -v $MODEL9_HOME/db/data:/var/lib/postgresql/data:z \
    --name model9db --restart unless-stopped \
    -e POSTGRES_PASSWORD=model9 -e POSTGRES_DB=model9 -d postgres
  2. Verify the health status of the container and make sure it is ready to accept connections by issuing the following command and verifying its output as shown in the following example:

    docker logs model9db
  3. Start the server

    1. When running policies with over 100k objects, update the heap size in CATALINA_OPTS to -Xmx4096m.

    2. Edit the time zone (TZ) setting to ensure proper scheduling.

    3. When using an object storage provider other than MinIO, remove the “--link minio:minio” definition from the command.

    Once the object storage is available and the PostgreSQL container is running, start the server using the following command:

    #on Linux on z issue:
    docker run -d -p 0.0.0.0:443:443 -p 0.0.0.0:80:80 \
    -v $MODEL9_HOME:/model9:z -h $(hostname) --restart unless-stopped \
    -e "TZ=America/New_York" \
    -e "CATALINA_OPTS=-Xmx2048m -Djdk.nativeCBC=false -Xjit:maxOnsiteCacheSlotForInstanceOf=0" \
    --link minio:minio --link model9db:model9db \
    --name model9-v1.7.0 model9:v1.7.0.666ff604
    
    #on Linux issue:
    docker run -d -p 0.0.0.0:443:443 -p 0.0.0.0:80:80 \
    -v $MODEL9_HOME:/model9:z -h $(hostname) --restart unless-stopped \
    -e "TZ=America/New_York" -e "CATALINA_OPTS=-Xmx2048m -Djdk.nativeCBC=false" \
    --link minio:minio --link model9db:model9db \
    --name model9-v1.7.0 model9:v1.7.0.666ff604

    View the PostgreSQL and Model9 Server logs using the following commands:

    # dump logs to screen
    cat /data/model9/logs/model9.*.log
    docker logs model9db
    docker logs minio
    # dump logs to screen and keep displaying new log messages as they arrive
    tail -f /data/model9/logs/model9.*.log
    docker logs -f model9db
    docker logs -f minio
  4. View the container’s logs by using the tail command to verify that the Model9 container has started up successfully. For example:

    2020-09-29 01:56:44,719 [main] INFO  zosbackupserver.ApplicationLoader - The following profiles are active: production
    2020-09-29 01:56:45,873 [main] INFO  zosbackupserver.Application - Loading external configuration from /model9/conf/model9-local.yml
    2020-09-29 01:57:08,860 [main] INFO  z.l.AddProjectionsToAllLiveArchivesAndDeleteExpired - Using container: model9-ci
    2020-09-29 01:57:09,929 [main] INFO  z.l.AddProjectionsToAllLiveArchivesAndDeleteExpired - Migration complete. Created 0 expiration projections. Deleted 0 archive versions
    2020-09-29 01:57:09,937 [main] INFO  z.l.BlobRepositoryChangeDashMetadataKeysToUnderscore - Using container: model9-ci
    2020-09-29 01:57:10,165 [main] INFO  i.m.b.c.o.BucketValidator - Object store connectivity has been established successfully
    2020-09-29 01:57:10,413 [main] INFO  zosbackupserver.BootStrap - Model9 Version: v1.7.0 Build 666ff604 Started
    2020-09-29 01:57:13,799 [main] INFO  zosbackupserver.ApplicationLoader - Started ApplicationLoader in 30.488 seconds (JVM running for 39.514)

  5. The installation is complete. To stop, start or restart the server:

    docker stop|start|restart model9-v1.7.0
    docker stop|start|restart model9db
    docker stop|start|restart minio
  6. Display the server’s resource consumption using the following commands:

    docker stats model9-v1.7.0
    docker stats model9db
    docker stats minio
  7. Display the containers’ health status with the following command, and check the relevant logs if necessary:

    docker ps -a

Optional: Install the Stand-Alone Program for Stand-Alone Restore

Model9 full volume dumps can be used for stand-alone restore. To prepare a Bare-Metal recovery restorable volume, the stand-alone program must be installed on the server. The UI provides a special action to prepare a stand-alone copy from a regular full volume dump. The installation guide describes the required steps for enabling the creation of stand-alone copies. See the Model9 Administrator and User Guide for:

  1. How to prepare a Stand-Alone copy.

  2. How to perform a Stand-Alone restore.

Creating a Stand-Alone Copy - Requirements

Creating a stand-alone copy requires the following DFDSS files to be saved in the $MODEL9_HOME/SAbackup path:

  • DFSMSDSS.ins

  • DFSMSDSS.IMAGE

  • DFSMSDSS.PREFIX

These files can be obtained from the IBM Customized Offering Driver which can be downloaded from Shopz free of charge.

Note

Do not change the names or letter case of the DFSMSDSS files.
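
For example, assuming the three DFDSS files were downloaded to /tmp and MODEL9_HOME is set, they can be copied into place and checked as follows (the source paths are illustrative):

mkdir -p $MODEL9_HOME/SAbackup
cp /tmp/DFSMSDSS.ins /tmp/DFSMSDSS.IMAGE /tmp/DFSMSDSS.PREFIX $MODEL9_HOME/SAbackup/
# Confirm the file names and letter case were preserved
ls -l $MODEL9_HOME/SAbackup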

Stand-Alone Restore using FTP - Requirements

To perform a stand-alone restore from removable media accessed via FTP, install the VSFTPD default server using the following command:

#Ubuntu
sudo apt-get install vsftpd

#RHEL
sudo yum install vsftpd

A local user with sudo permissions can run the following systemctl commands to enable and start the service:

sudo systemctl enable vsftpd
sudo systemctl start vsftpd
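
Optionally, confirm that the FTP service is enabled and running:

sudo systemctl status vsftpd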

Note

The SAbackups directory should be used to IPL from the HMC.

Stand-Alone Restore from a USB - Requirements

The USB device should be formatted using the FAT32 file system, and the stand-alone copy can reside in any directory on the device except the root path.
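
As a sketch only, a USB partition can be formatted as FAT32 on Linux with the following command; /dev/sdX1 is a placeholder for the actual partition, and formatting erases all data on it:

sudo mkfs.vfat -F 32 /dev/sdX1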

Installing the Model9 management server on zCX

Prerequisites

Prepare the environment for installation of the server on zCX by following these steps:

License key

Obtain a license key from Model9 by opening a “new license request” in the Model9 service portal: https://model9.atlassian.net/servicedesk/customer/portals

The output of the z/OS command “D M=CPU” is required.

zCX Configuration

  1. Verify that the zCX instance has at least 8GB of memory.

  2. Verify that the zCX root filesystem has at least 8GB of storage space.

  3. Verify that the zCX data filesystem has at least 40GB of storage space (Extra data volumes can be added dynamically after instance creation).

Docker

  1. Create docker volumes for the Model9 management server and database:

    docker volume create model9
    docker volume create model9db
  2. Create a docker instance of alpine linux to unzip and edit the installation files.

    #Running an alpine container and mounting the model9 docker volume
    docker run -d --rm --name dummy -v model9:/root s390x/alpine \
    tail -f /dev/null
  3. Upload the s390x installation zip to the zCX instance.

  4. Copy the s390x installation zip from the zCX instance to the alpine container (one line):

    docker cp model9-v1.7.0_build_666ff604-s390x.zip dummy:/root/model9-v1.7.0_build_666ff604-s390x.zip
  5. Optional: Install bash for advanced file edit capabilities

    docker exec -it dummy sh
    apk update && apk add bash

Step 1: Unzip the installation file

Create the filesystem hierarchy using the following commands:

docker exec -it dummy sh
cd /root
unzip /root/model9-v1.7.0_build_666ff604-s390x.zip
#Logout of Alpine container (CTRL+D)

Step 2: Copy the containers to the zCX instance

Copy the docker containers from the alpine docker container:

docker cp dummy:/root/model9-v1.7.0_build_666ff604-s390x.docker ./
docker cp dummy:/root/postgres-s390x-12.3.docker.gz ./

Step 3: Load the docker container to the zCX instance

Load the docker images using the following commands:

docker load -i model9-v1.7.0_build_666ff604-s390x.docker
docker load -i postgres-s390x-12.3.docker.gz

Step 4: Start the Model9 database container

  1. Start the Model9 PostgreSQL database container using the following command:

    docker run -p 127.0.0.1:5432:5432 \
    -v model9db:/var/lib/postgresql/data:z \
    --name model9db --restart unless-stopped \
    -e POSTGRES_PASSWORD=model9 -e POSTGRES_DB=model9 -d s390x/postgres
  2. Verify the health status of the container and make sure it is ready to accept connections by issuing the following command and verifying its output as shown in the following example:

    docker logs model9db

Step 5: Update the Model9 management server parameters file

Login to the alpine container and edit the model9-local.yml file:

docker exec -it dummy sh
cd /root
vi conf/model9-local.yml
#Logout of Alpine container (CTRL+D)

Some of the parameters are explained below:

model9.licenseKey: <license-key>

model9.home: 'MODEL9_HOME'

model9.security.dataInFlight.skipAgentHostNameVerification: true

model9.security.dataInFlight.truststore.fileName: 'MODEL9_HOME/keys/model9-backup-truststore.jks'
model9.security.dataInFlight.truststore.type: "JKS"
model9.security.dataInFlight.truststore.password: "model9"
model9.security.dataInFlight.keystore.fileName: 'MODEL9_HOME/keys/model9-backup-server.p12'
model9.security.dataInFlight.keystore.type: "PKCS12"
model9.security.dataInFlight.keystore.password: "model9"

model9.session.timeout.minutes: 30

model9.master_agent.name: "<ip_address>"
model9.master_agent.port: <port>

# model9.objstore.resources.container.name: model9-data
# model9.objstore.endpoint.api.id: s3
model9.objstore.endpoint.url: http://minio:9000
model9.objstore.endpoint.userid: <object store access key>
model9.objstore.endpoint.password: <object store secret>

model9.runlogs.expirationScanIntervalMinutes: <min>
model9.runlogs.maxRetentionPeriodDays: <days>

dataSource.user: postgres
dataSource.password: model9
  1. License Key – A valid Model9 license key as obtained in the prerequisites section. When using multiple keys for multiple CPCs, specify one of the keys in the server’s yml file. The server-initiated actions are carried out by the agent using its own defined license. The license key specified for the server is used for displaying a message regarding the upcoming expiration of the license.

  2. Session timeout minutes - Specify the number of minutes following which an inactive UI session will end. The default is 30 minutes.

  3. Master Agent – The agent running on z/OS which verifies the UI login credentials, hostname, IP address and port number.

    Note

    Specifying a distributed virtual IP address (Distributed VIPA) can provide high availability by allowing the use of agent groups and multiple agents. See the Administrator and User Guide for more details.

  4. Objstore endpoint – object storage information including:

    Parameter

    Description

    Required

    Value

    model9.objstore.resources.container.name

    Container/bucket name

    no

    default: model9-data

    model9.objstore.endpoint.url

    URL address of local or remote object storage, both HTTP and HTTPS** are supported

    yes

    default: none

    Amazon AWS*: https://s3.amazonaws.com

    Google Cloud Storage: https://storage.googleapis.com

    model9.objstore.endpoint.userid

    Access key to object storage

    yes

    default: none

    model9.objstore.endpoint.password

    Secret key to object storage

    yes

    default: none

    model9.objstore.endpoint.api.id

    The object storage API name

    no

    default: s3

    Amazon AWS*: aws-s3

    Microsoft Azure: azureblob

    api.s3.v4signatures

    When using object storage that uses V4 signatures, set this parameter to ‘true’ in addition to api.id: s3

    no

    default: false; Cohesity: true; HCP-CS: true

    no.verify.ssl

    when using the HTTPS protocol, whether to avoid SSL certificate verifications

    no

    default: true

    * When using Amazon S3, see Appendix C: AWS S3 bucket permissions.

    ** Using HTTPS for the object storage URL parameter enables Data-in-Flight encryption.

  5. Run logs expiration - Setting these parameters will trigger an automatic deletion of run logs from the server. Please note that the deletion is non-recoverable. The automatic deletion will not be executed as long as one of the following parameters is set to (-1):

    Parameter

    Description

    Required

    Value

    model9.runlogs.expirationScanIntervalMinutes

    This parameter determines the frequency of running the deletion process of old run logs.

    no

    default: -1 (never)

    model9.runlogs.maxRetentionPeriodDays

    This parameter determines after how many days a run log will expire and can be deleted by the automatic deletion process.

    no

    default: -1 (never)

  6. DataSource - DB connection information.

Step 6: Starting the Model9 server

Once the object storage is available and the PostgreSQL container is running, start the server:

docker run -d -p 0.0.0.0:443:443 -p 0.0.0.0:80:80 \
-v model9:/model9:z -h $(hostname) --restart unless-stopped \
-e "TZ=America/New_York" \
-e "CATALINA_OPTS=-Xmx2048m -Djdk.nativeCBC=false -Xjit:maxOnsiteCacheSlotForInstanceOf=0" \
--link minio:minio --link model9db:model9db \
--name model9-v1.7.0 model9:v1.7.0.666ff604

Installing the Model9 agent

Prerequisites

License key

Obtain your license key from Model9 by opening a “new license request” in the Model9 service portal: https://model9.atlassian.net/servicedesk/customer/portals

The output of the z/OS command “D M=CPU” is required for obtaining a new license.

Java version

Verify that the z/OS Java is version 8 64-bit SR5 FP16 or above. Change to the Java installation directory and verify the Java version using the following commands:

cd /usr/lpp/java/J8.0_64/bin
./java -version
java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 8.0.5.27 - pmz6480sr5fp27-20190104_01(SR5 FP16))
IBM J9 VM (build 2.9, JRE 1.8.0 z/OS s390x-64-Bit Compressed References 20181219_405297 (JIT enabled, AOT enabled)
OpenJ9 - 3f2d574
OMR - 109ba5b
IBM - e2996d1)
JCL - 20190104_01 based on Oracle jdk8u191-b26

UTC date and time

The object storage protocol requires the z/OS USS and object storage UTC date and time to match. Use the following USS command to display and verify the UTC date and time:

date -u

Note

If the UTC date and time do not match, the agent will fail with an HTTP error code 403 while trying to connect to the object storage.

JZOS setup

The Agent is invoked using the JZOS Batch Java Launcher which is a component of the IBM JVM for z/OS. To verify that the JZOS is correctly configured to run the Agent, check the JZOS JCL procedures and make sure member JVMPRC86 is located in a site standard JES PROCLIB. If any of the JZOS resources are missing, copy them from the Java installation to your installation:

//M9JAVAP JOB ACCT#,SYSPROG,TIME=NOLIMIT,REGION=0M,
// NOTIFY=&SYSUID,MSGLEVEL=(1,1),MSGCLASS=X
//COPYFILE EXEC PGM=IKJEFT01
//IN DD PATH='/usr/lpp/java/J8.0_64/mvstools/samples/jcl/JVMPRC86'
//OUT DD DISP=SHR,DSN=SYS1.PROCLIB(JVMPRC86)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
OCOPY INDD(IN) OUTDD(OUT) TEXT
/*
//

Note

For more details about JZOS setup, refer to the JZOS Installation and User Guide: http://publibfp.dhe.ibm.com/epubs/pdf/ajvc0120.pdf

Step 1: Allocate and mount the ZFS filesystem

Create a dedicated zFS file system for the Model9 Agent installation and a mount point for the new zFS, as demonstrated in the following JCL. Edit the JCL using the site’s standard naming conventions for the mount point, the zFS data set name, and the volume serial number if the zFS is not SMS-managed.

//M9AGTZFS JOB ACCT#,SYSPROG,TIME=NOLIMIT,REGION=0M,
// NOTIFY=&SYSUID,MSGLEVEL=(1,1),MSGCLASS=X
//* Create a Model9 installation directory 
//CREATE   EXEC   PGM=BPXBATCH,      
// PARM='SH mkdir -p /usr/lpp/model9'
//* Define and format a new zFS for Model9 
//DEFINE   EXEC   PGM=IDCAMS
//SYSPRINT DD     SYSOUT=*
//SYSIN    DD     *
 DEFINE CLUSTER (NAME(SYS2.MODEL9.ZFS) -
 VOLUMES(xxxxxx) -
 LINEAR CYL(200 50))
/*
//FORMAT   EXEC   PGM=IOEAGFMT,REGION=0M,
// PARM=('-aggregate SYS2.MODEL9.ZFS -compat')
//SYSPRINT DD     SYSOUT=*
//STDOUT   DD     SYSOUT=*
//STDERR   DD     SYSOUT=*
//* Mount the newly defined zFS to the Model9 installation directory
//MOUNT    EXEC   PGM=IKJEFT01,DYNAMNBR=10
//SYSTSPRT DD     SYSOUT=*
//SYSTSIN  DD     *
 MOUNT FILESYSTEM('SYS2.MODEL9.ZFS') +
 MOUNTPOINT('/usr/lpp/model9') +
 TYPE(ZFS)
/*
//

Note

The parameters for the IOEAGFMT utility and the mount commands are case sensitive.

To automatically and permanently mount the ZFS after an IPL, add the mount command to the BPXPRMxx as shown in the following example:

MOUNT FILESYSTEM('SYS2.MODEL9.ZFS')
TYPE(ZFS)
MODE(RDWR)
MOUNTPOINT('/usr/lpp/model9') /* AUTOMOVE */

Note

For a Sysplex environment, remove the comment marks from the term AUTOMOVE in the JCL above to enable file system migration between Sysplex members.

Step 2: Upload the Model9 agent TAR file to the mainframe

Use an FTP utility to upload the Model9 agent’s installation tar file to the model9 directory created in the previous step. Use Passive Mode if supported by the FTP client. The tar file must be uploaded in binary mode as shown in the following example:

$ ftp mf-lp1
Connected to mf-lp1.
220-FTPD1 IBM FTP CS V2R2 at mf-lp1, 06:20:40 on 2017-02-23.
220 Connection will not timeout.
Name (mf-lp1:m9user): m9user
331 Send password please.
Password:
230 M9U is logged on. Working directory is "M9U.".
Remote system type is MVS.
ftp> cd /usr/lpp/model9/
250 HFS directory /usr/lpp/model9/ is the current working directory
ftp> bin
200 Representation type is Image
ftp> put model9-v1.7.0_build_666ff604-agent.tar
local: model9-v1.7.0_build_666ff604-agent.tar remote: model9-v1.7.0_build_666ff604-agent.tar
229 Entering Extended Passive Mode (|||1026|)
125 Storing data set /usr/lpp/model9/model9-v1.7.0_build_666ff604-agent.tar
250 Transfer completed successfully.
4483584 bytes sent in 00:02 (1.95 MiB/s)
ftp> quit
221 Quit command received. Goodbye.

Step 3: Extract the agent files from the TAR file

Use the tar command in the z/OS UNIX shell to extract the agent files. During extraction, all agent files are saved in the same directory. The extraction must be performed by a user who has the following permissions in the FACILITY class:

  • BPX.FILEATTR.APF ACC(READ)

  • BPX.FILEATTR.PROGCTL ACC(READ)

  • BPX.FILEATTR.SHARELIB ACC(READ)

The directory name usually includes the agent’s release number. It is recommended to create a symbolic link to the directory that does not include the release number. This makes future upgrades more transparent, as shown in the following example:

TSO OMVS
su
cd /usr/lpp/model9/
tar -xpf model9-v1.7.0_build_666ff604-agent.tar
# define the alias name “agent”
ln -s model9-v1.7.0_build_666ff604-agent agent

Step 4: Create and populate the configuration directory

Create and copy the Model9 sample configuration directory using the following commands:

cd /usr/lpp/model9
mkdir conf
cp agent/sampleConf/* conf/

Step 5: Create a program-controlled version of PAX

The Model9 z/OS UNIX files backup uses PAX. This functionality requires creation of a program-controlled version of PAX. Use the following commands:

cd /usr/lpp/model9
mkdir bin
cp -p /bin/pax bin
# Set program-control flag
extattr +p bin/pax
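
To confirm that the program-control flag was set, list the file with its extended attributes; on z/OS UNIX, ls -E displays them, and the p flag should be present for bin/pax:

ls -E bin/pax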

Step 6: Copy the Model9 libraries from USS to PDS

Edit the JCL CPY#PDS located in /usr/lpp/model9/agent/installation/ to create the Model9 LOADLIB, SAMPLIB and EXEC PDS files. Submit the JCL.

Step 7: Install the Model9 Command Line Interface

The Model9 CLI (Command Line Interface) provides an interface for issuing Model9 commands from TSO or JCL. Install the feature by creating a listener directory under the agent’s main path:

cd /usr/lpp/model9/
mkdir listener
chmod 777 listener
chmod +t listener
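
A quick check of the directory permissions; the mode should display as drwxrwxrwt, confirming that the sticky bit was set:

ls -ld listener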

Customize the M9CLI rexx in the EXEC PDS to match installation standards:

fifodir = "/usr/lpp/model9/listener"
loaddir = "SYS2.MODEL9.V170.LOADLIB"

Note

Copy the M9CLI EXEC to your site’s local EXEC library.

Step 8: Define the agent to RACF

Use the sample JCL M9USERST in the SAMPLIB PDS to define the security settings required by the agent. M9USERST defines permissions using discrete profiles. If your site uses generic profiles, update the JCL accordingly. If your site uses unprotected resources, no further action is needed.

Note

Review and update the JCL to match local site standards. The SHARED parameter is specified by default. The SHARED.IDS permission is required to run the JCL.

Submit the JCL.

Step 9: Customize the Model9 agent start procedure

Copy the sample JCL M9AGENT from the SAMPLIB PDS to a local PROCLIB member. Update the JCL:

Update

Description

DD STEPLIB

Model9 installation LOADLIB

PWD environment variable

Model9 agent’s installation path

CONF_HOME environment variable*

Model9 agent’s configuration directory path

*CONF_HOME can be used for activating more than one agent in the same LPAR. The parameter will allow the agents to use the same Model9 installation files and libraries, but have a different configuration directory. The recommendation is to have one agent per LPAR, and to have all agents in the same GRS-complex point to the same Model9 complex. However, additional agents in the same LPAR may be required if using a sub-plex, having both development and production environments or pointing the agents to different cloud storage. CONF_HOME must precede the stdenv-main.sh statement. The following is a sample use of the parameter:

//STDENV DD *
export PWD=/usr/lpp/model9/agent
export CONF_HOME=$PWD/../conf
export ENV=agent
. $PWD/scripts/stdenv-main.sh
//

Note

Add the agent procedure to the system’s startup process.

Step 10: Update the Model9 agent configuration

Before starting the agent, update the following configuration files at /usr/lpp/model9/conf:

model9-stdenv.sh

Parameter

Description

Required

Default

JAVA_HOME

64-bit java home path

Yes

None

TIME_ZONE

Java time zone, e.g. America/New_York

Yes

None

TCPIP_NAME

TCP/IP stack used by the agent *

No

TCPIP

AGENT_MEMORY

Allocated Java heap space for the agent. 1 GB is the minimum value.

No

1g

LIFE_CYCLE_MEMORY

Allocated Java heap space for life cycle management. 256 MB is the minimum value.

No

256m

SAPI_MEMORY

Allocated Java heap space for the server API (M9SAPI). 56 MB is the minimum value.

No

56m

SAPI_NETWORK_DEBUG

Enable network level debug for server API (M9SAPI). Expected values: “true” | “false”

No

false

SAPI_KEYRING_NAME

The SAF keyring name to be used as a truststore during server API certificate validation. If specified, the keyring will be searched at: &SYSUID/<KeyRingName>

No

None

* If running in an environment with multiple TCP/IP stacks, use the DISPLAY TCPIP operator command to display the TCP/IP stack name as shown in the following example:


agent.yml

Parameter

Description

Required

Default

port

The agent’s listening port for server HTTPS communication

No

9999

keystore

An agent keystore in JKS format which contains only one key entry to be used by the agent for incoming HTTPS requests. See Optional: Using the organization's certification authority.

No

agent.jks

keystore_password

The provided keystore password

No

agent

truststore

An agent truststore in JKS format which the agent will use to trust incoming HTTPS requests. See Optional: Using the organization's certification authority.

No

keystore value

truststore_password

The provided truststore password

No

keystore value

Configure the object storage settings by providing the following parameters:

Parameter

Description

Required

Value

objstore.endpoint.url

The URL of the object storage (Should start with “http://” or “https://”)

yes

default: none

objstore.endpoint.userid

Access key to object storage

yes

default: none

objstore.endpoint.password

Secret key to object storage

yes

default: none

objstore.resources.container.name

Container/bucket name

yes

default: model9-data

objstore.endpoint.api.id

The object storage API name

no

default: s3

Amazon AWS: aws-s3

Microsoft Azure: azureblob

objstore.endpoint.api.s3.v4signatures

When using object storage that uses V4 signatures, set this parameter to ‘true’, in addition to api.id: s3

no

default: false; Cohesity: true; HCP-CS: true

objstore.endpoint.no.verify.ssl

When using the HTTPS protocol, set to ‘true’ to avoid SSL certificate verifications*

no

default: true

* In order to enable trust validation on the object storage certificate use the objstore.endpoint.no.verify.ssl parameter and set it to ‘false’.

The agent.yml file may contain additional parameters, setting them will override the defaults:

Parameter Name

Description

Default value

objstore.endpoint.shared.archive.volume

Archived volume serial name

M9ARCH

objstore.functionality.backup. threadpool.size

Number of parallel threads for backup and archive runs. Increasing this parameter requires verifying that sufficient Java heap space is defined for the agent.

20

load.balancing.group.name

A group of agents that can share the work of a single policy.

&SYSPLEX

restore.progress.interval.percentage

Progress interval by percentage

10

restore.progress.minimum.size.mb

The minimum size in MB of a backup/archive action at which point the Agent starts reporting recall/restore progress

150

objstore.resources.complex.id

Defines the resources the agent can process.

“group-&SYSPLEX”

Lifecycle.deleteExpired.gdg.minimumDays

The minimum number of days that a data set will be kept after being archived. After the number of minimum days has passed, the data set will be checked whether it is eligible for deletion.

7

Lifecycle.deleteExpired.gdg.intervalDays

After the minimum number of days has passed, the data set will be checked whether it is eligible for deletion. The data set will be checked every X days.

4

cli.functionality.archive.compression

The compression type used for CLI archive execution.

gzip

Resource complex and load balancing parameters

Agents sharing the same resource complex can all interact with the same resources. Actions such as backup/archive/recall/restore/list are all dependent on the resource complex parameter. Agents not sharing the same resource complex will not be able to list/restore/recall backups/archives created on a different resource complex. By default, all agents in the same Sysplex share the same resource complex. To update the default value of the resource complex, add the parameter to your agent.yml configuration file with a new value. Within a resource complex, you can define one or more load balancing groups. See the Administrator and User Guide for more details.

Note

An agent can only access resources backed up or archived to the complex ID defined in its configuration.

Step 11: Update the Model9 agent license

Update the license file at /usr/lpp/model9/conf/model9-licenses.txt. The file can accommodate several licenses, for example, the license of all the LPARS in the Sysplex and the Disaster Recovery site’s license.

  • Each license should be written on a separate line.

  • Comments should be preceded with a “#” sign.

When multiple licenses are specified in the file, the most fitting license will be used, according to the following rules:

  • The CPUID correlates with the CPUID of the executing LPAR. This is mandatory.

  • From the licenses that correlate with the CPUID of the executing LPAR, the one with the latest expiration date will be used.

The details of the license in use will be displayed in the agent’s log and in the UI. If the agent cannot find a valid license, an error message will be displayed as a WTO, and the full error message will be available in STDOUT.

Optional: Secure the agent - server communication

By default, the agent accepts requests from trusted clients using a self-signed certificate generated by Model9. In a production environment, it is recommended to replace the certificate with a signed organizational CA certificate.

Before you start, verify that the server has:

  1. A JKS type truststore containing the CA chain certificates.

  2. A PKCS12 type keystore containing the server’s private key and its certificate, signed by the root CA.

Verify that the agent has:

  1. A JKS type truststore containing the CA chain certificates (same as the server’s).

  2. A JKS type keystore containing the agent’s private key and its certificate, signed by the root CA.

To replace the default certificates, run the following commands from $MODEL9_HOME/keys path:

  1. Create the certificate requests: - for the server’s certificate:

    openssl genrsa -out server.key 2048
    openssl req -new -key server.key -out server.csr

    - for the agent’s certificate:

    openssl genrsa -out agent.key 2048
    openssl req -new -key agent.key -out agent.csr
  2. Send the two certificate request files to your local CA for signing.

  3. Request the CA chain certificate as well.

  4. Import the certificates on the server / any Linux:

    For both the server and the agent: import the CA key into the JKS - Agent/Server truststore:

    keytool -keystore rootCA.jks -import -v -alias root -file rootCA.cer -keypass model9

    For the server: import the certificate and private key into the PKCS12 store - Server keystore:

    #OpenSSL to create p12 store
    openssl pkcs12 -export -in server.cer -inkey server.key -name agent -out server.p12

    For the agent: import the certificate and private key into the Agent keystore:

    #OpenSSL to create p12 store
    openssl pkcs12 -export -in agent.cer -inkey agent.key -name agent -out agent.p12
    #Import the p12 store into JKS
    keytool -importkeystore -deststorepass model9 -destkeystore agent.jks -srckeystore agent.p12 -srcstoretype PKCS12
  5. The server:

    1. Upload the server.p12 file to the server’s $MODEL9_HOME/keys path.

    2. Update the server’s configuration file on the Linux system (model9-local.yml file):

      • keystore -> filename – should point to the server.p12 (for example: MODEL9_HOME/keys/server.p12)

      • truststore -> filename – should point to the rootCA.jks (for example: MODEL9_HOME/keys/rootCA.jks)

  6. The agent:

    1. upload the agent.jks and rootCA.jks files to the Agent’s conf path on the z/OS (binary mode).

    2. Update the agent’s configuration file on the z/OS (agent.yml file):

      • keystore – should point to the agent.jks file (for example: ../conf/agent.jks)

      • truststore – should point to the rootCA.jks file (for example: ../conf/rootCA.jks)

Step 12: Install automatic recall

The Model9 Automatic recall Hook allows recall of archived files from within z/OS in a transparent manner. To install the automatic recall hook:

Update the PROGxx configuration by adding the following statements:

APF ADD DSNAME(SYS2.MODEL9.V170.LOADLIB) SMS
LPA ADD DSNAME(SYS2.MODEL9.V170.LOADLIB) MOD(ZM9CPTN)
LPA ADD DSNAME(SYS2.MODEL9.V170.LOADLIB) MOD(ZM9S26X)
EXIT ADD EXITNAME(ZM9P_S026) MOD(ZM9S26X) PARM('M9ARCH')

Apply the changes using the SET PROG=xx operator command from any console:

SET PROG=xx

Note

The hook and exit must be loaded to the Dynamic LPA in order for them to function correctly; do not use MLPA or PLPA to load the modules.

Copy the M9HOOK and M9UNHOOK from the SAMPLIB PDS to a local PROCLIB member. Customize M9HOOK accordingly and activate it using the following command:

S M9HOOK

Note

Add the M9HOOK JCL to your standard IPL process.

If an existing Model9 Hook has been uninstalled, then a new copy of the ZM9CPTN module must be loaded to the Dynamic LPA using the following command:

SETPROG LPA ADD DSNAME=SYS2.MODEL9.V170.LOADLIB MOD=ZM9CPTN

When using another data management product together with Model9 Cloud Data Manager, add the following DD statement to the other product’s procedure to avoid collisions:

//ZM9$NORC DD DUMMY

Restart the address space after applying the DD.

More information can be found in the Model9 online knowledge base.

Step 13: Life cycle management

The Model9 life cycle management JCL is responsible for automatically deleting archived data sets that have expired. A sample JCL, M9LIFECY, can be found in the Model9 SAMPLIB PDS.

  1. Grant your site scheduler permission for surrogate access to the M9USER:

    RDEFINE SURROGAT M9USER.SUBMIT UACC(NONE)
    PERMIT M9USER.SUBMIT CLASS(SURROGAT) ID(<scheduler>) ACC(READ)
    SETR REFRESH RACLIST(SURROGAT)
  2. Update the life cycle management JCL:

    Update                       Value description
    DD STEPLIB                   Model9 installation LOADLIB
    PWD environment variable     Model9 agent’s installation path

  3. Submit the life cycle management JCL in simulate mode by setting --simulate to ‘yes’. The JCL will list all the data sets that would have been deleted, without actually deleting them.

  4. Submit the life cycle management JCL in the initial mode by setting --simulate to ‘no’ or removing the parameter. The JCL will scan all archived data sets with expiration dates in the past, up to today’s date. The next time life cycle management runs, it will resume where the previous process ended.

Note

Life cycle management JCL should be scheduled daily via your site scheduler.

Optional: Configure the Remote Server API M9SAPI

The optional Remote Server API feature allows running of Model9 policies from within z/OS using standard JCL. To enable the Remote Server API feature:

  1. Copy the sample JCL PROC M9SAPI from the SAMPLIB PDS to a local PROCLIB file and edit the procedure according to the instructions in the file. The JCL M9SAPIJ sample can be used to run a policy.

  2. The Remote Server API uses HTTPS to communicate with the server. If a trusted certificate is not defined for the server, set VRFYCERT=NO in the M9SAPI procedure to skip certificate validations.

  3. If a valid certificate is defined, make sure the user running the M9SAPIJ JCL has the correct SAF keyring defined to enable validation of the server certificate. The SAPI_KEYRING_NAME setting defined in the sample model9-stdenv.sh configuration can be used to specify the keyring name.

    Note

    Communications between the Remote Server API and the Model9 Server are encrypted regardless of the VRFYCERT value.

  4. When verifying the certificate used by the Remote Server API, it must be defined to the Resource Access Control Facility (RACF) using a keyring.

    1. Add a digital certificate to RACF, and create a keyring using the following commands:

      RACDCERT ADD('p12-cert-dataset-name') CERTAUTH TRUST WITHLABEL('label') PASSWORD('pkcs12-password')
      RACDCERT ADDRING(ring-name) ID(username)
      SETROPTS RACLIST(DIGTRING) REFRESH //if DIGTRING class is RACFLISTed
      RACDCERT CONNECT(CERTAUTH LABEL('label') RING(ring-name)) ID(username)
      SETROPTS RACLIST(DIGTCERT, DIGTRING) REFRESH //if DIGTCERT & DIGTRING classes are RACFLISTed
    2. If a keyring is used, verify that the user running the Remote Server API JCL has the following permissions:

      IRR.DIGTCERT.LISTRING CLASS(FACILITY) ACCESS(READ)
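
      For example, assuming the profile is already defined, the permission can be granted with standard RACF commands similar to the following (replace ##SAPI_USER## with the user or group that runs the Remote Server API JCL):

      PERMIT IRR.DIGTCERT.LISTRING CLASS(FACILITY) ID(##SAPI_USER##) ACCESS(READ)
      SETROPTS RACLIST(FACILITY) REFRESH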

Optional: Configure ISPF to use M9ARCH as ML2 default volume

ISPF allows the definition of only one Level 2 migration volume. The default migration volume is MIGRAT. In order to use Model9 automatic recall function, the default volume must be set to M9ARCH, using the ISPF Configuration utility. Failing to change the migration volume will result in the following errors in ISPF screen 3.4:

Browse / Edit / View

Situation: Automatic recall will be rightfully triggered but the screen will continue to display M9ARCH as the VOLSER, instead of the DASD. The message "Tape not supported" will appear on the top right of the ISPF screen.

Resolution: Enter REFRESH on the command line. The screen will be refreshed, showing the correct VOLSER, and the file will be available.

LISTC or performing a REXX

Situation: Automatic recall will be unnecessarily triggered and the data set will be recalled.

Resolution: Use "TSO LISTC" instead of typing LISTC next to the data set name.

Note

Performing this step will cause any other auto-recall software to suffer from these errors

To change the default ISPF settings:

  1. Start the ISPF Configuration utility (TSO ISPCCONF) and select the site defined in the configuration table.

  2. Select Option 1 to modify the existing table.

  3. Change the “Volume for Migrated Data Sets name” to M9ARCH - the default Model9 archive volume name.

  4. Save the settings.

  5. Review the newly created configuration table file.

Either create and install the SMP/E USERMOD containing the new configuration table or create a load module that resides on a shared library containing the ISPF settings.

Step 14: Start the Model9 agent

Start the agent from any console by issuing the following command:

S M9AGENT

Verify that the agent was started successfully. The following messages should appear:

ZM91002I MODEL9 BACKUP AGENT VERSION 1.7.0 INITIALIZING
ZM91000I MODEL9 BACKUP AGENT INITIALIZED

Upgrading the Model9 management server

Prerequisites

  1. Ensure that there are no policies scheduled to run during the upgrade operation.

  2. The only supported upgrade path is from release 1.6.0 to 1.7. This documentation is relevant for upgrading from release 1.6.0 only. If the installed release is older than that, please refer to previous installation guides for upgrade instructions.

Step 1: Upload the zip files

Set the default MODEL9_HOME environment variable using the following command:

export MODEL9_HOME=<model9 home>

Upload the zip installation file model9-v1.7.0_build_666ff604-server.zip to the designated server in binary mode.

Note

If installing the s390x version for Linux on z, use the file: model9-v1.7.0_build_666ff604-server-s390x.zip
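
For example, the upload can be performed with any binary-safe transfer tool; the user, host and target directory below are placeholders, and the file name should match the zip appropriate for your platform:

# Example only - upload to the $MODEL9_HOME directory on the management server
scp model9-v1.7.0_build_666ff604-server.zip <user>@<server>:<model9 home>/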

Step 2: Backup the server before the upgrade

  1. Stop the server and remove the Model9 docker containers that are running using the following commands:

    sudo su -
    docker stop model9-v1.6.0
    docker rm model9-v1.6.0
  2. Verify that the docker container is not running using the following command:

    docker ps -a
  3. Backup the local configuration and database:

    cd $MODEL9_HOME
    
    fileStamp=$(date +%Y-%m-%d)
    tar -czf conf-$fileStamp.tar.gz conf
    docker exec -it model9db pg_dump -p 5432 -U postgres -d model9 --compress=9 -f /tmp/model9db-$fileStamp.dump.gz
    docker cp model9db:/tmp/model9db-$fileStamp.dump.gz $MODEL9_HOME/model9db-$fileStamp.dump.gz
    docker exec -ti model9db rm /tmp/model9db-$fileStamp.dump.gz
  4. Backup the HTTPS connector settings:

    cd $MODEL9_HOME/conf
    cp connectorHttpsModel9.xml connectorHttpsModel9.xml.backup

Step 3: Unzip the installation files

The configuration file structure has changed in this release, so back up the current configuration file before upgrading the server, as shown in the following example. Unzip the installation file to $MODEL9_HOME:

# The path to model9 installation zip uploaded
export M9INSTALL=/<path>
cd $MODEL9_HOME
# Backup current configuration file
cp conf/model9-local.yml conf/model9-local.yml.backup
# On Linux on z issue:
unzip -o $M9INSTALL/model9-v1.7.0_build_666ff604-server-s390x.zip 'model9*' \
'postgres*' 'conf/connectorHttpsModel9.xml'
# On Linux issue:
unzip -o $M9INSTALL/model9-v1.7.0_build_666ff604-server.zip 'model9*' 'postgres*' \
'conf/connectorHttpsModel9.xml'

Step 4: Prepare for Model9 DB upgrade

  1. Back up the current database to an SQL dump:

    docker exec -it model9db pg_dump -p 5432 -U postgres -d model9 --compress=9 -f /tmp/model9db-v12.dump.gz
    docker cp model9db:/tmp/model9db-v12.dump.gz $MODEL9_HOME/model9db-v12.dump.gz
  2. Stop the Model9 PostgreSQL database and remove the current container and image:

    docker stop model9db
    docker rm model9db
    #on Linux on z issue:
    docker rmi s390x/postgres:latest
    
    #on Linux issue:
    docker rmi postgres:latest
  3. Load the new version of the PostgreSQL database image:

    #on Linux on z issue:
    docker load -i $MODEL9_HOME/postgres-s390x-12.3.docker.gz
    #on Linux issue:
    docker load -i $MODEL9_HOME/postgres-x86-12.3.docker.gz
  4. Start the new PostgreSQL database using the following command:

    #on Linux on z issue:
    docker run -p 127.0.0.1:5432:5432 \
    -v $MODEL9_HOME/db/data:/var/lib/postgresql/data:z \
    --name model9db --restart unless-stopped \
    -e POSTGRES_PASSWORD=model9 -e POSTGRES_DB=model9 -d s390x/postgres
    #on Linux issue:
    docker run -p 127.0.0.1:5432:5432 \
    -v $MODEL9_HOME/db/data:/var/lib/postgresql/data:z \
    --name model9db --restart unless-stopped \
    -e POSTGRES_PASSWORD=model9 -e POSTGRES_DB=model9 -d postgres

Step 5: Deploy the Model9 container

Deploy the new Model9 release container using the following command:

#on Linux on z issue:
docker load -i $MODEL9_HOME/model9-v1.7.0_build_666ff604-s390x.docker

#on Linux issue:
docker load -i $MODEL9_HOME/model9-v1.7.0_build_666ff604.docker

Step 6: Update the Model9 management server configuration

Reapply local HTTPS certificate settings to: $MODEL9_HOME/conf/connectorHttpsModel9.xml

Step 7: Update the Model9 management server log configuration file

Reapply the logback.groovy logging configuration file to: $MODEL9_HOME/conf/logback.groovy

Step 8: Start the Model9 management server

Note

The first Model9 management server startup following an upgrade may take longer than usual due to internal migration processes. Subsequent startups will not be affected.

The previous release agent(s) are not compatible with the new release of the server. Complete the agent(s) upgrade before starting to use the UI.

Once the object storage provider is available and PostgreSQL is running, start the Model9 management server using the following commands:

#on Linux on z issue:
docker run -d -p 0.0.0.0:443:443 -p 0.0.0.0:80:80 \
-v $MODEL9_HOME:/model9:z -h $(hostname) --restart unless-stopped \
-e "TZ=America/New_York" \
-e "CATALINA_OPTS=-Xmx2048m -Djdk.nativeCBC=false -Xjit:maxOnsiteCacheSlotForInstanceOf=0" \
--link minio:minio --link model9db:model9db \
--name model9-v1.7.0 model9:v1.7.0.666ff604

#on Linux issue:
docker run -d -p 0.0.0.0:443:443 -p 0.0.0.0:80:80 \
-v $MODEL9_HOME:/model9:z -h $(hostname) --restart unless-stopped \
-e "TZ=America/New_York" -e "CATALINA_OPTS=-Xmx2048m -Djdk.nativeCBC=false" \
--link minio:minio --link model9db:model9db \
--name model9-v1.7.0 model9:v1.7.0.666ff604
  1. If running backup and archive policies containing over 100k objects per night, update the Model9 server heap size to -Xmx4096m.

  2. Edit the time zone (TZ) setting to ensure proper scheduling.

  3. When using an external object storage provider other than MinIO, remove the “--link minio:minio” definition from the command (see the example after these notes).

  4. For a full description of all Docker run parameters, see the following URL: https://docs.docker.com/engine/reference/commandline/run/.
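
For example, combining notes 1 and 3 above (larger heap, external object storage without MinIO), the command on Linux would look similar to the following:

docker run -d -p 0.0.0.0:443:443 -p 0.0.0.0:80:80 \
-v $MODEL9_HOME:/model9:z -h $(hostname) --restart unless-stopped \
-e "TZ=America/New_York" -e "CATALINA_OPTS=-Xmx4096m -Djdk.nativeCBC=false" \
--link model9db:model9db \
--name model9-v1.7.0 model9:v1.7.0.666ff604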

Upgrading the Model9 agent

Prerequisites

  1. The supported upgrade path is from release 1.6.0 to 1.7. This documentation is relevant for upgrading from release 1.6.0 only. If the installed release is older than that, refer to previous installation guides.

  2. Verify that the agent’s installation prerequisites are met, see Prerequisites for details.

Note

Ensure that there are no policies scheduled to run during the upgrade operation.

Step 1: Upload the agent TAR File to the installation directory

Use an FTP or similar utility to upload the Model9 agent’s installation tar file to the Model9 agent’s installation directory. Use Passive Mode if supported by the FTP client. The tar file must be uploaded in binary mode, as shown in the following example:

$ ftp mf-lp1
Connected to mf-lp1.
220-FTPD1 IBM FTP CS V2R2 at mf-lp1, 06:20:40 on 2017-02-23.
220 Connection will not timeout.
Name (mf-lp1:m9user): m9u
331 Send password please.
Password:
230 M9U is logged on. Working directory is "M9U.".
Remote system type is MVS.
ftp> cd /usr/lpp/model9/
250 HFS directory /usr/lpp/model9/ is the current working directory
ftp> bin
200 Representation type is Image
ftp> put model9-v1.7.0_build_666ff604-agent.tar
local: model9-v1.7.0_build_666ff604-agent.tar remote: model9-v1.7.0_build_666ff604-agent.tar
229 Entering Extended Passive Mode (|||1026|)
125 Storing data set /usr/lpp/model9/model9-v1.7.0_build_666ff604-agent.tar
250 Transfer completed successfully.
xxxxxx bytes sent in 00:02 (1.95 MiB/s)
ftp> quit
221 Quit command received. Goodbye.

Step 2: Upgrade the agent binaries

In OMVS, extract the tar file and replace the agent symbolic link with a reference to the new directory, as shown in the following example:

su
cd /usr/lpp/model9/
tar -xpf model9-v1.7.0_build_666ff604-agent.tar
rm agent
ln -s model9-v1.7.0_build_666ff604-agent agent

Step 3: Copy the Model9 libraries from USS to PDS and modify

Edit and submit the JCL CPY#PDS located in /usr/lpp/model9/agent/installation/ to create the Model9 LOADLIB, SAMPLIB and EXEC PDS files.

After successful completion of CPY#PDS, update the following SAMPLIB PDS members:

Modify M9AGENT:

Update                      Description
DD STEPLIB                  Model9 installation LOADLIB
PWD environment variable    Model9 agent’s installation path

Optional parameters for M9AGENT:

Update                            Description
CONF_HOME environment variable    Model9 agent’s configuration directory path

Use the CONF_HOME parameter for activating more than one agent in the same LPAR. The parameter will allow the agents to use the same Model9 installation files and libraries, but each will have a different configuration directory. The recommendation is to have one agent per LPAR, while all agents in the same GRS-complex point to the same Model9 complex. However, additional agents in the same LPAR may be required if:

  • Using a sub-plex

  • Running both development and production environments

  • Pointing different agents to different cloud storage.

The CONF_HOME parameter must precede the stdenv-main.sh statement. The following is an example of using the parameter:

//STDENV DD *
export PWD=/usr/lpp/model9/agent
export CONF_HOME=$PWD/../conf
export ENV=agent
. $PWD/scripts/stdenv-main.sh
//

Modify M9SAPI:

Update                    Description
PROC parameter KEYRING    If using the KEYRING keyword: the parameter was replaced by the SAPI_KEYRING_NAME parameter in the model9-stdenv.sh configuration file, which specifies the keyring name
M9PATH                    Model9 agent’s installation path

Modify M9LIFECY:

Update                      Description
DD STEPLIB                  Model9 installation LOADLIB
PWD environment variable    Model9 agent’s installation path

Copy M9AGENT, M9SAPI, M9LIFECY to your local libraries and reapply site modifications.

Step 4: Upgrade the Model9 Command Line Interface

Customize the M9CLI rexx in the EXEC PDS to match installation standards:

fifodir = "/usr/lpp/model9/listener"
loaddir = "SYS2.MODEL9.V170.LOADLIB"

Copy the M9CLI EXEC to a site standard local EXEC library concatenated in the logon procedure.

Step 5: Update the Model9 agent configuration

The discovery.skip_volsers configuration has been migrated from the server’s model9-local.yml file to the agent’s agent.yml. If this configuration is specified in model9-local.yml, it should be removed and added with the same value to the agent.yml.
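
As an illustration only (the exact key layout depends on your agent.yml template), the migrated entry could look like the following; reuse the exact value that was removed from model9-local.yml:

# agent.yml - illustrative sketch
discovery.skip_volsers: <value previously set in model9-local.yml>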

Step 6: Upgrade automatic recall

  1. Use the sample JCL M9UNHOOK to uninstall the previous hook. The expected RC should be 0:

    S M9UNHOOK

    If another hook was installed on top of the Model9 hook, the uninstall process finishes with RC=4. In this case, the Model9 hook is not removed but rather logically disabled, to prevent harming the subsequent hook. This is a valid situation that will be corrected by the next IPL. It is also possible to remove the top hook and then remove the Model9 hook again.

  2. Replace the previously installed release and update the PROGxx configuration by adding the following statements:

    APF ADD DSNAME(SYS2.MODEL9.V170.LOADLIB) SMS
    LPA ADD DSNAME(SYS2.MODEL9.V170.LOADLIB) MOD(ZM9CPTN)
    LPA ADD DSNAME(SYS2.MODEL9.V170.LOADLIB) MOD(ZM9S26X)
    1. Starting with this release, there is only one EXIT, and the EXITNAME name has changed.

    2. Use the SET PROG=XX operator command to apply the changes.

    3. Verify that the command has ended successfully.

    4. The hook and exit must be loaded to the Dynamic LPA in order for them to function correctly. Do not use MLPA or PLPA to load the modules.

  3. Install the feature by copying M9HOOK from the SAMPLIB PDS to a local PROCLIB member. Customize and activate the hook using the following command. The expected RC should be 0:

    S M9HOOK
  4. If you are using another data management product together with Model9 Cloud Data Manager, add the following DD statement to the other product procedure to avoid collisions:

    //ZM9$NORC DD DUMMY
  5. Restart the address space after applying the DD.

Step 7: Define the Model9 loadlib as program controlled

If you are using the PROGRAM class, define the Model9 loadlib as program controlled.

#Only if class PROGRAM is active
RALT PROGRAM * ADDMEM('SYS2.MODEL9.V170.LOADLIB'//NOPADCHK)
PERMIT * CLASS(PROGRAM) ID(M9USER) ACC(READ)
SETR WHEN(PROGRAM) REFRESH

Step 8: Permit the agent

The Model9 agent now uses the IDCAMS DCOLLECT command and requires an increased OMVS maximum threads setting. Permit the agent as follows:

PERMIT STGADMIN.IDC.DCOLLECT CL(FACILITY) ID(M9USER) ACC(READ)
ALTUSER M9USER OMVS(THREADSMAX(500))
SETROPTS RACLIST(FACILITY) REFRESH

Step 9: CLI Archive permission

Model9 now supports archiving data sets directly from z/OS using a CLI command. Permit the authorized users as follows:

#Define the XFACILIT profiles
RDEFINE XFACILIT M9.CLI.ARCHIVE UACC(NONE)
RDEFINE XFACILIT M9.CLI.ARCHIVE.NOBCK UACC(NONE)
RDEFINE XFACILIT M9.CLI.ARCHIVE.RETPD UACC(NONE)
#Replace ##YOUR_USER/GROUP## with proper user or group name
PERMIT M9.CLI.ARCHIVE CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.CLI.ARCHIVE.NOBCK CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.CLI.ARCHIVE.RETPD CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
SETROPTS RACLIST(XFACILIT) REFRESH

Step 10: CLI Backup permission

Model9 now supports backing up data sets directly from z/OS using a CLI command. Permit the authorized users as follows:

#Define the XFACILIT profiles
RDEFINE XFACILIT M9.CLI.BACKDSN UACC(NONE)
RDEFINE XFACILIT M9.CLI.BACKDSN.PERM UACC(NONE)
RDEFINE XFACILIT M9.CLI.BACKDSN.NEWNAME UACC(NONE)
RDEFINE XFACILIT M9.CLI.BACKDSN.NEWDATE UACC(NONE)
RDEFINE XFACILIT M9.CLI.BACKDSN.NEWTIME UACC(NONE)
RDEFINE XFACILIT M9.CLI.DELBACK UACC(NONE)
RDEFINE XFACILIT M9.CLI.DELBACK.PURGE UACC(NONE)

#Replace ##YOUR_USER/GROUP## with proper user or group name
PERMIT M9.CLI.BACKDSN CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.CLI.BACKDSN.PERM CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.CLI.BACKDSN.NEWNAME CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.CLI.BACKDSN.NEWDATE CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.CLI.BACKDSN.NEWTIME CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.CLI.DELBACK CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.CLI.DELBACK.PURGE CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)

SETROPTS RACLIST(XFACILIT) REFRESH

Step 11: Start the agent

Stop the previous release agent:

P M9AGENT

Start the upgraded agent using the following command:

S M9AGENT

Verify that the agent was started successfully. The following messages should appear:

ZM91002I MODEL9 BACKUP AGENT VERSION 1.7.0 INITIALIZING
ZM91000I MODEL9 BACKUP AGENT INITIALIZED

Installing in production

Explore object storage options

Object storage provides functions that protect against data loss and data corruption. Review options such as:

  • WORM storage

  • Versioning - retaining deleted objects for an additional period of time

Verify network requirements

The Model9 agent will fully use the network resources within the available CPU and memory limits. Refrain from using the same OSA card for production workload and backup activities. If possible, use a dedicated OSA card for Model9. Otherwise, if network resources are a constraint, activate your network QOS to manage packet priorities.

Secure network communications

The agent installation chapter describes how to secure agent-to-server and agent-to-object storage communications. Perform these steps if you have not yet done so.

Associate a WLM service class to Model9 components

The agent’s performance is mainly affected by WLM definitions. When associating a WLM service class, consider its effect on policy elapsed time and on the agent’s CPU consumption. See Performance considerations for additional information.

Back up the management server and DB

Back up the management server regularly, excluding the following directories from the backup:

$MODEL9_HOME/db
$MODEL9_HOME/SAbackups
# if MinIO is used, its directory should be excluded from the Linux file level backup
/minio

Back up the management server DB using the following command:

export MODEL9_HOME=<model9 home>
fileStamp=$(date +%Y-%m-%d)
docker exec -it model9db pg_dump -p 5432 -U postgres -d model9 --compress=9 -f /tmp/model9db-$fileStamp.dump.gz
docker cp model9db:/tmp/model9db-$fileStamp.dump.gz $MODEL9_HOME/model9db-$fileStamp.dump.gz
docker exec -ti model9db rm /tmp/model9db-$fileStamp.dump.gz
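
The dump created above is a compressed plain-SQL dump, so it can later be restored into an empty model9 database. The following restore command is a sketch only (it is not part of the original procedure) and assumes the model9db container is running:

# Restore sketch - adjust the file name to the dump you want to restore
gunzip -c $MODEL9_HOME/model9db-<date>.dump.gz | \
  docker exec -i model9db psql -p 5432 -U postgres -d model9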

Schedule life cycle management regularly

Schedule life cycle management for execution on a daily basis.

Monitor policy execution

When scheduling policies using the mainframe’s automatic scheduler, monitor the M9SAPI return code.

Problem determination

Output DD description

DD Card     Description
SYSOUT      JZOS related information during startup
STDOUT      Agent: All messages, according to the debug level defined in the relevant logging.properties configuration file. The default level is ‘info’.
            Life cycle management: Model9 messages
UTILMSGS    Life cycle management: utility messages

Where to look for error messages

Automatic recall

Where to look: Agent’s STDOUT
Notes: When automatic recall is initiated, the requesting address space waits for up to 25 seconds (can be configured) for the recall to complete. If the recall does not complete within this time limit, a message is issued and the request is cancelled. The messages in the agent’s log specify the reason for failure.

CLI

Where to look: Agent’s STDOUT DD
Notes: When encountering a problem in the CLI, full error messages will appear in the agent’s log.

Backup / archive / restore / recall

Where to look: UI Backup log
Notes: The backup log contains the original output of the utilities invoked by Model9.

Policy run

Where to look: Policy run log, server’s log, agent’s STDOUT
Notes: The policy run log is accessible from the UI. The server’s log is the docker log: model9-v<v.r.m>.

Agent startup

Where to look: SYSOUT DD, STDOUT DD
Notes: If the agent ends with RC=102, it might suggest an issue with the model9-stdenv.sh file. Add the LOGLVL='+T' parameter to the agent PROC, restart the agent and search for errors under the JVMJZBL1005I message in the SYSOUT DD.

If the problem requires diagnosis by the Model9 support team and you’ve been requested to supply the logs, please use the utility described in the Model9 logs collector section to extract the information and send it to the local distributor of Model9 or to [email protected].

Model9 logs collector

When reporting a Model9 issue, attach the following logs:

Log

Collection method

Server

Use root or sudo to run the shell script located in the following path: $MODEL9_HOME/Utilities/CustomerLogsCollector.sh

The script will create a tar.gz file in $MODEL9_HOME with the current date, for example: logs.2018-10-16-11.10.06.rhel73-server1.tar.gz

Agent

Extract the M9AGENT JOB output

Performance considerations

Parallelism-related parameters in the Model9 management server

The model9-local.yml file residing in the $MODEL9_HOME/conf/ path contains all default parameters. The main section is ‘model9’ (lower-case letters), and all parameters should be indented under the model9 title as shown in the following example:

model9.parallelism.datasets.numberOfThreads: 10
model9.parallelism.volumes.numberOfThreads: 10
model9.parallelism.unix.numberOfThreads: 10
model9.parallelism.numOfFailuresPerAgent: 5

Parameter: model9.parallelism.datasets.numberOfThreads
Description: Number of parallel threads running during data set backup or archive
Default: 10

Parameter: model9.parallelism.volumes.numberOfThreads
Description: Number of parallel threads running during volume full dumps
Default: 10

Parameter: model9.parallelism.unix.numberOfThreads
Description: Number of parallel threads running during z/OS UNIX files backups
Default: 10

Parameter: model9.parallelism.numOfFailuresPerAgent
Description: Number of tolerated failures before removing an agent from a policy run
Default: 5

Improving zIIP utilization

Simultaneous Multithreading (SMT)

Working in multithreading (MT) mode allows you to run multiple threads per zIIP core, where a thread is comparable to a CP core in a pre-multithreading environment, resulting in increased zIIP processing capacity. To enable zIIP MT mode, define the PROCVIEW parameter in the LOADxx member of SYS1.IPLPARM to utilize the SMT function of z/OS. PROCVIEW defines a processor view of the core, which supports from 1 to ‘n’ threads. Related parameters are MT_ZIIP_MODE and HIPERDISPATCH in IEAOPTxx. See z/OS MVS Initialization and Tuning Reference for more information.
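
As an illustration only, and assuming two threads per zIIP core, the related parameters could be set as follows; verify the exact syntax and values for your system in the z/OS MVS Initialization and Tuning Reference:

LOADxx (SYS1.IPLPARM):
  PROCVIEW CORE,CPU_OK

IEAOPTxx:
  MT_ZIIP_MODE=2
  HIPERDISPATCH=YES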

WLM service class considerations

  • The agent utilizes zIIP engines. If the production workload also utilizes zIIP, associate the agent with a service class of a lower priority than the production workload service class, to avoid slowing down the production workload.

  • When issuing CLI commands in a highly-constrained CPU environment, verify that the issuer - whether it is a TSO userid or a batch job - has at least the same priority as the agent.

zIIP-eligible work running on CP

zIIP on CP reporting

Turning on zIIP-on-CP monitoring provides information on zIIP-eligible work that overflowed to CP. The monitoring is enabled by default only when zIIP processors are configured to the system. If no zIIP processors are configured and you would like to see how much CP would be saved by configuring zIIP processors in the system, you can set the PROJECTCPU parameter to YES in IEAOPTxx. This enables monitoring and causes the zIIP on CP chart to be displayed in the agent screen. See z/OS MVS Initialization and Tuning Reference for more information.

System wide settings

The system-wide settings of whether to allow spill of zIIP-eligible work to CP is defined in the IIPHONORPRIORITY parameter of IEAOPTxx. The default is YES, allowing standard CPs to execute zIIP and non-zIIP-eligible work in priority order. See z/OS MVS Initialization and Tuning Reference for more information.

Individual service class settings

The "honor priority" parameter allows limiting individual work from overflowing to CP regardless of the system-wide settings. Using the parameter may result in degradation in response time. See z/OS MVS Planning: Workload Management for more information.

Improving TCPIP CPU usage and throughput

Segmentation offloading

TCPIP supports offloading the work of segmentation to the OSA Express card. This feature reduces CPU usage and increases network throughput. It can be enabled via the IPCONFIG SEGMENTATIONOFFLOAD statement in the TCPIP profile.
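
For example, the following TCPIP profile statement enables the feature (shown as a sketch only; coordinate any profile change with your network administrator). The current IPCONFIG settings can be displayed with TSO NETSTAT CONFIG.

IPCONFIG SEGMENTATIONOFFLOAD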

MTU Maximum Transmission Unit size

Every TCPIP frame is broken down according to the MTU defined by the system. The z/OS default MTU value of 512 is very small and introduces unnecessary TCPIP CPU overhead. The MTU used when writing to object storage should be at least 1492.

Check with your network administrator whether jumbo frames can be utilized to further reduce the CPU overhead and improve throughput. Display the current MTU value using the commands:

TSO NETSTAT GATE

The “Pkt Sz” column represents the MTU size for each configured route. Verify the MTU size used by the route to the object storage. If no specific route to your object storage exists, the “Default” route value is used. This value should be equal to or greater than 1492.

TSO PING <object-storage-ip> (PMTU YES LENGTH 1400

This command verifies whether the entire path from this TCPIP stack to the object storage supports frames of at least 1400 bytes. If the output of this command includes “Ping #1 needs fragmentation”, contact your network administrator to resolve the issue.

Appendix A: Install MinIO S3-Proxy

When using a local storage solution, S3-Proxy software is required to handle the Model9 S3 requests to the storage. The following steps describe the installation of MinIO as the S3-Proxy. Mount the MinIO filesystem on a directory different from the Model9 home directory. For example:

# Use a separate filesystem mount on /minio
/minio

Make sure there is enough free space to accommodate the expected number of backups and archives. It is recommended to use the ‘xfs’ filesystem type. Contact your Linux administrator to allocate adequate space and ensure it is mounted.

Warning

This procedure is intended for new and unmounted block devices only; it will overwrite any data that might already exist on the device.

For <block device> it is recommended to use the partition type and not the disk, as shown in the following example:

sudo su -
export MINIO_HOME=/minio
mkdir -p $MINIO_HOME
mkfs.xfs /dev/<block device>
# add the following line to your /etc/fstab
/dev/<block device> /minio xfs defaults 0 0
mount $MINIO_HOME

UTC Date and Time

The object storage protocol requires the z/OS USS and object storage UTC times to match.

When using MinIO as object storage, the Linux server UTC must match the z/OS USS UTC. Use the following command to verify the MF UTC date/time on USS:

date -u

Run the same command on the management server Linux system to verify that the date and time match. If they do not, contact the Linux administrator to update the Linux UTC date and time.

Optional - Enabling data in flight encryption

Enable the optional Data-In-Flight encryption between the mainframe and storage system by following these steps:

Warning

Enabling Data-In-Flight encryption significantly increases zIIP CPU usage.

The default Model9 installation provides a self-signed certificate. For production-level workloads it is strongly advised to generate a site-defined certificate. Copy the default keys to the certificates directory using the following commands:

cp $MODEL9_HOME/keys/minio_private.key $MODEL9_HOME/conf/minio/certs/private.key
cp $MODEL9_HOME/keys/minio_public.crt $MODEL9_HOME/conf/minio/certs/public.crt

MinIO deployment

Deploy the MinIO application components using the following commands:

cd $MODEL9_HOME
# On Linux on z issue:
sudo docker load -i $MODEL9_HOME/minio-s390x-2018-01-02T23-07-00Z.docker
# On Linux issue:
sudo docker load -i $MODEL9_HOME/minio-x86-2018-01-02T23-07-00Z.docker

Starting the MinIO container

  1. Start the MinIO container using the following command:

    sudo docker run -d -p 0.0.0.0:9000:9000 -v $MINIO_HOME:/export:z \
    -v $MODEL9_HOME/conf/minio:/root/.minio:z --restart unless-stopped \
    --name minio minio/minio server /export

    Note

    Make sure to edit time zone settings to ensure proper scheduling.

  2. Determine the health status of the container using the following command:

    docker ps -a
  3. Verify the command returns a value of “(healthy)” as shown in the following example:

    image1.png
  4. Extract the Access Key and Secret from MinIO using the following command:

    docker logs minio
  5. Verify that the command returns the AccessKey and SecretKey as shown in the following example:

    image7.png

    The displayed details are required for completion of the installation process.

  6. Once the MinIO installation is complete, use the following commands to stop, start, or restart the container as required:

    docker stop|start|restart minio
  7. Display the MinIO container’s resource consumption using the following command:

    docker stats minio

Appendix B: Secure web communication

The default Model9 installation provides a self-signed web certificate. This certificate is used to encrypt the web information passed between your browser and the Model9 management server.

It is strongly recommended to generate a site-defined certificate to accommodate production-level workloads. Contact your security administrator if you wish to generate such a certificate.

You can also generate your own self-signed certificate to avoid browser security notifications using the following commands:

  1. Verify that the server has a valid hostname by issuing:

    hostname -s
  2. Generate self-signed keys by editing the following parameters:

    Parameter

    Description

    <password>

    The keystore password

    <server_dns>

    The server DNS name (optional)

    <server_ip>

    The server IP address

    <BackupServer>

    The certificate common name: edit according to site standards

    cd $MODEL9_HOME/keys
    keytool -genkey -alias tomcat -keystore $(hostname -s)_web_self_signed_keystore.p12 -storetype pkcs12 -storepass <password> -keyalg RSA -ext SAN=dns:<server_dns>,ip:<server_ip> -dname "cn=<BackupServer>, ou=Java, o=Model9, c=IL" -validity 3650
    chown root:root $(hostname -s)_web_self_signed_keystore.p12
    chmod 600 $(hostname -s)_web_self_signed_keystore.p12
    keytool -exportcert -alias tomcat -keystore $(hostname -s)_web_self_signed_keystore.p12 -storetype pkcs12 -storepass <password> -file $(hostname -s)_web_self_signed.cer

    Note

    When not specifying <server_dns>, remove the dns: section from the command.

  3. Add the exported certificate (.cer file) to your local workstation trusted CA according to site standards and security policies.

  4. If a site certificate or a new self-signed certificate was created, edit the server configuration file:

    vi $MODEL9_HOME/conf/connectorHttpsModel9.xml
  5. Update the keystoreFile, keystorePass, keyAlias and keyPass settings to match the information provided by the security administrator, as shown in the following example:

    <Connector port="443" protocol="org.apache.coyote.http11.Http11Protocol"
         maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
         keystoreFile="/model9/keys/web_self_signed_keystore.p12"
         keystoreType="PKCS12" keystorePass="changeit" keyAlias="tomcat"
         clientAuth="false" sslProtocol="TLS" />

    Java strictly follows the HTTPS specification for server identity (RFC 2818, Section 3.1) and IP address verification. When using a host name, it is possible to fall back to the Common Name in the Subject DN of the server certificate instead of using the Subject Alternative Name. However, when using an IP address, there must be a Subject Alternative Name entry - IP address (and not a DNS name) - in the certificate.

Appendix C: AWS S3 bucket permissions

Permit the following actions for both the bucket and all objects:

s3:PutObject

s3:GetObject

s3:ListBucketByTags

s3:ListBucketVersions

s3:ListBucket

s3:DeleteObject

s3:GetBucketLocation

Permit the following actions for the bucket:

s3:HeadBucket

The following is an example of a JSON policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>",
        "arn:aws:s3:::<BUCKET_NAME>/model9/*",
        "arn:aws:s3:::<BUCKET_NAME>/agents/*"
      ]
    },
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Action": "s3:HeadBucket",
      "Resource": "*"
    }
  ]
}

Administrator and User Guide

Release 1.7.0

Introduction

Model9 Cloud Data Manager for Mainframe consolidates the functionality of multiple storage, backup and tape management products into a single, software-defined secondary data management solution that eliminates the need for physical and virtual tape libraries. The solution dramatically reduces costs and complexity by enabling you to leverage a hybrid multi-cloud environment for all your mainframe secondary data needs, including storage, backup, long-term archive, disaster recovery, lifecycle management, and analytics.

image38.png

Cloud Data Manager consists of one or more agents running on z/OS and transferring data to and from a public or private cloud, or on-premises NAS or SAN. A management server is also connected to the agents and object storage, providing management and reporting tools.

Getting started

Before you start

Logging in to the Model9 management server requires SAF permission to M9.UI.LOGIN.

The following administrator tasks require SAF permission to M9.UI.ROLE.ADMIN:

  • Managing policies

  • Managing agents

  • Deleting resources

For both users and administrators:

Restore, recall and export actions require permissions to the corresponding profile:

M9.UI.<command> - e.g. M9.UI.RECALL for recalling a data set using the UI.

M9.UI.<command>.<keyword> - e.g. M9.UI.RESTORE.BYPASSACS for restoring a data set while bypassing ACS routines, and M9.UI.EXPORT.BYPASSACS for exporting a data set while bypassing ACS routines.
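
The following is a sketch only of how such profiles are typically defined and permitted. It assumes the M9.UI profiles reside in the same XFACILIT class used for the Model9 CLI profiles; verify the correct class and profile names with your security administrator before using it:

#Sketch only - class and profile names must be verified for your installation
RDEFINE XFACILIT M9.UI.LOGIN UACC(NONE)
RDEFINE XFACILIT M9.UI.RECALL UACC(NONE)
PERMIT M9.UI.LOGIN CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
PERMIT M9.UI.RECALL CL(XFACILIT) ID(##YOUR_USER/GROUP##) ACC(READ)
SETROPTS RACLIST(XFACILIT) REFRESH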

Log in to the management server

From your browser, enter the management server’s URL. Log in using your SAF user ID and password as defined in the LPAR where the “master agent” is active. The server can connect to multiple agents, but the login is controlled by the one that is defined in the “master agent” section of the server configuration file.

image92.png

On first login, the dashboard is empty:

image43.png

Add a new agent

Select the AGENTS tab from the top menu:

image9.png

Initially, there are no agents defined in the management server:

image12.png

Add a new agent as instructed in Defining a new agent. Any agent can be added, as long as it is first created and activated on the mainframe.

Define your first policy

Select the POLICIES tab from the top menu. When logging in for the first time, no policies exist:

image39.png

Define a new policy as instructed in Creating a new policy to be executed at the agent that you previously defined. Running a policy will create an activity. To view the activity, select the ACTIVITIES tab from the top menu and view the activity status, as described in Monitoring activities.

Log out of the management server

Select the “Log Out” option from the top right 3-dot menu:

image59.png

Looking at the dashboard

The dashboard displays information on backup, archive, full dump and import activities for one of 3 possible time periods:

24 hours

Calculated as the last 24 hours up to the current time. For example, when looking at the dashboard on Tuesday at 8:30 in the morning, the data displayed covers all the activities that were executed between 8:30 AM Monday and 8:30 AM Tuesday.

3 days

Calculated as the last full 3 days plus the time that has passed since then. For example, when looking at the dashboard on Monday morning at 9:00 AM, the data displayed will cover all the activities that were executed since Thursday at midnight, including 3 full days: Friday, Saturday, Sunday, as well as the 9 hours that have passed since Sunday at midnight.

1 week

Calculated the same way as the 3-day period, but for 7 days.

The displayed data is divided into 3 sections:

  1. Aggregated data for the activities executed during the selected time period.

  2. The overall cloud space occupied by policies run by this server, unrelated to the chosen time period.

  3. A detailed list of the activities executed during the selected time period. By default, the activities are sorted by time in descending order. Each column can be used for sorting; the sort icon is visible when clicking the column name.

image89.png

To return to the Dashboard from anywhere in the UI, select the DASHBOARD tab or the Model9 logo from the top menu.

Managing agents

To manage the agents, use the top menu to navigate to the AGENTS tab.

Defining a new agent

image70.png

Enter the required input as shown in the example above:

Name

The agent name to be displayed in the management server

Hostname/IP

The IP or DNS of the target z/OS

Port

The port used by the agent, as set during the agent installation

Test the connection from the management server to the new agent:

image24.png

An error message is displayed if an error is detected when attempting to communicate with the agent:

image6.png

Viewing agent details and status

image47.jpg

Using agent group for sysplex support and load balancing

An agent is associated with an agent group in the agent configuration file. By default, each agent is part of an agent group named after the Sysplex. When associating a policy with an agent group, the tasks are executed by the agents in the group in round-robin fashion. By default, the server sends 10 tasks per agent simultaneously.

image4.png

At the beginning of an activity, the run log identifies all the agents that are available for executing tasks. Upon completion of the task, the specific executing agent for every backup/archive operation is identified.

If an agent has more than 5 communications failures, the server stops sending it tasks during that activity. The number of allowed failures can be configured on the server.

The first phase of identifying the resources for processing is performed by a single agent.

Agent group load balancing can be used for sharing work, but also for limiting it to specific LPARs. It may be defined, for example, that all LPARs in the same Sysplex share the same resource complex, allowing them to recall and restore the same resources. However, the archive and backup policies are defined to work with an agent group containing only a subset of the LPARs.

Restore, recall and export behaviour when using an agent group policy

Recall / Restore from CLI

Recall and Restore can be performed by any agent sharing the same resource complex as the agent that performed the original backup, dump or archive. By default, the resource complex is shared by all agents in the same Sysplex, but note that only agents on LPARs that also share the same z/OS Unix System Services can be candidates for this action.

Automatic recall

Automatic recall is executed on the same LPAR that initiated the request. If there are several agents on the same LPAR, any of them may pick up the request.

Recall from the UI

The server will attempt to use the original agent that executed the archive operation. If that specific agent is not available, the action will fail.

Restore from the UI

The server will search for the first available agent from the agent group that was used for the archive. If no agents are available, the action will fail. If there are no agents connected to the agent group, the action will also fail.

Export from the UI

The server will search for the first available agent from the agent group that was used for the archive. If no agents are available, the action will fail. If there are no agents connected to the agent group, the action will also fail.

Updating existing agent details

image56.png
image46.png

When done editing, click “Save” to update the agent settings.

Deleting an agent

Deleting an agent requires that it not be associated with any policy.

image33.png

Confirm:

image69.png

Defining policies

Policies describe what activities need to be performed and at what frequency. A policy can be created, edited and deleted. When running a policy, whether automatic or manually initiated, an activity is created. Deleting a policy does not delete the resources previously created by that policy.

Use the top menu to select the POLICIES tab. Displayed are all the policies defined in the management server, their type, status and scheduling. By default, the policies are displayed in ascending alphabetical order. Each column can be used to sort the list; the sort icon is visible when clicking the column name. Additionally, a “search within results” box is available for further filtering. Click a policy name to navigate to the Policy activities page.

image21.png

Creating a new policy

To create a new policy, click the “CREATE POLICY” button on the top right. The “CREATE A NEW POLICY” page appears:

image79.png

Enter the required input as shown in the example above:

Name

Select a name for the policy. If you plan to use the M9SAPI JCL interface for running policies from the mainframe, refer to Running M9SAPI to execute the policy for naming considerations.

Description

Optional

Execution type

Indicate the z/OS environment on which this policy runs, either an agent or an agent group. The agent must be pre-defined to the management server, see Defining a new agent for more details. Assigning an agent means this policy will be executed on the LPAR in which the agent is active.

Agent grouping allows you to span work across several agents, see Using agent group for sysplex support and load balancing for more details.

Status

When ENABLED, the policy can be run automatically or manually. When DISABLED, the policy can be run manually only. Before deleting a policy, it must be set to a DISABLED status.

# of generations

This option is available for policies that create multiple backups of data sets or full volume dumps.

It affects policies defined without checking the “Use SMS policy” checkbox on the “Options” tab, by setting the number of generations that will be managed and kept in object storage, regardless of SMS attributes.

For policies defined when checking the “Use SMS policy” checkbox on the “Options” tab, this option has no effect and will be ignored.

Backing up data sets

To define a policy to back up data sets, select DATA SET BACKUP in the “Type” field.

image74.png

See Creating a new policy for a detailed explanation of the fields.

Schedule

For a description of how to use the “Schedule” tab, refer to Schedule.

Options

image41.png

Common options

For a description of the “Use SMS policy”, “Incremental backup” and “Compression” options, refer to Options.

Policy-specific options

Reset change bit

When checking this checkbox, the policy will reset the change bit after backing up the data set. The checkbox is checked by default.

Filters

For a description of the “Filters” tab and how to select resources for processing, refer to Filters.

Policy-specific filter considerations

During activity, the policy uses the filters and policy options to select resources, excluding:

  • Temporary data sets

  • ALIASes

  • VTOC Indexes and VVDSs

ZFS backup limitations

When backing up ZFS data sets that are mounted in R/W mode, the agent quiesces the file system data set for the duration of the backup in order to create a consistent copy. Applications that try to access that file system while it is quiesced will hang for the duration of the backup.

Choose one of the following options to back up your ZFS:

ZFS quiesce

If the ZFS quiesce is acceptable during a specific time window, schedule the ZFS backup to that window

Mount the file system in R/O mode

If the ZFS quiesce is not acceptable, consider remounting the file system in R/O mode if possible.

Schedule a UNIX FILES BACKUP

If you cannot change the file system to R/O mode, schedule a UNIX FILES BACKUP for that ZFS instead of a DATA SET BACKUP.

Use the ZFS filter to exclude all or include only ZFS data sets in the policy:

image11.png

A common use case is creating 2 policies selecting the same data set name pattern, where the difference is in the ZFS filter. One policy excludes the ZFS data sets, to be run during the week, when ZFS data sets are in use. The other policy includes only the ZFS data sets, to be run during the weekend, when the ZFS data sets are not in use.

Dumping full volumes

To define a policy for full volume dumps, select FULL DUMP in the “Type” field:

image87.png

See Creating a new policy for a detailed explanation of the fields.

Schedule

For a description of how to use the “Schedule” tab, refer to Schedule.

Options

image7.png

Common options

For a description of the “Use SMS policy” option, refer to Options.

Policy-specific options

Is z/VM CP Volume

Check this checkbox to have the policy perform a dump of z/VM CP volumes only. A policy can process either z/OS volumes or z/VM CP volumes.

Filters

For a description of how to use the “Filters” tab to select resources for processing, refer to Filters.

Backing up z/OS UNIX files

To define a policy for z/OS UNIX files backup, select UNIX FILE BACKUP in the “Type” field.

image23.png

See Creating a new policy for a detailed explanation of the fields.

Schedule

For a description of how to use the “Schedule” tab, refer to Schedule.

Options

image27.png

Common options

For a description of the “Incremental backup” and “Compression” options, refer to Options.

Policy-specific options

Follow symbolic links

Check this checkbox to back up directories and files referenced by symbolic links. Symbolic links are commonly used to redirect access from one directory to another. By default, the checkbox is not checked, to avoid backing up additional directories that the user might not intend to back up.

Filters

For a description of how to use the “Filters” tab to select resources for processing, refer to Filters.

Archiving data sets

To define a policy for data set archiving (migrating data sets from disk in order to free disk space), select DATA SET ARCHIVE in the “Type” field.

image2.png

See Creating a new policy for a detailed explanation of the fields.

Options

image15.png

Common options

For a description of the “Use SMS policy” and “Compression” options, refer to Options.

Policy-specific options

Primary days non-usage

This option is not available when “Use SMS policy” is checked. When it is available, it applies to all data sets, overriding the SMS attributes of SMS-managed data sets.

Allow archive only if backup exists

Check this checkbox (default) to ensure that only data sets with a current backup copy will be archived. The data set will not be archived if:

  • The data set’s change bit is on, meaning it has changed since the last backup.

  • The data set’s change bit is off, but there is no backup copy in the system.

The policy will archive the data set only if the change bit is off, and there is a valid backup copy. The policy does not check whether the backup copy is the most current one, and there is no protection from manually deleting a backup of an archived data set.

Filters

For a description of how to use the “Filters” tab to select resources for processing, refer to Filters.

Policy-specific filtering considerations

During an activity run, the policy uses the defined filters and policy options to select resources, excluding the following:

  • Non-SMS managed data sets with unmovable or absolute DSORG

  • Non-SMS managed multi-volume data sets

  • Catalogs

  • Temporary data sets

  • Data sets using symbols to specify a VOLSER (Indirect Volume Serial Support)

  • All uncataloged data sets including VTOC Indexes

  • SYS1.** data sets including VVDSs

  • Generation Data Sets that are either without a catalog entry, non-SMS managed, non-active, or whose GDG base is defined with NOSCRATCH

  • ALIASes

The catalog entry of the archived data set is not SMS-managed.

Importing data sets from tapes

The “Data Set Import” policy copies cataloged data sets from tape to object storage. The copy is identical to the original data set on tape and it is not cataloged in z/OS. Importing the same data set again will create an additional copy in the object storage. Each copy can be identified by the date and time of the import, or by a unique ID.

image57.png

See Creating a new policy for a detailed explanation of the fields.

Limiting the number of parallel tape mounts used by the policy

The “Data Set Import” policy requires tape mounts for reading the data sets from tape. The default is 3 mounts per agent; to set the maximum tape mounts allowed in parallel to a different number, change the parameter in the server configuration file:

model9.parallelism.tapeMounts.numberOfThreads: <number>

When using an agent group, each agent in the group will use up to the number specified.

Schedule

For a description of how to use the “Schedule” tab, refer to Schedule.

Options

Common options

The available compression types for “Data Set Import” are lz4, gzip or none. For the full description of the “Compression type” option, refer to Options.

Policy-specific options

Retention

The imported data sets will be candidates for deletion by the lifecycle management process according to the retention time period specified.

Forever

image36.png

Period

image61.png

Date

image63.png

Filters

For a description of how to use the “Filters” tab to select resources for processing, refer to Filters.

Policy-specific filters considerations

The “Data Set Import” policy supports cataloged data sets that reside on tapes only. Data sets that do not reside on tapes will be automatically excluded. The resources can be defined using the “Data Set Name” selection criteria, and can be further filtered by either a “Data Set Name” or a “Volume Name” filter:

image86.png

General policy definitions

Schedule

Scheduling a policy to run automatically is performed on the “Schedule” tab, enabling selection of daily, weekly, monthly or yearly scheduling:

image5.png
Options

Each policy type supports additional options. The following is a description of the options that are supported by more than one policy. Policy-specific options are described in the section describing that policy.

Use SMS policy

When checking this checkbox, the policy selects and processes only SMS-managed data sets and volumes, according to SMS attributes.

When not checking this option, the policy selects all data sets and volumes according to the policy selection criteria, and processes them according to options set by the user in the policy definition. The default is not to use SMS policy.

Incremental backup

The policy will only back up data sets that have changed since the last backup. The policy determines whether a data set has changed by examining the change bit. Incremental backup is the default behavior.

Compression

The compression to be used for the policy should be chosen according to the available components in the system. The available compression types are:

Lz4

Most suitable where zIIP / zAAP processors are available. This type usually achieves a good compression ratio and is lightweight in its zIIP / zAAP consumption.

Gzip

Suitable for both zIIP / zAAP and zEDC, when zEDC is available and defined in the agent. See the Installation Guide for more details.

DFDSS-gzip

Suitable for zEDC only. zEDC should be available and defined in the agent in order to use this compression type, see the Installation Guide for more details. Note that scheduling the IO on the zEDC compression card consumes more GCP. This compression generates the best throughput but it does not utilize zIIP engines.

DFDSS-compress

Most suitable where zIIP / zAAP / zEDC are not available, it achieves a medium compression ratio and consumes GCP.

Some policies do not support all the compression types.

Filters

When the policy is running, resources that are included in the policy selection criteria become candidates for processing. The selection criteria are defined by either a data set name pattern, volume name pattern, SMS storage group name pattern, or z/OS UNIX file name pattern. For a detailed explanation of pattern rules, click the question mark (?) icon next to the “Specify pattern(s)” field header.

image1.png

To exclude resources from the selection criteria, use the filter criteria. The filter criteria defines an additional set of rules that the candidate resource must fulfill in order to be eligible for processing by the policy.

image17.png

For example, the selection criteria above include all data set names that begin with SYS2. and exclude data sets that have an .OLD suffix.

Certain policies have additional restrictions on the type of resources that can be selected for processing, see the section describing each policy for more details.

To save the new policy, click Finish.

Managing policies

Simulating policy activity

Test-run the policy by selecting the “Simulate” option from the 3-dot menu:

image75.png

The policy will run the discovery phase to list the resources that were selected according to the “Filters” criteria.

Manually running a policy

Run the policy by selecting the “Run now” option from the 3-dot menu:

image16.png

The policy will run regardless of whether its status is enabled or disabled and regardless of its “Schedule” settings.

Viewing existing policy definitions

View the policy by selecting the “More details” option from the 3-dot menu:

image52.png

The “More details” popup will show up:

image31.png

Click “OK” to return to the previous page.

Updating an existing policy

Edit the policy by selecting the “Edit” option from the 3-dot menu:

image20.png

The “Edit policy” page appears:

image76.png
Preventing a policy from being automatically scheduled

Go to the EDIT POLICY screen as described in Updating an existing policy. To prevent the policy from being automatically scheduled, you can change the policy schedule to NONE. However, if you want to keep the automatic scheduling definitions, instead of changing the schedule, change the policy status from ENABLED to DISABLED. Click the Finish button to save your changes. The policy will not be automatically scheduled, but will still be eligible to run manually as described in Manually running a policy.

image66.png
Deleting a policy

Delete the policy by selecting the “Delete” option from the 3-dot menu:

image81.png

Deleting a policy requires it to be in a DISABLED state. See Preventing a policy from being automatically scheduled for details.

Confirm the policy deletion by clicking the “CONFIRM DELETE” button.

image82.png

Deleting a policy has no effect on the resources that were created by it. The resources are visible and available for normal processing in the UI and CLI. When deleting a backup policy and defining a new policy for the same resources, each policy manages its own generations and the new backup generations will not be linked to the previous ones.

Monitoring activities

Use the top menu to navigate to the ACTIVITIES tab. A list of the latest activities and actions is displayed. By default, the activities are displayed in descending order of running time (most recent first). Each column can be used for sorting; the sort icon is visible when clicking the column name. Additionally, a “search within results” box is available for further filtering.

The colored circles on each row indicate how many resources were created successfully, and the activity’s status is displayed on the right. The activity status can be more severe than the individual resource statuses when there were problems in the discovery or post-processing phase.

image77.png

Viewing the activity log

View the activity log by selecting the “View Run Log” option from the 3-dot menu:

image37.png

The Activity log popup will show. The log is scrollable - large log files will dynamically load as you scroll down. To download the complete log file to your workstation, click “DOWNLOAD”.

image40.png

Viewing resources that were created by an activity

To view the resources created by a specific activity, click the “ACTIVITY TIME”:

image30.png

For each resource, use the 3-dot menu on the right to view additional information and a list of available actions. The activity log can be viewed from this screen by selecting “View Run Log” from the top right 3-dot menu.

Viewing policy activities

To view all the activities associated with a policy, click the policy name in the SOURCE column.

image48.png

The Policy activities page is displayed. You can also reach this page by clicking the policy name in the Policies page. The page is divided into two sections:

  1. Policy trends - graphical statistics

  2. Policy activities detailed list

image26.png
Policy trends - graphical statistics
image53.png

Activity duration

Elapsed time of the policy’s activities.

Resource status breakdown

  1. The gray line represents the z/OS resources that are scanned by the policy. A change in this line might suggest an unusual allocation or deletion of a large number of resources. The line’s values are on the right-hand vertical axis of the graph.

  2. The bars represent the number of resources that were created. The color indicates the status of the action. Its values are on the left-hand vertical axis of the graph. A significant change in the height of a bar might suggest unusual activity such as updating a large number of data sets, causing them to be backed up.

In the example above, the activities are consistent except for the third activity from the right: in the middle graph it shows a red bar, indicating that an error occurred while processing. This correlates with a shorter run time as shown on the left-side graph, and less storage as shown on the right-side graph.

Mainframe vs. cloud storage size

This graph compares the original mainframe-allocated size of the data sets / volumes / z/OS UNIX files to their size in the target object storage, indicating the efficiency of the compression method used.

Policy activities detailed list

The activities are displayed in descending order of running time (most recent first). Each column can be used for sorting; the sort icon is visible when clicking the column name. Additionally, a “search within results” box is available for further filtering.

Working with resources

Searching a resource

The resource search box is available on the upper right of any page.

image13.png

Use a full name or a pattern:

Pattern symbol

Description

%

Replacing one character, e.g. M9RES%

*

Replacing one qualifier, e.g. SYS1.*.PROCLIB

**

Replacing none or several qualifiers, e.g. PROD.**.LOADLIB

A search can be invoked for a data set or a volume by using the full name or a pattern. A “**” prefix and suffix are automatically added to every searched term. When searching for a data set, the search displays available backup copies from incremental backups, full dumps, archived data sets and imported data sets. The search results page is divided into tabs:

  1. The first tab displays data sets

  2. The second tab displays full volumes

  3. The third tab displays z/OS UNIX files

image29.png

The results are extracted for each type, and the tab displays the first 350 results that were found. By default, the results are displayed in ascending name order. Each column can be used for sorting; the sort icon is visible when clicking the column name. Additionally, a “search within results” box is available for further filtering.

It is possible to perform immediate actions on the displayed items. Actions can be performed on an individual item using the 3-dot menu, or on multiple items using the actions listed on the upper right side of the page.

Column “complex”

This value indicates the resource complex and can be used to distinguish between data sets or volumes with the same name in different complexes. The resource complex is defined in the agent.yml, with a default of group-&SYSPLEX. An agent can be associated with only one resource complex, while several agents can be associated with the same resource complex. Agents sharing the same resource complex can all interact with the same resources; actions such as backup, archive, recall, restore and list all depend on the resource complex parameter.

Viewing resource information

View resource details by selecting the “More details” option from the 3-dot menu:

image42.png

A popup window showing the data set’s details is displayed.

image14.png

View the resource activity log by selecting the “View backup log” option from the 3-dot menu:

image67.png

A popup window showing the resource backup log is displayed.

image62.png

Restoring a resource

Select the “Restore” option from the 3-dot menu:

image34.png
Restoring a data set
image88.png

Target data set name

The operation can allocate the restored data set using a name different from the original backed-up data set name. Whether or not the original name is changed, the user must have SAF alter access permission for this data set name in the target system, in addition to Model9 permission to perform the UI restore action.

Restore volume name

The VOLSER on which to allocate the restored data set. The VOLSER can be different from the original VOLSER.

Overwrite target data set?

Whether or not to replace the target data set if it already exists.

Catalog restored data set

Whether or not to catalog the restored data set. SMS-managed data sets will be cataloged, regardless of this parameter setting.

Use the “Advanced SMS parameters” tab to set the SMS attributes of the restored data set. These parameters will be passed to the ACS routines, unless “Bypass ACS routines” is specified. For more information, see the IBM DFSMS documentation.

The parameters are:

Storage class

Specify the storage class to be assigned.

Management class

Specify the management class to be assigned.

Null storage class

Set the storage class to null.

Null management class

Set the management class to null.

Bypass ACS routines

Whether or not to give control to the SMS ACS routines during data set allocation. Using this option requires SAF permission to M9.UI.RESTDSN.BYPASSACS.

When done, click “Restore” to restore the data set.

Restoring a full volume dump

Restoring a full volume dump requires both admin and restore permissions:

image73.png

Target volume name

The target online VOLSER on which the full dump volume will be restored. The restore will erase all previous data on the VOLSER. When restoring a disk, only the data on the disk changes; the restore does not update the catalog.

Overwrite target volume name?

Whether or not to copy the original VOLSER to the target volume. If there is another disk with the same name as the original VOLSER, the other disk will be taken offline by the system, since two online disks cannot have the same VOLSER.

When done, click “Restore” to restore the volume.

Restoring a z/OS UNIX file
image60.png

Target file

Overwrite target file?

Whether or not to overwrite the target file if it already exists.

When done, click “Restore” to restore the file.

Recalling a data set

Select the “Recall” option from the 3-dot menu:

image19.png

A popup window with recall parameters is displayed.

image58.png

Recall volume name

The operation can allocate the recalled data set on a volume different from the original. Whether or not the volume is changed, the user must have SAF permissions for this data set name in the target system, in addition to Model9 permissions to perform the UI recall action.

Use the “Advanced SMS parameters” tab to set the SMS attributes of the recalled data set. These parameters will be passed to the ACS routines, unless “Bypass ACS routines” is specified. For more information, see the IBM DFSMS documentation.

The parameters are:

Storage class

Specify the storage class to be assigned.

Management class

Specify the management class to be assigned.

Null storage class

Set the storage class to null.

Null management class

Set the management class to null.

Bypass ACS routines

Whether or not to give control to the SMS ACS routines during data set allocation. Using this option requires SAF permission to M9.UI.RESTDSN.BYPASSACS.

When done, click “Recall” to recall the data set.

Exporting an imported data set

An imported data set is a data set that was imported from tape to cloud by the DATA SET IMPORT policy. The “Export” option provides several methods for exporting the data set for use by z/OS applications. Select the “Export” option from the 3-dot menu:

image91.png

Since tapes work differently than DASD, data sets that were originally allocated on tapes can have attributes that are not always applicable for DASD allocation.

Also, data sets on tape do not have the associated space attributes required for DASD allocation, meaning that the allocation parameters that would be used are the ones specified during the “Export” action.

image93.png

Specify the following parameters:

Target Data Set Name

The name of the data set to be allocated in z/OS. By default, the UI displays the imported data set name. Mandatory field.

Export Volume Name

The name of the VOLSER on which the data set is to be allocated in z/OS. Specifying this parameter means the data set will be allocated on a single volume (The “Volume Count” parameter of the “allocation parameters” will be ignored). For SMS-managed data sets, the target VOLSER will be determined by the ACS routines.

Export to z/OS in format DATA SET

This option will create a z/OS data set.

Export to z/OS in format AWS TAPE

This option will create a z/OS data set, containing the exported tape data in an AWS virtual tape format that can be read by distributed platforms. AWS TAPE format is suitable for data sets with a block size not applicable for DASD allocation.

By default, the exported data set will be allocated in DATA SET format in z/OS, on DASD, with a size automatically calculated from the imported data set size. The UI provides additional parameters to set the allocation attributes and override the SMS parameters.

When choosing the parameters, consider the following:

Size

Many tape data sets are very large and may require allocation as large, extended or multi-volume data sets on DASD. When exporting a large data set:

  • For SMS-managed data sets, leave the “Export Volume Name” parameter empty; the ACS routines will determine the storage group. Use the “Allocation parameters” to set the allocation attributes. If you’d like to override the ACS routines, use the “SMS parameters” section.

  • Non-SMS managed data sets can be allocated on a single volume only; use the “Allocation parameters” to change additional attributes.

Original block size

Cloud storage can accommodate any block size; however, tape data sets can have a block size larger than 32760, which is not eligible for DASD allocation. In this case, allocate the exported data set using one of the following options:

  • AWSTAPE format.

  • DATASET format on a TAPE unit - choose DATA SET and specify tape attributes for the “Export Volume Name” and the “Allocation parameters”.

image94.png

Use the “Allocation parameters” tab to provide the allocation attributes of the exported data set:

Unit

Any applicable z/OS value. Default: 3390

Space in CYLs

By default, the UI will display an estimated size, as automatically calculated based on the imported data set size.

DSNTYPE

Default: NONE.

Volume Count

Default: 5. This parameter is relevant for SMS-managed data sets only, and is ignored if specifying a value in “Export Volume Name”.

Use the “Advanced SMS parameters” tab to set the SMS attributes of the exported data set:

Storage class

Specify the storage class to be assigned.

Management class

Specify the management class to be assigned.

Data class

Specify the data class to be assigned.

When done, click “Export” to export the data set.

Deleting a resource

Select the "Delete" option from the 3-dot menu:

image45.png

Confirm:

image51.png

Click "Delete" to delete the data set.

Deleting multiple resources

Select the resources to be deleted and click “Delete” in the upper right corner. Check the checkbox next to NAME in the heading bar to select all the resources that are currently displayed on the screen.

image68.png

Preparing a full volume dump for stand-alone restore

The “Prepare Stand-Alone Copy” option in the 3-dot menu is part of the stand-alone restore process. The action is available only to Model9 administrators and will not be visible to non-admin users. If the original full dump was created using DFDSS_GZIP, it will not be possible to create a stand-alone copy of it.

image44.png

After the user confirms the action, an asynchronous activity is created in the management server.

image55.png

Once the stand-alone copy is created, select the “View Log” option from the 3-dot menu to display information pertaining to the stand-alone copy, including:

  • Source volume, backup date and size

  • Output directory path, name and size of each dump section

  • Restore command required for restoring data from the Stand-Alone copy

image72.png

Document the restore command in a place that will be accessible during a stand-alone process. See Performing stand-alone restore (bare-metal recovery) for more details.

Initiating management server policies from z/OS

Management server policies can be executed from z/OS using the M9SAPI JCL procedure.

Customizing the M9SAPI procedure

The procedure and a sample job are located in the Model9 SAMPLIB data set, see the Installation guide for more details.

Creating an API key

Select the “Manage API Keys” from the top left 3-dot menu:

image78.png

Create a new API key using the “GENERATE NEW API KEY” button:

image10.png

A popup window is displayed. Enter the Delegated User name that is to be used by the job running the M9SAPI procedure:

image8.png

Copy the API key to a sequential data set in the LPAR from which you intend to run M9SAPI:

image84.png
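
For example, a small sequential data set can be allocated from TSO/E to hold the key. The data set name and attributes below are illustrative only; adjust them to your site's standards and to the key length:

ALLOCATE DATASET('M9.API.KEY') NEW CATALOG SPACE(1,1) TRACKS RECFM(V,B) LRECL(512) DSORG(PS)

Edit the allocated data set (for example, with ISPF edit) and paste the generated key into it.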

The API key can be updated by selecting the “Edit” option of the 3-dot menu:

image64.png

Update and save:

image22.png

The API key can be deleted by selecting the “Delete” option of the 3-dot menu.

Running M9SAPI to execute the policy

Customize the M9SAPI parameters:

WAIT

The default is ZOSONLY, meaning that the job will end as soon as z/OS processing ends. Additional values are:

YES - Wait until the policy run ends, including the processing in the management server

NO - End immediately after the policy is scheduled for processing

TOKEN

Name of the data set created in Creating an API key that contains the key details.

HOST

The management server IP address or host name

PORT

The management server port number

INTERVAL

The interval in seconds between activity status checks performed by the M9SAPI procedure

VERBOSE

The messages can be displayed in INFO or DEBUG mode. The default is INFO.

VRFYCERT

The default is YES, meaning that a trusted certificate is defined for the server.

If you wish to use a trusted certificate, see Configure the Remote Server API section in the Installation guide for details. Note that communications between the M9SAPI and the management server are always encrypted, regardless of the VRFYCERT value.

RUNLOG

The default is YES, meaning that the procedure will attempt to retrieve the policy’s run log at the job’s end and write it to the RUNLOG DD:

  • When specifying WAIT=NO, no log will be delivered, no matter what was specified for RUNLOG.

  • When specifying WAIT=ZOSONLY, all log records until that point will be delivered, without the run log’s summary and without any details written later than the job’s end.

  • When specifying WAIT=YES, the full run log will be delivered.

POLICY

The policy name in quotes. This parameter is mutually exclusive with POLICYID.

POLICYID

Can be used when the POLICY name is too long or contains non-mainframe characters. The POLICYID can be found in the policy’s “More Details”, accessed via the 3-dot menu. This parameter is mutually exclusive with POLICY.

image85.png
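
Putting the parameters together, a job invoking the procedure might look like the following sketch. The procedure itself and the exact way its parameters are coded come from the M9SAPI member in the Model9 SAMPLIB; the job name, policy name, token data set, host and port below are placeholders only:

//M9RUNPOL JOB ACCT#,REGION=0M
//* ILLUSTRATIVE SKETCH - SEE THE M9SAPI MEMBER IN THE MODEL9 SAMPLIB
//* FOR THE ACTUAL PROCEDURE AND PARAMETER CODING. A JCLLIB STATEMENT
//* MAY BE REQUIRED IF THE PROCEDURE IS NOT IN A SYSTEM PROCLIB.
//RUNPOL   EXEC M9SAPI,
//         POLICY='DAILY INCREMENTAL BACKUP',
//         TOKEN='M9.API.KEY',
//         HOST='m9server.example.com',
//         PORT=443,
//         WAIT=YES,
//         RUNLOG=YES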

Managing the life cycle of resources in object storage

Life cycle management is a batch process that runs on z/OS and performs life cycle operations on resources. Expired data set backups and full volume dumps are deleted during the policy run according to the number of generations specified by the policy or SMS. Expired archived data sets and their catalog entries, and expired exported data sets are deleted by the life cycle management process.

Running life cycle management

Before running for the first time:

  • Associate the same user ID with the Model9 z/OS agent and the life cycle management process. Running life cycle management requires the user ID to have READ access to: M9.LIFECYCLE.CLOUD.DELETEEXPIRED.

  • A life cycle management process can be associated with one resource complex only. Define a life cycle management process for each resource complex.

  • Only one life cycle management process can run simultaneously for the same resource complex.

  • By default, all LPARs in a Sysplex are associated with the same resource complex, named “group-<sysplex-name>”. Running life cycle management in any LPAR will delete the expired archived data sets in the entire Sysplex.

Running for the first time:

  • The life cycle management process can be activated in “simulate” mode, meaning it will report the data sets that would have been deleted without actually deleting them.

  • During the first run, the life cycle management process will scan all archived data sets with an expiration date.

Running regularly:

  • The next time the life cycle management process runs, it will begin where the previous run ended.

  • It is recommended to run a life cycle management process every day.

Life cycle management job parameters

A sample JCL, M9LIFECY, can be found in the Model9 SAMPLIB PDS, see the installation guide for more details. The PWD parameter should point to the agent installation directory.

image32.png
Life cycle management syntax

The syntax is specified in the MAINARGS DD card.

Simulate - Default is no. When specifying yes, the job will report the data sets that should be deleted but will not delete them.

Target - The only possible value is cloud, meaning the process will be performed on the storage specified in the “objstore” parameter of the agent configuration file.

Action - The only possible action is delete-expired, meaning the process will delete expired archived data sets.

Action delete-expired

Deleting imported data sets

An imported data set is a data set that was migrated from another medium and is a copy of the original data set. The expiration date of imported data sets can be set in the import policy. See Importing data sets from tapes for more details.

Deleting archived data sets

The recorded expiration date of an archived data set is established upon archive. On the due day, life cycle management verifies that the expiration date has not changed.

Determining the expiration date for archived data sets

For all data sets, life cycle management first explores the following data set attributes:

1. Expiration date in the data set’s VTOC entry at time of archive

2. Expiration date in the data set’s catalog entry at time of archive

  • For VSAM data sets, the catalog entry of the cluster is searched for a valid expiration date.

  • For non-VSAM data sets, both the VTOC entry and the catalog entry are searched for a valid expiration date. The later of the two dates is considered the determinative expiration date.

If a valid expiration date is found and it is earlier than the current date, the data set and its catalog entries are deleted. If no valid expiration date was found, the data set is not deleted. If the expiration date is later than the current date, the data set expiration date is updated.

Determining the expiration date for SMS-managed archived data sets

For SMS-managed data sets, the process continues and searches the data set’s Management Class expiration attributes. The data set’s Management Class is determined according to SMS rules:

1. The data set’s Management Class, if one is assigned.

2. If not assigned, the SMS base configuration default Management Class, if one is assigned

3. If not assigned, the Management Class default values

The Management Class expiration attributes are compared to the data set’s VTOC attributes:

  • The data set’s creation date is compared to EXPIRE AFTER DATE/DAYS

  • The data set’s last reference date is compared to PRIMARY DAYS NON-USAGE

If both Management Class expiration attributes have values that express a valid expiration date, the later of the two dates is considered the determinative expiration date. If a valid expiration date is determined and it is earlier than the current date, the data set and its catalog entry are deleted. If no valid expiration date was found, the data set is not deleted and will be kept until manually deleted. If the date has not passed yet, the data set is not deleted and its expiration date is updated to the new date.

If the data set has an assigned Management Class that does not exist, an error message is displayed. In this case it is possible to delete the data set using the DELARC command, or to redefine the Management Class.

Archived Generation Data Sets

Whether an archived GDS is checked for deletion eligibility is determined by parameters in the agent’s configuration file. See the installation guide for more details. A GDS will be deleted according to its expiration date and catalog entries, in the following manner:

  • An archived GDS with an explicit expiration date will be deleted according to the expiration date, regardless of whether it is active or rolled off.

  • An archived GDS without an explicit expiration date will be deleted once it is rolled off. Life cycle management treats a GDS without a catalog entry as rolled off.

  • An archived GDS that has no expiration date, no catalog entry and no GDG base catalog entry will not be deleted at all, to avoid accidental deletes due to problems in the z/OS catalog.

  • An archived GDS with a GDG base catalog entry defined with the PURGE parameter will be deleted when rolled off - even if its expiration date has not been reached yet.

  • An archived GDS whose GDG base catalog entry was defined with the NOPURGE parameter will not be deleted when rolled off, as long as its expiration date has not been reached. The expiration date is determined either in the VTOC, Catalog or Management class.

  • Changing the SCRATCH attribute of an existing GDG base catalog entry from SCRATCH to NOSCRATCH will not affect an already archived GDS - the archived GDS will be handled according to the GDG base catalog entry at the time of archive. New GDSs of the same GDG base catalog entry will not be archived.

Archived permanent data sets

Data sets with the following expiration dates are considered permanent and will not be deleted by life cycle management if archived:

Catalog PERM dates

9999.999

1999.999

1999.365

1999.366

Catalog EMPTY dates

0000.000

No expiration date

VTOC PERM dates

1999.999

1999.365

1999.366

VTOC EMPTY dates

0000.000

No expiration date

Deleting archives manually vs. an automated process

The Management Class default expiration attribute value is “NO LIMIT”, meaning that by default, an archived data set will not be deleted by automatic processes, but can be deleted by a manual action such as the DELARC CLI command.

HSM SETSYS EXPIREDATASETS for archived data sets compatibility

The default value of the HSM EXPIREDATASETS parameter is NO, meaning that data sets that were created with an explicit expiration date in the catalog or VTOC will not be deleted by an automatic process. When setting the HSM EXPIREDATASETS parameter to SCRATCH, HSM will delete the expired data sets during space management. Model9 life cycle management is compatible with EXPIREDATASETS(SCRATCH) only.


Sync with storage

When deleting archived and imported data sets using Life cycle management or the DELARC CLI command, the data set might still be visible in the web UI. To update the web UI, the management server performs an automatic synchronization with the storage device every 5 minutes. A Sync With Storage button is available in the AGENTS screen to initiate the process on demand. This action requires “admin” permission. A message is displayed in the AGENTS page when the action is completed.

image50.png

Life cycle output

SUMMARY DD - List of the deleted resources. For each data set the following details are displayed:

image65.png

Data set name

name

Type

ARCHIVE / IMPORT

Cloud creation time

The date and time at which the resource was archived / imported

Expiration

The determined expiration date

MC

The SMS Management Class for SMS-managed data sets

SUMMARY - In simulate mode, each line of output is preceded with a *SIMULATE* comment.

image3.png

STDOUT - Life cycle log, including a recap report

image83.png

UTILMSGS - z/OS utilities log, e.g. IDCAMS

image80.png

SYSOUT - Java log

image90.png

Life cycle WTO messages

Msgid

Text

ZM9L001I

MODEL9 LIFE CYCLE MANAGEMENT STARTED

ZM9L002I

MODEL9 LIFE CYCLE MANAGEMENT ENDED SUCCESSFULLY

ZM9L006W

MODEL9 LIFE CYCLE MANAGEMENT ENDED WITH WARNINGS, RC=rc

ZM9L007E

MODEL9 LIFE CYCLE MANAGEMENT ENDED WITH ERRORS, RC=rc

Life cycle return codes

RC

Explanation

Notes

0

Ended OK

4

Ended with warning

Reflects warnings in the processing of individual resources

8

Ended with errors

Reflects errors in the processing of individual resources

16

Severe error

Reflects errors that forced an end to life cycle management processing

Performing stand-alone restore (bare-metal recovery)

To perform a stand-alone restore of Model9 full volume dumps, a recovery program can be IPLed over the network using a standard feature of the system zHMC, and can restore volumes located on network-attached storage or detachable media.

The stand-alone restore process is usually used to recover a one-pack system (light-weight IPLable z/OS system on one volume) that has a Model9 agent installed. Once the one-pack system is up and running, it is easy to start the agent and restore the rest of the data using the Model9 web UI or CLI commands.

In order to perform stand-alone restore, IBM DFSMSdss can be IPLed over the network using three special files that can be obtained from IBM shopz by choosing z/OS Driving Systems in the “Create new order” main page. The files should be saved under the management server $MODEL9_HOME/SAbackup directory as part of the Model9 installation:

  • DFSMSDSS.ins

  • DFSMSDSS.IMAGE

  • DFSMSDSS.PREFIX

See the Installation guide for more details.

Creating a full dump of your one pack system

Create a standard full dump of the one-pack system disk as part of the current volume full dump policy. It is possible to use any of the following compression types:

  1. DFDSS_COMPRESS

  2. JAVA_LZ4

  3. JAVA_GZIP

If the original full dump was created using DFDSS_GZIP, it will not be possible to create a stand-alone copy of it.

Preparing a stand-alone copy

A stand-alone copy can be created manually by selecting the “Prepare Stand-Alone Copy” option from the 3-dot menu in the UI, as mentioned in Preparing a full volume dump for stand-alone restore.

The activity’s log displays information pertaining to the stand-alone copy, including:

  • Source volume, backup date and size

  • Output directory path, name and size of each dump section

  • Restore command required for restoring data from the stand-alone copy

image71.png

Performing stand-alone restore from an FTP source

The stand-alone restore from an FTP source can be performed using a server with an FTP service that can be reached from the HMC. Copy the three DFDSS files and the stand-alone copy to an accessible directory on that server. If using the Model9 management server, verify that an FTP service is installed. The stand-alone restore is performed from the HMC by choosing Recovery -> Load from Removable Media or Server:

image28.png

Choose the media that holds the stand-alone copy

image35.png

IPL the system

image18.png

Updating product license

Usually, the product license is granted for a full year, and is renewed until the end of the contract or a mainframe upgrade. Upgrading to a newer Model9 release does not require a new license. Changing the CPU serial number or model of the mainframe server does require a new license.

The license should be updated in the license file of each active z/OS agent and the management server, see the installation guide for more details.

Usually, the output of the z/OS command “D M=CPU” is required for obtaining a new license. Requests for a new license are available through the Model9 portal, or your Model9 sales representative.

License enforcement

The license is enforced for every action performed by the z/OS agent, the z/OS CLI and the management server web UI. When the license expires, only data retrieval actions are allowed. The following table lists the return codes and messages for each action according to the license status:

License

Details

Agent

CLI

Web UI

Valid

All actions

N/A

RC 0

RC 0

Expires within 15 days

All actions

A message is displayed at startup and in the UI agent details

RC 0

A message is not displayed, so that automation is not disrupted

An orange message is displayed at the top of all pages

Expired

LIST

RESTORE

RECALL

A message is displayed at startup and in the UI agent details

RC 04

A message is displayed for every command

A red message is displayed at the top of all pages

Expired

BACKUP

ARCHIVE

FULLDUMP

DELETE

DELARC

A message is displayed at startup and in the UI agent details

RC 08

A message is displayed for every command

A red message is displayed at the top of all pages

Invalid

Missing

Wrong CPUID

Wrong format

The agent will not start

RC 16

CLI will not work

A red message is displayed at the top of all pages

Command Line Interface User Guide

Release 1.7.0

CLI - Overview

The Model9 MF Command Line Interface (CLI) is a set of commands that perform Model9 actions directly from TSO/E or in batch operations, without dependency on the Model9 Management Server. The CLI facilitates everyday operations such as listing and restoring data sets and volumes. The CLI can be executed from the TSO/E command line for ad-hoc operations, be embedded in REXX for internal interfaces, or be executed using batch jobs for maintenance and business resumption activities.

Installation considerations

Using the CLI requires:

  • A listener that is installed as part of the agent’s installation procedure.

  • The M9CLI REXX to be concatenated to TSO.

For more information, see the Model9 installation guide.

Security considerations

  • Any user running the CLI must have an OMVS UID defined in RACF.

  • On RESTORE and ARCHIVE, the CLI validates that the user has the required permissions to process the data set.

  • Individual permissions are required per command, in the following pattern:

    Action

    Class

    Profile

    Access

    Enable a user to execute <command> from the CLI

    XFACILIT

    M9.CLI.<command>

    READ

  • Some keywords require individual permissions:

    Action

    Class

    Profile

    Access

    Enable a user to recall a data set while bypassing the ACS routines option

    XFACILIT

    M9.CLI.RECALL.BYPASSACS

    READ

    Enable a user to restore a data set while bypassing the ACS routines option

    XFACILIT

    M9.CLI.RESTDSN.BYPASSACS

    READ

    Enable a user to delete an archived data set although it has a valid date that has not expired

    XFACILIT

    M9.CLI.DELARC.PURGE

    READ

    Enable a user to archive a data set without checking whether the data set has a backup

    XFACILIT

    M9.CLI.ARCHIVE.NOBCK

    READ

    Enable a user to specify a retention period and set an expiration date while archiving a data set. The user-specified retention period overrides all other definitions, including DFSMS Management Class attributes.

    XFACILIT

    M9.CLI.ARCHIVE.RETPD

    READ
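
For example, to let a hypothetical user USER01 issue the RESTDSN command, a security administrator might define and permit the corresponding profile with RACF commands along the following lines (a sketch only; adapt it to your security product and naming conventions, and refresh the class only if it is RACLISTed):

RDEFINE XFACILIT M9.CLI.RESTDSN UACC(NONE)
PERMIT M9.CLI.RESTDSN CLASS(XFACILIT) ID(USER01) ACCESS(READ)
SETROPTS RACLIST(XFACILIT) REFRESH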

CLI Commands

The supported commands are shown in the following table:

Command

Description

LISTDSN

List all archives, backup copies and imported data sets that exist for the specified data set or pattern

LISTVOL

List volume full dump copies that exist for the specified volume or pattern

RESTDSN

Restore a data set from a backup copy

RESTVOL

Restore a volume from a volume full dump copy

ARCHIVE

Archive a data set

RECALL

Recall a data set from archive, allocate and recatalog it back on disk

DELARC

Delete an archived data set without recalling it first

BACKDSN

Backup a data set

DELBACK

Delete a backup created by the BACKDSN command

Identifying the data set / volume for the command

The LISTDSN and LISTVOL commands must include either a specific name or a pattern as a parameter. The issued command will display all the backup copies / archives / volume full dump copies that meet the criteria. The accepted pattern adheres to the z/OS ISPF conventions.

Data set name pattern:

Description

Example

Selects

%

Single-character variable

SYS%.PROCLIB

SYS1.PROCLIB SYS2.PROCLIB

*

1-8 character variable

SYS2.*.PROCLIB

SYS2.DBA.PROCLIB SYS2.USER.PROCLIB

SYS2.*

SYS2.PROCLIB

SYS2.PARMLIB

SYS*.PROCLIB

SYS2.PROCLIB

SYSDB.PROCLIB

**

Any number (including zero) of 0-8 character variables

SYS2.**.PROCLIB

SYS2.PROCLIB

SYS2.DBA.PROCLIB SYS2.DBA.DEV.PROCLIB SYS2.USER.PROCLIB

SYS2.**

SYS2.PROCLIB

SYS2.PARMLIB

SYS2.DBA.PROCLIB SYS2.DBA.DEV.PROCLIB SYS2.USER.PROCLIB

SYS2.USER.LOAD

Note

When specifying a data set name without a pattern suffix such as * or **, it will be treated as a specific name and not a prefix. This is intended to simplify the CLI interface by allowing the use of data set names and volumes without quotation marks.

Volume serial pattern

Description

Example

Selects

%

Single-character variable

M%INST

M1INST

M2INST

*

**

0-8 character variable. When selecting volumes, ‘**’ behaves the same as ‘*’

M*ST

M1INST

M2INST

M2ZZST

M3ZZST

M2*

M2INST

M2ZZST

The RESTDSN and RESTVOL commands operate on a single data set / volume. The input can be a specific data set / volume name, or a pattern, as long as it matches one data set / volume. A user can use the LISTDSN / LISTVOL command to identify a data set / volume to be restored, and then change the command name to RESTDSN / RESTVOL while leaving the rest of the parameters the same in order to perform the restore. The command will be executed for the latest backup copy / archive / volume full dump copy that meets the criteria, unless specified otherwise. For these commands, further filtering is possible by using:

Option

Description

DATE

The date on which the backup copy / archive / volume full dump copy was created

DATERANGE

A range of dates on which the backup copy / archive / volume full dump copy was created

ENTRY

A positive sequential number that represents the sequence of listed backup copies / volume full dump copies in a given command - “0” being the latest, “1” being the previous, and so on. It allows selection of a backup copy / volume full dump copy other than the latest. For example, when listing using DATE, the ENTRY will list copies created on the same day, at different times of the day.

UNIQUEID

A sequence of characters that, when combined with the data set / volume name, uniquely identifies the backup copy / archive / volume full dump copy
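
For example, a user might list the available copies of a data set on a given volume and then restore the previous copy by reusing the same parameters and adding ENTRY (the data set and volume names are illustrative):

TSO M9CLI LISTDSN PROD.PAYROLL.MASTER VOL(PRD001)
TSO M9CLI RESTDSN PROD.PAYROLL.MASTER VOL(PRD001) ENTRY(1)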

The ARCHIVE and RECALL commands must receive a specific data set name; no further filtering is available.

Output

The command is processed synchronously and the output is redirected to STDOUT. When invoked in TSO, the output is displayed on the screen. When invoked in a batch job, it is redirected to a SYSPRINT DD card. If the requester of the command is no longer available, for example because the job or TSO session was cancelled, the output is printed in the executing agent's log. The exception is LISTDSN and LISTVOL output, which is not printed to the agent's log, as it may be very large.

Return codes

The following table lists the possible return codes from CLI commands.

RC

Description

0

The command was executed successfully.

4

The command ended with a warning / no records were selected. An appropriate message will be displayed

8

The command ended in an error. An appropriate message will be displayed.

10

Syntax error

12

A timeout occurred while the request was being processed/queued at the agent

14

No agent available - Unable to send request to agent

16

Unexpected error

CLI - Getting Started

TSO/E

The CLI can be executed from TSO/E, the output is displayed on the screen. The general syntax of CLI commands is as follows:

M9CLI <command> <required-parameter> [optional parameters]
To execute the CLI from the TSO/E command line
TSO M9CLI LISTDSN M9.SYSTEM.**
To execute the CLI from REXX exec
/* REXX */
Address TSO "M9CLI LISTDSN M9.SYSTEM.**"
To capture the CLI output from REXX exec

The M9CLI output is directed to stdout. To capture the M9CLI output, stdout redirection will be needed, for example:

/* REXX */
"Alloc f(CLIDD) lrecl(1024) recfm(V,B) new reuse"
Address tso "M9CLI LISTDSN M9.PROD.* 1>DD:CLIDD"
cliRc = rc

JCL

The CLI can be executed using a TSO/E batch program; the output is directed to the SYSPRINT DD card.

Using CLI from batch
//M9CICLI  JOB ACCT#,TIME=NOLIMIT,REGION=0M
//*
//*
//* RUN M9CLI COMMAND
//*
//M9CLI    EXEC PGM=IKJEFT01
//STEPLIB  DD DISP=SHR,DSN=SYS2.MODEL9.LOADLIB
//SYSEXEC  DD DISP=SHR,DSN=M9.ALL.EXEC
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTSIN  DD * 
 M9CLI LISTDSN M9.SYSTEM.** 
/*
//

Examples of using LISTDSN and RESTDSN

The CLI can be used to identify and restore data sets or volumes, according to specific search criteria. Following are some examples.

Scenario A: Comparing between different versions of a data set

A programmer wants to restore a previous version of a source member in order to compare the current and previous versions of the code. The development data sets are backed up only if they were changed since the last backup. The backup is incremental; the last 3 changes are kept. The data set is restored with a new name and cataloged, to allow easy allocation for the compare operation.

CLI_Scenario_A.png

The RESTDSN command will be as follows:

TSO M9CLI RESTDSN M9.USER.SOURCE NEWNAME(M9.USER.SOURCE.PREVIOUS)

The command will restore the latest version of the data set, marked in the output as ENTRY(0).

Scenario B: Recovery from accidental deletion of a data set

A data set is accidentally deleted. The system administrator needs the latest copy, performs a LISTDSN to verify it, and then performs a quick RESTDSN. The latest copy is restored from a full volume dump.

CLI_Scenario_B.png

The RESTDSN command will be as follows:

TSO M9CLI RESTDSN SYS2.PROCLIB

The data set will be restored from the latest copy, in this case from a volume’s full dump copy. Because the origin was a volume full dump, the LISTDSN output did not show whether this data set was cataloged during backup. Nevertheless, the RESTDSN command will catalog it automatically, unless specified otherwise.

Scenario C: Recovery from a corrupted system data set during IPL

When IPL-ing the system, z/OS does not initialize correctly because of an invalid member of SYS1.PARMLIB. The data set is accessible from a different system, where it is not cataloged. It shows that the member in question was changed on 28/06/2019. In order to restore the data set to a version prior to the specific change that corrupted it, LISTDSN is used to list all the backup copies of SYS1.PARMLIB on a specific volume, within a specific date range:

CLI_Scenario_C.png

The data set can be restored in one of several ways. Note that the RESTDSN command catalogs the data set by default, whether or not it was cataloged on the restoring system:

  1. Using the same command used for LISTDSN, adding the requested ENTRY:

    M9CLI RESTDSN SYS1.PARMLIB VOL(M9RES1) DATER(2019/06/30-2019/06/01) ENTRY(2) NOCAT
  2. Using the RESTDSN command with a specific DATE. The command will restore the latest backup copy created on that day:

    M9CLI RESTDSN SYS1.PARMLIB VOL(M9RES1) DATE(2019/06/26) NOCAT
  3. Using the RESTDSN with the UNIQUEID parameter:

    M9CLI RESTDSN SYS1.PARMLIB VOL(M9RES1) UNIQUEID(F9E717CE) NOCAT
Scenario D: Recovery at a third party’s remote site

The object storage containing the site’s backups is replicated daily to a remote site. The remote site belongs to a third party that supplies recovery services. In case of data corruption on the primary site, the corrupted data set can be recovered on the third party’s host and sent back to the primary site. There is no SMS active on the third party’s site, so the data set is restored with NULLMC, NULLSC and BYPASSACS:

M9CLI RESTDSN M9.APPL.LOAD NEWVOL(OUTS01) NULLMC NULLSC BYPASSACS

Examples of using LISTVOL and RESTVOL

Scenario A: Fallback from an unsuccessful upgrade

A product was installed on a certain disk. Before the installation, the disk was fully dumped to preserve the previous installation. Following installation, the product appears to malfunction, requiring a rollback of the disk to its previous state. LISTVOL can be used to list the available volume full dumps:

CLI_Scenario_AA.png

To avoid mistakes, each volume is restored using a separate command. The target volume must be specified, even if the restore is for the same volume-id.

M9CLI RESTVOL INST01 NEWVOL(INST01)
M9CLI RESTVOL INST02 NEWVOL(INST02)
M9CLI RESTVOL INST03 NEWVOL(INST03)
Scenario B: Disaster Recovery

Restoring the system in a disaster recovery scenario can be performed using one of 3 methods:

  1. Bare-metal recovery program

  2. The Model9 management server UI

  3. The CLI

This example demonstrates the third option, assuming the Model9 Management server is unavailable and the system has a minimal z/OS running with an active Model9 agent. The system can be restored using multiple batch jobs, each job specifying a set of volumes to restore, or a REXX that LISTVOLs the volumes to restore and creates an individual RESTVOL statement for each volume. The restore operation itself applies internal parallelism as well.

Important

NEWVOL is the target volume; it is a required parameter that ensures no accidental overwrite occurs during RESTVOL.

An example of how to RESTVOL all the volumes in a storage group, in REXX.

/* REXX                                                    */                                                                  
/* acquiring the storage group disks                       */      
targetSG = "SG1QBK"                                                         
volumeMark = "VOLUME   UNIT    SYSTEM"                                      
"CONSPROF SOLDISP(NO) SOLNUM(400)"                                          
"CONSOLE ACTIVATE NAME(M9CONS)"                                             
"CONSOLE SYSCMD(D SMS,SG("targetSG"),LISTVOL) CART(M9CART)"                 
getCode = getmsg('PRTMSG.','SOL',"M9CART",,60)                              
..                                                               
   /* Locate volumes in output                             */                                           
   ...                                         
      /* go over the volumes and extract online volumes        */                  
      ...                 
           /* RESTVOL each volume in the storage group     */
address tso "m9cli restvol" volser "newv("volser")"
         ...           
"CONSOLE DEACTIVATE"           

An example of using RESTVOL in a batch job

//M9CLIRST  JOB ACCT#,TIME=NOLIMIT,REGION=0M
//*
//* RESTORING THE SYSTEMS DISKS
//*
//M9CLI   EXEC PGM=IKJEFT01
//STEPLIB  DD DISP=SHR,DSN=SYS2.MODEL9.LOADLIB
//SYSEXEC  DD DISP=SHR,DSN=M9.ALL.EXEC
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTSIN  DD *
 M9CLI RESTVOL M9RES1 NEWVOL(M9RES1) 
 M9CLI RESTVOL M9RES2 NEWVOL(M9RES2) 
 M9CLI RESTVOL M9RES3 NEWVOL(M9RES3) 
 M9CLI RESTVOL M9RES4 NEWVOL(M9RES4) 
 M9CLI RESTVOL M9RES5 NEWVOL(M9RES5) 
 M9CLI RESTVOL M9RES6 NEWVOL(M9RES6) 
 M9CLI RESTVOL M9RES7 NEWVOL(M9RES7) 
 M9CLI RESTVOL M9RES8 NEWVOL(M9RES8) 
 M9CLI RESTVOL M9RES9 NEWVOL(M9RES9) 
/*
// 

Examples of using RECALL

The CLI can be used to recall data sets using the RECALL command. RECALL is allowed for cataloged data sets only; therefore the command accepts a specific data set name as input. By default, an SMS-managed data set is recalled with its original SMS attributes as input to the ACS routines, and a non-SMS managed data set is recalled to the original volume that accommodated it.

Scenario A: Recalling a data set when there is not enough space on its original volume

A data set needs to be recalled, but the disk that originally accommodated the data set does not have enough free space. A manual recall operation is needed to recall the data set with a new volume parameter. The RECALL command will be as follows:

TSO M9CLI RECALL M9.APPL.SOURCE NEWV(M9AP01)
Scenario B: Ensuring valid execution of a job with archived data sets as input

A batch job requires several archived non-SMS data sets as input. Normally, automatic recall would be used during execution to transparently recall the data sets to their original volumes. However, since the data sets were archived a long time ago, their original volumes no longer exist. To prevent failures during the batch job execution, the system programmer uses the RECALL command in a preliminary job to recall the data sets to new volumes.

//M9CLISMF JOB ACCT#,TIME=NOLIMIT,REGION=0M
//*
//* RECALLING FILES PRIOR TO EXECUTION
//*
//M9CLI EXEC PGM=IKJEFT01
//STEPLIB DD DISP=SHR,DSN=SYS2.MODEL9.LOADLIB
//SYSEXEC DD DISP=SHR,DSN=M9.ALL.EXEC
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTSIN DD *
M9CLI RECALL M9.APPL.Y2015 NEWV(M9APL4)
M9CLI RECALL M9.APPL.Y2016 NEWV(M9APL3)
M9CLI RECALL M9.APPL.Y2017 NEWV(M9APL2)
M9CLI RECALL M9.APPL.Y2018 NEWV(M9APL1)
M9CLI RECALL M9.APPL.Y2019 NEWV(M9APL0)
/*
//

Example of using ARCHIVE and DELARC

The ARCHIVE command can be used to manually archive a data set regardless of its primary days non-usage attribute value, or for data sets that are not eligible for automatic archive in DFSMS. The DELARC command can be used to manually delete an archived data set that is no longer needed, without having to recall it.

Scenario A: Archiving and deleting large files that are no longer needed

Large dump files are archived after they are analyzed, in order to free up space on primary storage:

TSO M9CLI ARCHIVE M9.APPL.DUMP.PDB2.D22052019
TSO M9CLI ARCHIVE M9.APPL.DUMP.TCIC.D16062019

The raw dump files are kept in archive until the bug is solved, and are then deleted in a batch job, using the DELARC command:

//M9CLISMF JOB ACCT#,TIME=NOLIMIT,REGION=0M
//*
//* DELETING RAW DUMP FILES FROM ARCHIVE
//*
//M9CLI EXEC PGM=IKJEFT01
//STEPLIB DD DISP=SHR,DSN=SYS2.MODEL9.LOADLIB
//SYSEXEC DD DISP=SHR,DSN=M9.ALL.EXEC
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTSIN DD *
M9CLI DELARC M9.APPL.DUMP.PDB2.D22052019
M9CLI DELARC M9.APPL.DUMP.TCIC.D16062019
/*
//

If the data sets are not yet expired, the DELARC command will issue a notification but will not delete the files. In this case, the command should be used with the PURGE keyword, and the SYSTSIN would be as follows:

//SYSTSIN DD *
M9CLI DELARC M9.APPL.DUMP.PDB2.D22052019 PURGE
M9CLI DELARC M9.APPL.DUMP.TCIC.D16062019 PURGE
//

Example of using BACKDSN and DELBACK

The BACKDSN command can be used to manually back up a data set with a specified retention period, regardless of its SMS or backup policy definitions, or to back up data sets that are not eligible for automatic backup in DFSMS. It also allows creating a backup of a data set with a new name, backup date and backup time.

The DELBACK command can be used to manually delete a backup created by the BACKDSN command that is no longer needed.

Scenario A: Backup and rename a data set that was backed up in the past

Moving backups from tape to cloud can be done by restoring the tape backup to DASD with a temporary name and backing it up again with Model9 under the original name and date.

TSO M9CLI BACKDSN M9.APPL.SOURCE.TEMP NEWNAME(M9.APPL.SOURCE) NEWDATE(2001/03/30)
Scenario B: Backup a data set for a short period of time

Backing up a data set for a few days, for example before an application upgrade, can be done using the CLI.

TSO M9CLI BACKDSN M9.APPL.SOURCE RETPD(10D)

The backup will be deleted automatically after 10 days by the lifecycle process.

If the backup is not yet expired, the DELBACK command can be used to manually delete the backup.

First find the backup’s unique id using LISTDSN.

CLI_Scenario_BB.png
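
The lookup is an ordinary LISTDSN call on the backed-up data set, for example:

TSO M9CLI LISTDSN M9.APPL.SOURCE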

And then delete the backup using DELBACK command with PURGE:

TSO M9CLI DELBACK M9.APPL.SOURCE UNIQUEID(49CFIDK2) PURGE

CLI Command Reference

This section describes in detail each CLI command, along with its options and parameters.

LISTDSN

The LISTDSN command lists the archives / backup copies / imported data sets of a specified data set or data sets.

LISTDSN - Logic

The command lists all the relevant data set types:

  1. Backup copies created directly by a backup policy.

  2. Copies of the data set that were found on a volume full dump.

  3. Archive copy of the data set.

  4. Imported data sets.

When listing a pattern of data sets, the copies of each data set are displayed separately, from the most recent copy to the oldest, regardless of type.

The details displayed are:

  • The time and date of the creation of the copy

  • The name of the original volume

  • The data set type

  • Whether the data set is cataloged

  • Whether there were warnings in the creation of the copy

  • The expiration date at the time of the backup / archive / import

ENTRY and UNIQUEID are displayed to reference a specific copy in subsequent commands.

LISTDSN - Syntax
M9CLI LISTDSN <dsnamePattern> 
 [VOLume(<volumePattern>)]
 [DATE(yyyy/mm/dd) | DATERange(yyyy/mm/dd-yyyy/mm/dd)]
 [ENTry(<integer>) | UNIQueid(<uniqueId>)]
 [NODUMP]
LISTDSN - Required parameters

dsnamePattern

A data set or a group of data sets. Specify a pattern using %, * or **, e.g. SYS2.PARMLIB, SYS2.PROC*, SYS%.PROC*. The pattern or data set name may be enclosed in apostrophes. A description of how to specify a pattern can be found here: Identifying the data set / volume for the command.

LISTDSN - Optional parameters

Option

[Short option]

Description

Format

Examples

VOLUME

[VOL]

The volume on which the data set resides

Full name or using a pattern

SYSRES PROD* SYS%RS

DATE

Requests backup copies / archives that were created on a specific date only. DATE and DATERANGE are mutually exclusive

yyyy/mm/dd

2019/08/15

DATERANGE

[DATER]

Requests backup copies / archives that were created on a specific date range. DATERANGE and DATE are mutually exclusive

yyyy/mm/dd-yyyy/mm/dd

2019/01/01-2019/08/15

ENTRY

[ENT]

A positive sequential number from 0, 1, 2, and so on, representing the available entries of a backup copy, according to the specified criteria. The entries are relative to the selection used in the LIST command. When listing a pattern, the ENTRY will be displayed per data set.

<integer>

0 - for the latest copy (the default).

1 - for the copy prior to the latest, and so on.

UNIQueid

[UNIQ]

An 8-character ID that, when combined with the data set name, identifies each backup copy. Using UNIQueid, you can refer to a specific copy of a data set in other commands (e.g. RESTDSN) without having to specify any additional filters. UNIQueid and ENTRY are mutually exclusive.

<xxxxxxxx> 8 characters

UNIQueid can be taken from a previous LISTDSN command’s output, to be used in subsequent commands, such as RESTDSN.

NODUMP

Requests that LISTDSN will display backup copies only, without data set copies found in volume full dumps. The default behavior is to allow selection of backups from a volume full dump

-

-
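
The optional parameters can be combined. For example, the following illustrative command lists only direct backup copies of SYS2 data sets residing on M9RES volumes, created within a date range:

TSO M9CLI LISTDSN SYS2.** VOL(M9RES%) DATER(2019/01/01-2019/08/15) NODUMP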

LISTDSN - Output

The LISTDSN output displays the given LISTDSN command and parameters, followed by the requested information. For every filtered data set, a headline is displayed:

Listing Data Set <dsname>

The headline is followed by a list of the available backup copies / archives / imports. For each record on the list, the following details are displayed:

Column

Description

Available values

ENTRY

An integer representing the sequence of the displayed list. The entries are relative to the filter used in the LISTDSN command. When listing a pattern, the ENTRY will be displayed per data set.

0,1,2,... - a backup copy or a copy from full-dump, available for restore

‘-’ - an archive or an import copy, cannot be used for restore

DATE

The date of creation

yyyy/mm/dd

TIME

The local time when created

hh:mm:ss

VOLUME

The volume on which the data set resided

vvvvvv

TYPE

How the backup copy / archive / import was created

Archive - an archived data set.

Backup - a backup copy

Full Dump - a backup copy from a volume's full dump

Import - an imported data set

CMD

Whether the data set was created by a CLI command

Y - yes

N - no

-  - this indication is not available for this data set

C

Whether the data set was cataloged during backup

Y - yes

N - not cataloged

- - when listing a data set copy from a volume full dump, no catalog information is available

W

Whether the copy / archive creation ended with warnings (DFDSS RC=4). The warnings could be a result of numerous causes, depending on DFDSS execution

Y - yes. The copy / archive creation ended with RC=4. Please refer to the backup log in the UI to see DFDSS details.

N - no

EXPIRATION

The current expiration date of the backup / archive / import. The value “NONE” indicates that the expiration date was not specified by either the catalog, VTOC or SMS. The value “N/A” indicates that expiration date is not applicable for this type.

yyyy/mm/dd | NONE | N/A

UNIQUEID

An 8-character ID value to be used to uniquely identify the backup copy / archive

<xxxxxxxx> 8 characters

LISTDSN - Examples

The following examples demonstrate use of the command.

Listing all instances of a data set on a certain day

CLI_LISTDSN_1.png

In this example, backup copies exist for two different data sets with the same name. One is cataloged on the volume M9USER, the other is not cataloged and resides on volume N0QS01. On the specified day, the backup was performed twice on both data sets. One of the backup copies ended with a warning and appears as W=Y.

Listing all instances of a data set using a pattern

CLI_LISTDSN_2.png

There were 2 data sets selected by the specified pattern. The first one is the same as the one displayed in the previous example. The second one has 3 backup copies, from a cataloged data set on disk M9USER.

Listing all instances of a data set on a certain volume

CLI_LISTDSN_3.png

The display is the same as shown in the previous example, with the exception that the backup copies that are being displayed are copies of the data set that resides on volume NOQS01.

Listing a data set pattern from a specific set of disks

CLI_LISTDSN_4.png

Listing all instances of data sets starting with M9.*, created within a date range

CLI_LISTDSN_5.png
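For reference, the listings shown above correspond to commands of the following general form; the data set names, volumes and dates are illustrative, and the DATE, DATER and VOL filters are assumed to behave as documented for RESTDSN later in this guide:

TSO M9CLI LISTDSN M9.USER.SOURCE DATE(2019/08/15)

TSO M9CLI LISTDSN M9.USER.*

TSO M9CLI LISTDSN M9.USER.SOURCE VOL(N0QS01)

TSO M9CLI LISTDSN M9.USER.* VOL(M9USER*)

TSO M9CLI LISTDSN M9.** DATER(2019/01/01-2019/08/15)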

LISTVOL

The LISTVOL command lists the volume full dump copies of a specified volume or volumes.

LISTVOL - Logic

When listing a pattern of volumes, the copies in each volume are displayed separately, from the most current copy to the oldest.

The list includes the time and date of the full dump, and the original size of the volume in cylinders. In addition, ENTRY and UNIQUEID are specified to reference a specific volume full dump copy in subsequent commands.

LISTVOL - Syntax
M9CLI LISTVOL <volumePattern>
 [DATE(yyyy/mm/dd)|DATERange(yyyy/mm/dd-yyyy/mm/dd)]
 [UNIQueid(<uniqueId>)|ENTry(<integer>)]
LISTVOL - Required parameters

volumePattern

A volume or a group of volumes, specified by a pattern. Specify a pattern using: %,*,**, e.g.:SYSRES, SYSRE*, DBA%LG. For an explanation on the use of patterns, see: Identifying the data set / volume for the command .

LISTVOL - Optional Parameters

Option

[Short option]

Description

Format

Examples

DATE

Requests volume full dump copies that were created on a specific date only. DATE and DATERANGE are mutually exclusive.

yyyy/mm/dd

2019/08/15

DATERANGE

[DATER]

Requests volume full dump copies that were created within a specific date range. DATERANGE and DATE are mutually exclusive.

yyyy/mm/dd-yyyy/mm/dd

2019/01/01-2019/08/15

ENTRY

[ENT]

A positive sequential number from 0, 1, 2, and so on, representing the available entries of a volume full dump copy, according to the specified criteria. The entries are relative to the selection used in the LISTVOL command. When listing a pattern, ENTRY will be displayed per volume.

<integer>

0 - for the latest copy (the default). 1 - for the copy prior to the latest, and so on.

UNIQueid

[UNIQ]

An 8-character id that, when combined with the volume name, identifies each volume full dump copy. Using the UNIQueid, you can refer to a specific copy without having to specify any additional filters. UNIQueid and ENTRY are mutually exclusive.

<xxxxxxxx> 8 characters

UNIQueid can be taken from a previous LISTVOL command’s output to be used in subsequent commands, such as RESTVOL.

LISTVOL - Output

The LISTVOL output displays the given LISTVOL command and parameters, followed by the requested information. For every filtered volume, a headline is displayed with the name of the volume, followed by a list of the available volume full dump copies.

For each record on the list, the following details are displayed:

Column

Description

Available values

ENTRY

An integer representing the sequence of the displayed list. The entries are relative to the filter used in the LISTVOL command.

0,1,2,... - a full dump copy, available for restore

DATE

The date on which the volume full dump copy was created

yyyy/mm/dd

TIME

The local time at which the volume full dump copy was created

hh:mm:ss

SIZE

Original size, in volume cylinders

<integer>

UNIQUEID

An 8-character value to be used to uniquely identify the volume full dump copy.

<xxxxxxxx>

LISTVOL - Examples

The following examples demonstrate use of the command.

Listing a specific volume’s full dump copies

CLI_LISTVOL_1.png

Listing a volume’s full dump copies created on a specific date

CLI_LISTVOL_2.png

Listing a volume’s full dump copies created within a date range

CLI_LISTVOL_3.png
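For reference, the listings shown above correspond to commands of the following general form; the volume serial and dates are illustrative:

TSO M9CLI LISTVOL M9RES1

TSO M9CLI LISTVOL M9RES1 DATE(2019/08/15)

TSO M9CLI LISTVOL M9RES1 DATER(2019/01/01-2019/08/15)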

BACKDSN

The BACKDSN command creates a backup of a single data set. This command applies to both SMS-managed and non-SMS-managed data sets and is intended to supplement the scheduled policies.

Several considerations pertain when backing up data sets using the BACKDSN command:

  1. When specifying RETENTIONPERIOD, the new expiration date will override previous definitions, including the DFSMS Management Class specification.

  2. NEWNAME supports VSAM data sets only if a data set with the new name already exists in the catalog, and only VSAM data sets that have no AIX or PATH associated with them.

Once backed up, the backup is immediately available for LISTDSN and RESTDSN.

The backup will be available in the UI after a periodic synchronization, which is performed automatically every 5 minutes by default. If needed, on-demand synchronization can be performed via the UI, by clicking the “SYNC WITH STORAGE” button in the “Agents” tab. See the Administrator and User Guide for more information.

The command supports the following additional keywords.

  • RETENTIONPERIOD - used to specify a retention period for the archived data set.

  • RESET - by default, BACKDSN does not reset the backup change bit. To reset it, specify RESET.

  • NEWNAME, NEWDATE, NEWTIME - optional parameters used to assign a new name, date and time to the backup that is created. NEWDATE and NEWTIME are only valid together with the NEWNAME parameter.

BACKDSN - Syntax
M9CLI BACKDSN <dsname> 
 [RETENTIONPERIOD|RETPD(<nnn>d|<nnn>w|<nnn>m|<nnn>y)]
 [RESET] [NEWNAME(<dsname>) [NEWDATE(<date>) NEWTIME(<time>)]]

BACKDSN - Required Parameters

dsname

A specific data set name. The data set must be cataloged.

BACKDSN - Optional Parameters

Option

[Short option]

Description

Format

Examples

RETENTIONPERIOD

[RETPD]

Specifying RETENTIONPERIOD will set the retention period manually, regardless of any DFSMS Management Class.

Not specifying RETPD requires special permissions and implies that this backup doesn’t have any expiration date and will never be deleted automatically.

To omit the keyword, the user must have READ access to the Model9 Resource M9.CLI.BACKDSN.PERM

A number followed by a character to specify:

d - for days

w - weeks

m - months

y - years

3d - The data set will expire in 3 days

RESET

When RESET is specified, the backup resets the data set’s change bit

NEWNAME

Specifies a new name to assign to the backup. The user must have READ access to the Model9 Resource M9.CLI.BACKDSN.NEWNAME

<dsname>

NEWDATE

Specifies the date to assign to the backup. If NEWDATE is specified without the NEWNAME parameter, the BACKDSN command will fail. If NEWDATE is specified without NEWTIME, the current time will be used. If NEWDATE is a future date, the command will fail.

The user must have READ access to the Model9 Resource M9.CLI.BACKDSN.NEWDATE

DATE <yyyy/mm/dd>

NEWTIME

Specifies the time to assign to the new backup. If NEWTIME is specified without NEWDATE, the current date will be used. If the combination of NEWDATE and NEWTIME is in the future, the command will fail.

The user must have READ access to the Model9 Resource M9.CLI.BACKDSN.NEWTIME

TIME <HH:MM>

BACKDSN - Output

The output displays the given BACKDSN command and parameters, followed by the requested information:

ZM9I051I Data set <dsname> was backed up successfully with UNIQUEID<uniqueid>

If the data set could not be backed up, message ZM9I044E will be displayed, followed by the relevant error messages:

ZM9I050E An eligible data set was not found for the backup command
BACKDSN - Examples

Backing up a data set with no expiration date (requires permission):

TSO M9CLI BACKDSN M9.USER.SOURCE 

Backing up a data set to be deleted after a year:

TSO M9CLI BACKDSN M9.USER.SOURCE RETPD(1Y) 

Backing up a data set with a new name to be deleted after a month:

TSO M9CLI BACKDSN M9.USER.SOURCE NEWNAME(M9.USER.SOURCE.NEW) RETPD(1M) 

Backing up a data set with a new name, new date and new time to be deleted after 10 days:

TSO M9CLI BACKDSN M9.USER.SOURCE NEWNAME(M9.USER.SOURCE.NEW) NEWDATE(2012/12/30) NEWTIME(14:00) RETPD(10D) 
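Backing up a data set and resetting its change bit, to be deleted after 30 days (the data set name and retention value are illustrative):

TSO M9CLI BACKDSN M9.USER.SOURCE RESET RETPD(30D)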

RESTDSN

The RESTDSN command restores a data set backup copy, either from an incremental backup or from a volume full dump.

RESTDSN - Logic

Only one data set can be restored per request. Unless specified otherwise, the RESTDSN does not replace the source data set, and does catalog the restored target copy. By default, the most current backup copy is restored.

RESTDSN - Syntax
M9CLI RESTDSN <dsname> [VOLume(<volumePattern>)]
 [DATE(yyyy/mm/dd)|DATERange(yyyy/mm/dd-yyyy/mm/dd)]
 [ENTry(<integer>)|UNIQueid(<uniqueId>)] 
 [NEWVol(<volume>)] 
 [NEWName(<dsname>)] 
 [NEWMClass(<managementClass>)|NULLMC]
 [NEWSClass(<storageClass>)|NULLSC]
 [BYPASSACS]
 [NODUMP]
 [NOCATalog] 
 [REPLace]
RESTDSN - Required parameters

dsname

A specific data set name, which may be enclosed in apostrophes. The user must have READ authorization for the specified data set.

RESTDSN - Optional Parameters

Option

[Short option]

Description

Format

Examples

VOLUME

[VOL]

The volume on which the data set resides

Full name or using a pattern

SYSRES PROD* SYS%RS

DATE

Requests a backup copy that was created on a specific date only. DATE and DATERANGE are mutually exclusive.

yyyy/mm/dd

2019/08/15

DATERANGE

[DATER]

Requests a backup copy that was created within a specific date range. DATERANGE and DATE are mutually exclusive.

yyyy/mm/dd-yyyy/mm/dd

2019/01/01-2019/08/15

ENTRY

[ENT]

A positive sequential number from 0, 1, 2, and so on, representing the available entries of a backup copy, according to the specified criteria.

<integer>

0 - for the latest copy (the default).

1 - for the copy prior to the latest, and so on.

UNIQueid

[UNIQ]

An 8-byte ID that when combined with the data set name identifies each backup copy. Use UNIQueid to refer to a specific copy of a data set without having to specify any additional filters. UNIQueid and ENTRY are mutually exclusive.

<xxxxxxxx> 8 bytes

UNIQueid can be taken from a previous LISTDSN command’s output, to be used in RESTDSN.

NEWVOL

[NEWV]

The command will attempt to restore the data set to the specified volume. For SMS-managed data sets, the decision whether to honor the request is done by ACS routines. For non-SMS managed data sets, the decision whether to honor the request is made by the operating system according to the availability of the volume.

<volume>

PROD01

NEWNAME

[NEWN]

Restore the data set with a new name. The user must have ALTER authorization for the new data set name. The new name may be enclosed in apostrophes.

<dsname>

M9.P.JOBS

NEWMCLASS

[NEWMC]

Restore the data set with the specified management class as input to the ACS routines instead of the management class the data set originally had when it was archived. NEWMCLASS and NULLMC are mutually exclusive

<management-Class>

M9APPLMC

NULLMC

Restore the data set with null management class as input to the ACS routines instead of the management class the data set originally had when it was archived. NEWMCLASS and NULLMC are mutually exclusive

-

-

NEWSCLASS

[NEWSC]

Restore the data set with the specified storage class as input to the ACS routines instead of the storage class the data set originally had when it was backed up. NEWSCLASS and NULLSC are mutually exclusive

<storage- Class>

M9APPLSC

NULLSC

Restore the data set with null storage class as input to the ACS routines instead of the storage class the data set originally had when it was backed up. NEWSCLASS and NULLSC are mutually exclusive

-

-

BYPASSACS

Restore the data set, bypassing the ACS routines. The user must have READ access to the Model9 Resource M9.CLI.RESTDSN.BYPASSACS

-

-

NODUMP

Requests that RESTDSN select backup copies only, without considering data set copies found in a volume full dump. The default is to allow selection of backups from a volume full dump

-

-

NOCAT

Allows restore without cataloging. If the parameter is not specified, the default is to catalog the data set upon restore

-

Note: The default is to catalog the data set, whether it was cataloged or not when backed up

REPLACE

Restore the backup copy onto the target data set and replace it. If the parameter is not specified, the default is not to replace the current data set

-

-

RESTDSN - Output

The output displays the given RESTDSN command and parameters, followed by the requested information:

DATA SET <dsname> WAS RESTORED FROM A BACKUP COPY MADE ON <yyyy/mm/dd> at <hh:mm:ss>

DATA SET <dsname> WAS RESTORED FROM A FULL VOLUME DUMP MADE ON <yyyy/mm/dd> at <hh:mm:ss>

If the request fails, the complete log will be displayed. If the requesting user is not available, the first 100 records of the output will be printed in the executing agent's log.

RESTDSN - Examples

The following examples demonstrate various ways to use the command.

Restoring the latest copy of a data set with a new name

TSO M9CLI RESTDSN M9.USER.SOURCE NEWNAME(M9.USER.SOURCE.PREVIOUS)

Restoring the latest copy of a data set with replace

TSO M9CLI RESTDSN M9.USER.SOURCE REPLACE

Restoring the latest copy of a data set, requesting to use only incremental backup as the input

TSO M9CLI RESTDSN M9.USER.SOURCE NEWNAME(M9.USER.SOURCE.PREVIOUS) NODUMP

Restoring the latest copy of a data set with the same name, on a new volume, without cataloging

TSO M9CLI RESTDSN M9.USER.SOURCE NEWVOL(M9USER) NOCAT

Restoring the latest copy of a data set with specific management class and storage class as input to the ACS routines

TSO M9CLI RESTDSN M9.USER.SOURCE NEWMC(M9PRODMC) NEWSC(M9PRODSC)
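Restoring the latest copy created on a specific date under a new name (the date and names are illustrative)

TSO M9CLI RESTDSN M9.USER.SOURCE DATE(2019/08/15) NEWNAME(M9.USER.SOURCE.D0815)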

Restoring a non-cataloged data set within a date range using ENTRY or UNIQUEID

When restoring a copy other than the latest, it is recommended to LISTDSN the data set first for the available copies:

CLI_RESTDSN.png

The restore can be executed using the same search criteria specified on the LIST command, where the specific copy is selected using ENTRY:

M9CLI RESTDSN SYS1.PARMLIB VOL(M9RES1) DATER(2019/06/30-2019/06/01) ENTRY(2) NOCAT

There is no need to specify any additional search criteria when using the UNIQUEID parameter:

M9CLI RESTDSN SYS1.PARMLIB VOL(M9RES1) UNIQUEID(F9E717CE) NOCAT

RESTVOL

The RESTVOL command restores a volume from a volume full dump, either to the same target volume name or to a different target volume.

RESTVOL - Logic

Only one volume can be restored per request. The NEWVOL parameter must always be specified, to avoid accidental override of the source volume.

RESTVOL - Syntax
M9CLI RESTVOL <volume> NEWVol(<volume>)
 [DATE(yyyy/mm/dd)|DATERange(yyyy/mm/dd-yyyy/mm/dd)]
 [ENTry(<integer>)|UNIQueid(<uniqueId>)] 
 [COPYVolid]
RESTVOL - Required parameters

volume

A specific volume serial to be restored.

NEWVol(<volume>)

The target volume serial. The target volume has to be specified even when it’s the same as the source, to avoid accidentally overriding a live volume. When the target volume is different from the source volume, COPYVOLID has to be specified.

RESTVOL - Optional Parameters

Option

[Short option]

Description

Format

Examples

DATE

Requests volume full dump copies that were created on a specific date only. DATE and DATERANGE are mutually exclusive

yyyy/mm/dd

2019/08/15

DATERANGE

[DATER]

Requests volume full dump copies that were created within a specific date range. DATERANGE and DATE are mutually exclusive

yyyy/mm/dd-yyyy/mm/dd

2019/01/01-2019/08/15

ENTRY

[ENT]

A positive sequential number from 0, 1, 2 and so on, representing the available entries of a volume full dump copy, according to the specified criteria.

<integer>

0 - for the latest copy (the default). 1 - for the copy prior to the latest, and so on.

UNIQueid

[UNIQ]

An 8-character id that when combined with the volume name identifies each volume full dump copy. Using UNIQueid, you can refer to a specific copy without having to specify any additional filters. UNIQueid and ENTRY are mutually exclusive.

<xxxxxxxx> 8 characters

UNIQueid can be taken from a previous LISTVOL command’s output, to be used in RESTVOL.

COPYVOLID

[COPYV]

Specifies that the volume serial number (VOLID) from the input volume is to be copied to the output volume. This parameter is needed when restoring a source volume to a different target volume. The target volume is specified in the required parameter NEWVOL.

-

RESTVOL - Output

The output displays the given RESTVOL command and parameters, followed by the requested information:

VOLUME <volume> WAS RESTORED FROM A FULL VOLUME DUMP MADE ON <yyyy/mm/dd> at <hh:mm:ss>

If the request fails, the complete DFDSS log will be displayed. If the requesting user is not available, the first 100 records of the output will be printed in the executing agent's log.

RESTVOL - Examples

The following examples demonstrate use of the command.

Restoring the latest available volume full dump copy, without replace

M9CLI RESTVOL M9RES1 NEWV(M9TST1)

By default, the restore takes the latest copy and restores it to the target volume, as specified by the NEWVOL parameter.

Restoring a volume full dump copy, with replace

M9CLI RESTVOL M9RES1 NEWV(M9RES1) COPYVOLID

When replacing the source volume with the volume full dump copy, use the COPYVOLID parameter.

Restoring a volume full dump copy from a specific date

M9CLI RESTVOL M9RES1 NEWV(M9TST1) DATE(2020/04/24)

The selected copy for restore will be the latest copy created on that date, unless specified otherwise (see examples using ENTRY or UNIQUEID).

Restoring a volume full dump copy from a date range

M9CLI RESTVOL M9RES1 NEWV(M9TST1) DATER(2020/04/24-2020/04/01)

The selected copy for restore will be the latest copy created within the date range, unless specified otherwise (see examples using ENTRY or UNIQUEID).

Restoring a volume full dump copy using ENTRY or UNIQUEID

When restoring a volume full dump copy other than the latest, it is recommended to LISTVOL the volume first for the available copies:

CLI_RESTVOL.png

The restore can be executed using the same search criteria specified on the LIST command, where the specific copy is selected using ENTRY:

M9CLI RESTVOL M9RES1 NEWV(M9TST1) DATE(2020/04/24) ENTRY(2)

There is no need to specify any additional search criteria when using the UNIQUEID parameter:

M9CLI RESTVOL M9RES1 NEWV(M9TST1) UNIQUEID(F2065KSF)

ARCHIVE

The ARCHIVE command archives a single data set. This command applies to both SMS-managed and non-SMS-managed data sets and is intended to supplement the scheduled policies. Several considerations pertain when archiving SMS-managed data sets using the ARCHIVE command:

  1. The command verifies that the DFSMS Management Class parameter, “Command or Auto Migrate” is set to either COMMAND or BOTH.

  2. The command will archive the data set regardless of the DFSMS Management Class “primary days non-usage” parameter’s value.

  3. When specifying RETENTIONPERIOD, the new expiration date will override previous definitions, including the DFSMS Management Class specification.

Once archived, the archived data set is immediately available for LISTDSN and RECALL.

The archived data set will be available in the UI after a periodic synchronization, which is performed automatically every 5 minutes by default. If needed, on-demand synchronization can be performed via the UI, by clicking the “SYNC WITH STORAGE” button in the “Agents” tab. See the Administrator and User Guide for more information.

The command supports the following additional keywords. Use of the keywords requires permission.

  • NOBACKUP - allows archive without a backup copy

  • RETENTIONPERIOD - used to specify a retention period for the archived data set.

ARCHIVE - Syntax
M9CLI ARCHIVE <dsname> 
 [NOBACKUP|NOBCK]
 [RETENTIONPERIOD|RETPD(<nnn>d|<nnn>w|<nnn>m|<nnn>y)] 
ARCHIVE - Required parameters

dsname

A specific data set name. The data set must be cataloged.

ARCHIVE - Optional Parameters

Option

[Short option]

Description

Format

Examples

NOBACKUP

[NOBCK]

By default, the command will not archive a data set that is marked as changed according to the change bit, and does not have a backup copy. The command does not check whether the backup copy is the most current one, and there is no protection from manually deleting a backup of an archived data set. Specifying NOBCK will perform ARCHIVE regardless of the change bit and backup copy status. To use the keyword, the user must have READ access to the Model9 Resource M9.CLI.ARCHIVE.NOBCK

-

-

RETENTIONPERIOD

[RETPD]

By default, the expiration date of the data set will be set according to the logic described in Determining a data set expiration date. Specifying RETENTIONPERIOD will set the retention period manually, regardless of previous dates in the VTOC, CATALOG or DFSMS Management Class. To use the keyword, the user must have READ access to the Model9 Resource M9.CLI.ARCHIVE.RETPD

A number followed by a character to specify:

d - days

w - weeks

m - months

y - years

3d - The data set will expire in 3 days

ARCHIVE - Output

The output displays the given ARCHIVE command and parameters, followed by the requested information:

ZM9I043I Data set <dsname> was archived successfully

If the data set could not be archived, message ZM9I044E will be displayed, followed by the relevant error messages:

ZM9I044E An eligible data set was not found for the archive command
ARCHIVE - Examples

Archiving a data set

TSO M9CLI ARCHIVE M9.APPL.DUMP.D082020

Archiving a data set without a backup, to be deleted within a year

TSO M9CLI ARCHIVE M9.APPL.DUMP.D082020 NOBCK RETPD(1Y)
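Archiving a data set to be deleted after 6 months (the retention value is illustrative)

TSO M9CLI ARCHIVE M9.APPL.DUMP.D082020 RETPD(6M)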

RECALL

The RECALL command recalls an archived data set. The data set is re-allocated to the disk.

RECALL - Logic

Only one data set can be recalled per request. By default, the target volume is determined according to:

  1. For non-SMS data sets, the command searches for the original disk of the data set.

  2. For SMS-managed data sets, the volume is determined using the ACS routines and the current SMS definitions that are assigned to the data set.

The recalled data set will always be recataloged, except in one situation: when recalling a non SMS-managed rolled-off GDS, it will be recalled but not cataloged. An SMS-managed rolled off GDS will be cataloged and marked as rolled off. In some cases, the command is unable to re-allocate the data set, for example, when the original disk no longer exists, or if the storage group does not have enough space. In these cases, it is useful to use the NEWVOL parameter.

RECALL - Syntax
M9CLI RECALL <dsname> 
 [NEWVol(<volume>)] 
 [NEWMClass(<managementClass>)|NULLMC]
 [NEWSClass(<storageClass>)|NULLSC]
 [BYPASSACS] 
RECALL - Required parameters

dsname

A specific data set name.

RECALL - Optional Parameters

Option

[Short option]

Description

Format

Examples

NEWVOL

[NEWV]

Recalls the data set to the specified volume. For SMS-managed data sets, the decision whether to fulfill the request is made by the ACS routines. For non-SMS managed data sets, the decision whether to fulfill the request is made by the operating system.

<volume>

6 characters

PROD01

NEWMCLASS

[NEWMC]

Recalls the data set with the specified management class as input to the ACS routines, instead of the management class originally associated with the data set when it was archived. NEWMCLASS and NULLMC are mutually exclusive.

<management-Class>

8 characters

M9APPLMC

NULLMC

Recalls the data set with a null management class as input to the ACS routines, instead of the management class originally associated with the data set when it was archived. NEWMCLASS and NULLMC are mutually exclusive.

-

-

NEWSCLASS

[NEWSC]

Recalls the data set with the specified storage class as input to the ACS routines, instead of the storage class originally associated with the data set when it was archived. NEWSCLASS and NULLSC are mutually exclusive.

<storage-Class>

8 characters

M9APPLSC

NULLSC

Recall the data set with a null storage class as input to the ACS routines, instead of the storage class originally associated with the data set when it was archived. NEWSCLASS and NULLSC are mutually exclusive.

-

-

BYPASSACS

Recalls the data set, bypassing the ACS routines. The user must have READ access to the Model9 Resource M9.CLI.RESTDSN.BYPASSACS

-

-

RECALL - Output

The output displays the given RECALL command and parameters, followed by the requested information:

Data set <dsname> was recalled
RECALL - Examples

The following examples demonstrate use of the command.

Recalling a data set to a different volume

TSO M9CLI RECALL M9.USER.LOAD NEWV(M9TST1)

Recalling to a new volume can be useful when the original volume is no longer available in the system, or, for example, does not have enough space to accommodate the recalled data set.

Recalling an SMS-managed data set with specific management class and storage class as input to the ACS routines

TSO M9CLI RECALL M9.USER.LOAD NEWMC(M9APPLMC)NEWSC(M9APPLSC)

Recall can be used to change the original SMS parameters of the data set. The decision whether to fulfill the request is made by ACS routines.
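Recalling a data set, letting the system determine the target volume (a minimal form of the command)

TSO M9CLI RECALL M9.USER.LOAD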

DELBACK

The DELBACK command is used to delete non-expired backups that were created by the BACKDSN command. Expired backups are automatically deleted by the life cycle process and backups created by a policy are deleted by the policy as part of the generation management process.

The user must have READ access to the Model9 Resource M9.CLI.DELBACK.

The deleted data set will be visible in the UI until a periodic synchronization, which is performed automatically every 5 minutes by default. If needed, on-demand synchronization can be performed via the UI, by clicking the “SYNC WITH STORAGE” button in the “Agents” tab. See the Administration and User Guide for more information.

Warning

The DELBACK action cannot be undone.

DELBACK - Syntax
M9CLI DELBACK <dsname> UNIQUEID<uniqueId> [PURGE]

DELBACK - Required parameters

dsname

A specific data set name.

UNIQUEID

The unique id, or a partial prefix of the unique id, of the backup to be deleted.

DELBACK - Optional parameters

Option

[Short option]

Description

Format

Examples

PURGE

Deletes the backup of a data set even though it has not yet expired. The user must have READ access to the Model9 Resource M9.CLI.DELBACK.PURGE.

DELBACK - Output

The output displays the given DELBACK command and parameters, followed by the requested information:

ZM9I055I Backup of data set <data set> with UNIQUEID <id> was deleted successfully

If the backup has not yet expired, it will not be deleted and message ZM9I057E will be displayed:

ZM9I057E Backup of data set <data set> has not yet expired, use the PURGE keyword to force delete regardless of expiration
DELBACK - Examples

Deleting a backup that has not yet expired or has no expiration date:

TSO M9CLI DELBACK M9.USER.SOURCE UNIQ 9A09F2C3 PURGE
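Deleting an expired backup before the life cycle process removes it (no PURGE required; the UNIQUEID value is illustrative):

TSO M9CLI DELBACK M9.USER.SOURCE UNIQ 9A09F2C3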

DELARC

The DELARC command is used to delete an expired archived data set and its catalog entry, without having to recall it first. Use DELARC instead of an IDCAMS DELETE to delete an archived data set without causing an automatic recall. The deleted data set will be visible in the UI until a periodic synchronization, which is performed automatically every 5 minutes by default. If needed, on-demand synchronization can be performed via the UI, by clicking the “SYNC WITH STORAGE” button in the “Agents” tab. See the Administration and User Guide for more information.

The command supports additional keywords to allow deletion of an archived data set that has not yet expired or a data set that does not have a valid expiration date. To determine whether a data set has expired, the command follows the logic described in Determining a data set expiration date.

Warning

The DELARC action cannot be undone.

DELARC - Syntax
M9CLI DELARC <dsname>
 [PURGE]
DELARC - Required parameters

dsname

A specific data set name.

DELARC - Optional Parameters

Option

Description

Format

Examples

PURGE

Deletes the archived data set, although it has not yet expired. The user must have READ access to the Model9 Resource M9.CLI.DELARC.PURGE

-

-

DELARC - Output

The output displays the given DELARC command and parameters, followed by the requested information:

Archived data set deleted: <dsname>

If the data set has not yet expired, it will not be deleted and message ZM9I038E will be displayed:

ZM9I038E Archive data set has not yet expired, use the PURGE keyword to delete data set <data set>
DELARC - Examples

Deleting an expired archived data set

TSO M9CLI DELARC M9.APPL.SMF.TEMP

Deleting an archived data set that has not yet expired

TSO M9CLI DELARC M9.APPL.SMF.PROD PURGE

Determining a data set expiration date

The process of determining a data set expiration date is used in the following commands:

BACKDSN

The command determines the expiration date of the data set according to the RETENTIONPERIOD parameter to allow the life cycle management to delete the data set after its expiration date has passed. If the parameter is not specified, the data set will not have an expiration date at all and will not be deleted by the lifecycle.

ARCHIVE

The command determines the expiration date of the data set according to the description listed below and logs it internally to allow the life cycle management to delete the data set after its expiration date has passed. This process can be skipped by specifying the RETENTIONPERIOD parameter to set the retention period manually.

DELARC

The command determines the expiration date of an archived data set in order to decide whether it is eligible for immediate deletion. Specifying the PURGE keyword will delete the archived data set regardless of the determined expiration date. It can be used to delete data sets that have not yet expired or data sets that do not have a valid expiration date.

To calculate the data set expiration date, the commands explore the following:

1. Expiration date in the data set’s VTOC entry at the time of archive

2. Expiration date in the data set’s catalog entry at the time of archive

For VSAM data sets, the catalog entry of the cluster is inspected for a valid expiration date. For non-VSAM data sets, both the VTOC entry and the catalog entry are searched for a valid expiration date. The latest date of the two is considered to be the determinative expiration date.

DELARC only: If a valid expiration date is found and it is earlier than the current date, the data set and its catalog entry are deleted. If no valid expiration date was found, or if the expiration date is later than the current date, the data set is not deleted.

For SMS-managed data sets, if no valid expiration date was found, the command continues to search the data set’s Management Class expiration attributes. The data set’s Management Class is determined according to SMS rules:

1. The data set’s Management Class, if one is assigned

2. If not, the default Management Class, if one is assigned

3. If not, the Management Class default values’ expiration attributes are compared to the data set’s VTOC attributes:

  • The data set’s creation date is compared to EXPIRE AFTER DATE/DAYS

  • The data set’s last reference date is compared to PRIMARY DAYS NON-USAGE

If both Management Class attributes have values and can be used to calculate a valid expiration date, the latest of the two is considered to be the determinative expiration date.

DELARC only: If a valid expiration date is determined and is earlier than the current date, the data set and its catalog entry are deleted. If no valid expiration date was found, or if the expiration date has not yet passed, the data set is not deleted. Note that the default value for both Management Class attributes is NOLIMIT, meaning that the data set will not be deleted by automatic processes, but it can be deleted by a manual action, such as the DELARC command.
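For illustration (all values are hypothetical), consider an SMS-managed non-VSAM data set archived with no expiration date in its VTOC or catalog entry, created on 2020/01/10, last referenced on 2020/03/01, and assigned a Management Class specifying EXPIRE AFTER DATE/DAYS of 365 and PRIMARY DAYS NON-USAGE of 30:

  • Creation date + 365 days = 2021/01/09

  • Last reference date + 30 days = 2020/03/31

The later of the two, 2021/01/09, is the determinative expiration date, so a DELARC issued before that date would require the PURGE keyword.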

Special Cases

Directing tape allocations to disk using ACS Routines

This section describes how to use SMS support to enable operations with no dependency on virtual or physical tape devices and without requiring a change to existing applications and JCL jobs. Once data sets have been redirected to disk, Model9 Data Manager for Mainframe is able to migrate them to its target storage.

Automatic Class Selection (ACS) routines can be used to determine the SMS classes and storage groups for data sets and objects in an SMS complex. For storage administrators, ACS routines automate and centralize the process of determining SMS classes and storage groups.

Through ISMF, it is possible to create four ACS routines in an SCDS: one for each type of SMS class and one for storage groups. After activating an SMS configuration, SMS executes ACS routines for the following operations:

  • JCL DD statements (DISP=NEW, DISP=MOD)

  • Dynamic allocation requests (DISP=NEW, DISP=MOD for a nonexistent data set)

  • DFSMSdss COPY, RESTORE, and CONVERTV commands

  • DFSMShsm RECALL, RECOVER, and class transitions

  • Access method services ALLOCATE, DEFINE, and IMPORT commands

  • OAM processing for STORE, CHANGE, and class transition

  • Local data set creation by remote application through Distributed FileManager/MVS

  • MVS data sets or z/OS UNIX files created by remote application through the z/OS Network File System server

When ACS routines are executed, they can be used to examine existing allocation attributes and provide new allocation attributes if needed.

Implementation steps

  1. Identify device generic and esoteric unit names to be automatically directed to disk. For example, those could be TAPE, CART, 3480, 3490, 3590, etc.

  2. Define a storage group of volumes to which data sets allocating tape devices should be directed, e.g. storage group M9POOL.

  3. Define other SMS constructs to be assigned to data sets allocated in the M9POOL such as Data Class, Storage Class and Management class. For considerations and examples, see Sample SMS construct definitions below.

  4. Update the existing ACS routines to redirect data sets allocated with unit names identified in step 1 to the storage group defined in step 2. See Sample ACS-routine definitions below.

  5. Test the updated ACS routines. See JCL examples and test cases below.

  6. Activate new ACS routines into SMS configuration.

Determining the size of the M9POOL storage group

The M9POOL should have enough space to store data sets from the time they are allocated until a data set archive policy runs and archives them to object storage. It is recommended to run an archive policy at least once a day.

A starting point for sizing the M9POOL would be 1% of the total existing VTL capacity (or total physical tape capacity). For example, for a 100TB VTL, a pool of 1TB should be used. SMS storage group capacity can be increased or decreased dynamically without interrupting data set allocations and without changing ACS routines.

Handling unit affinity

Unit affinity allocations are ones specifying the UNIT=AFF= parameter of the DD statement. In tape processing concepts, unit affinity is used to minimize the number of tape drives used by a job step. The system attempts to use the same tape drive for a request that specifies UNIT=AFF for both the referenced and referencing DD statements. When allocating data sets on disk in an SMS-managed storage group, using the same device is meaningless. It is sufficient that the data sets will be allocated in the same storage group of volumes. Specifying unit affinity will have no effect. For more information, see Using tape mount management techniques in z/OS DFSMS Implementing System-Managed Storage.

Handling volume reference

Volume referencing allocations are ones specifying the VOL=REF= parameter of the DD statement. In tape processing concepts, volume reference is used to allocate a data set on the same tape volume as a previously allocated data set. When allocating data sets on disks in an SMS-managed storage group, it is sufficient that the data sets be allocated in the same storage group of volumes. Specifying volume reference will have no effect. For more information, see Using Volume Reference to System-Managed Data Sets in z/OS DFSMS Implementing System-Managed Storage.

Modifying ALLOCxx parameter

The ALLOCxx parmlib member contains installation-wide allocation defaults. The TAPE REDIRECTED_TAPE(TAPE|DASD) parameter allows you to specify whether unopened batch-allocated DASD data sets that were redirected from tape should be treated as DASD or TAPE:

  • REDIRECTED_TAPE(TAPE) is the default behavior, and causes unopened batch-allocated data sets that have been redirected from TAPE to DASD to be deleted during final disposition processing. These unopened redirected data sets are deleted regardless of the disposition requested.

  • REDIRECTED_TAPE(DASD) causes unopened batch-allocated data sets that have been redirected from TAPE to DASD to be processed according to the original disposition, as they would have been if they had been directed to DASD and not redirected to DASD from TAPE.

Note: Dynamic allocation of SMS DASD data sets that were redirected from TAPE will continue to be treated as DASD during dynamic allocation.

Sample ACS-routine definitions

In the samples below, all allocations for the TAPE and 3490 esoteric devices are assigned the M9POOL SMS constructs. Other device types can also be tested for.

Data Class

The following code demonstrates how to assign the M9POOL data class construct to all allocations for UNIT=TAPE, UNIT=CART or UNIT=3490. Allocations specifying affinity, using the UNIT=AFF= keyword are also assigned the data class construct.

WHEN (&UNIT = 'TAPE' OR &UNIT = 'CART' OR &UNIT = '3490' OR
 &UNIT = 'AFF=SMSD') 
SET &DATACLAS = 'DCM9POOL'

Management Class

The following code demonstrates how to assign the M9POOL management class construct to all allocations for UNIT=TAPE, UNIT=CART or UNIT=3490. Allocations specifying affinity, using the UNIT=AFF= keyword are also assigned the management class construct.

WHEN (&UNIT = 'TAPE' OR &UNIT = 'CART' OR &UNIT = '3490' OR
 &UNIT = 'AFF=SMSD')
SET &MGMTCLAS = 'MCM9POOL'

Storage Class

The following code demonstrates how to assign the M9POOL storage class construct to all allocations for UNIT=TAPE, UNIT=CART or UNIT=3490. Allocations specifying affinity, using the UNIT=AFF= keyword are also assigned the storage class construct.

WHEN (&UNIT = 'TAPE' OR &UNIT = 'CART' OR &UNIT = '3490' OR
 &UNIT = 'AFF=SMSD')
SET &STORCLAS = 'SCM9POOL'

Storage Group

The following code demonstrates how to assign the M9POOL storage group construct to all allocations for UNIT=TAPE, UNIT=CART or UNIT=3490. Allocations specifying affinity, using the UNIT=AFF= keyword are also assigned the storage group construct.

WHEN (&UNIT = 'TAPE' OR &UNIT = 'CART' OR &UNIT = '3490' OR
 &MGMTCLAS='MCM9POOL' OR &UNIT = 'AFF=SMSD')
SET &STORGRP = 'SGM9POOL'

Sample SMS construct definitions

Data Class

In order to minimize the occurrence of an X37 ABEND, the data class requests extended format non-VSAM data sets or extended addressability (EA) type for VSAM data sets. A large volume count is also recommended to make sure the allocated data sets can expand on multiple volumes.

Recfm . . . . . . . . . :

Lrecl . . . . . . . . . :

Override Space . . . . . : NO

Space Avgrec . . . . . . :

Avg Value . . . . : 32720

Primary . . . . . : 100

Secondary . . . . : 200

Directory . . . . :

Retpd Or Expdt . . . . . :

Volume Count . . . . . . : 255

Add'l Volume Amount . . :

Data Set Name Type . . . . . : EXT

If Extended . . . . . . . . : P

Extended Addressability . . : YES

Record Access Bias . . . . : U

RMODE31 . . . . . . . . . . :

Space Constraint Relief . . . : YES

Reduce Space Up To (%) . . : 30

Guaranteed Space Reduction : NO

Dynamic Volume Count . . . :

Compaction . . . . . . . . . :

Spanned / Nonspanned . . . . :

System Determined Blocksize . : YES

EATTR . . . . . . . . . . . . : O

Media Interchange

Media Type . . . . . . . . :

Recording Technology . . . :

Performance Scaling . . . . :

Performance Segmentation . :

Encryption Management

Key Label 1:

Encoding for Key Label 1 :

Key Label 2:

Encoding for Key Label 2 :

System Managed Buffer . . . :

System Determined Blocksize : NO

Block Size Limit . . . . . . :

EATTR . . . . . . . . . . . :

Recorg . . . . . . . . . . . :

Keylen . . . . . . . . . . . :

Keyoff . . . . . . . . . . . :

CIsize Data . . . . . . . . :

% Freespace CI . . . . . . . :

CA . . . . . . . :

Shareoptions Xregion . . . . :

Xsystem . . . . :

Reuse . . . . . . . . . . . : NO

Initial Load . . . . . . . . : RECOVERY

BWO . . . . . . . . . . . . :

Log . . . . . . . . . . . . :

Logstream Id . . . . . . . . :

FRlog . . . . . . . . . . . :

RLS CF Cache Value . . . . . : ALL

RLS Above the 2-GB Bar . . . : NO

Extent Constraint Removal . : NO

CA Reclaim . . . . . . . . . : YES

Log Replicate . . . . . . . : NO

Management Class

Verify that the data sets are assigned a management class that allows Auto Migrate.

Expiration Attributes

Expire after Days Non-usage . : NOLIMIT

Expire after Date/Days . . . . : NOLIMIT

Retention Limit . . . . . . . : NOLIMIT

Partial Release . . . . . . : YES

Migration Attributes

Primary Days Non-usage . : 0

Level 1 Days Date/Days . : 0

Level 2 Days Non-usage . : 0

Command or Auto Migrate . : BOTH

Size Less Than or Equal to:

Action . . . . . . . . . :

Size Greater than . . . . :

Action . . . . . . . . . :

GDG Management Attributes

# GDG Elements on Primary :

Rolled-off GDS Action . . :

Backup Attributes

Backup frequency . . . . . . . . . . . :

Number of backup versions . . . . . . . :

(Data Set Exists)

Number of backup versions . . . . . . . :

(Data Set Deleted)

Retain days only backup version . . . . : 180

(Data Set Deleted)

Retain days extra backup versions . . . : 30

Admin or User Command Backup . . . . . : NONE

Auto Backup . . . . . . . . . . . . . . : NO

Backup copy technique . . . . . . . . . : STANDARD

Class Transition Criteria

Transition Copy Technique . . . : STANDARD

Serialization Error Exit . . . : NONE

Cloud Name . . . . . . . . : MODEL9

AGGREGATE Backup Attributes:

# Versions . . . . . . :

Retain only Version . . :

Unit . . . . . . . . :

Retain extra Version . :

Unit . . . . . . . . :

Copy Serialization . . :

ABackup Copy Technique : STANDARD

Tape Volume Attributes

Retention Method . . . . . . :

Volume Set Management Level . :

Tape Data Set Attributes

Exclude from VRSEL . . . . . :

Retain While Cataloged . . . :

Storage Class

There are no special considerations for the storage class construct. By assigning a storage class to the data sets we ensure that the data sets are SMS-managed.

Performance Objectives

Direct Millisecond Response . . . :

Direct Bias . . . . . . . . . . . :

Sequential Millisecond Response . :

Sequential Bias . . . . . . . . . :

Initial Access Response Seconds . :

Sustained Data Rate (MB/sec) . . . :

OAM Sublevel . . . . . . . . . . . :

Availability . . . . . . . . . . . . : NOPREF

Accessibility . . . . . . . . . . . : NOPREF

Backup . . . . . . . . . . . . . . :

Versioning . . . . . . . . . . . . :

Guaranteed Space . . . . . . . . : NO

Guaranteed Synchronous Write . . : NO

Multi-Tiered SGs . . . . . . . . :

Parallel Access Volume Capability : NOPREF

Cache Set Name . . . . . . . . . :

CF Direct Weight . . . . . . . . :

CF Sequential Weight . . . . . . :

Lock Set Name . . . . . . . . . . :

Disconnect Sphere at CLOSE . . . : NO

Storage Group

The storage group should be defined with AutoMigrate=Yes to allow Model9 to migrate the data sets in the storage group according to the defined policy.

Auto Migrate . . . . . . . . : YES

Auto Backup . . . . . . . . : NO

Auto Dump . . . . . . . . . : NO

Overflow . . . . . . . . . . : NO

Migrate Sys/Sys Group Name . :

Backup Sys/Sys Group Name . :

Dump Sys/Sys Group Name . . :

Extend SG Name . . . . . . . :

Copy Pool Backup SG Name . . :

Dump Class . . . . . . . . . :

Dump Class . . . . . . . . . :

Dump Class . . . . . . . . . . . . . . . :

Dump Class . . . . . . . . . . . . . . . :

Dump Class . . . . . . . . . . . . . . . :

Allocation/Migration Threshold - High . : 85

Low . . : 1

Alloc/Migr Threshold Track-Managed - High: 85

Low : 1

Total Space Alert Threshold % . . . . . : 0

Track-Managed Space Alert Threshold % . : 0

Guaranteed Backup Frequency . . . . . . :

BreakPointValue . . . . . . . . . . . . :

Processing Priority . . . . . . . . . . : 50

JCL examples and test cases

Allocating data sets with different attributes on tape

//*DATA SET NO LABEL

//CRE1 EXEC PGM=IEFBR14

//DD1 DD DSN=QA.TAPE.DVTS.TST.REG,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE)

//*DATA SET + LABEL FILE NO. 2

//CRE2 EXEC PGM=IEFBR14

//DD1 DD DSN=QA.TAPE.DVTS.TST.LBL2,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE),LABEL=(2,,)

//*DATA SET + LABEL FILE NO. 2 + RETPD=5

//CRE3 EXEC PGM=IEFBR14

//DD1 DD DSN=QA.TAPE.DVTS.TST.LBLRTPD,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE),LABEL=(2,,RETPD=5)

//*DATA SET + BLKSIZE 50K

//CRE4 EXEC PGM=IEFBR14

//DD1 DD DSN=QA.TAPE.DVTS.TST.LRGBLKZ,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE),BLKSIZE=50000

//*DATA SET NEW + VOL=REF

//CRE5 EXEC PGM=IEFBR14

//DDREF DD DSN=QA.TAPE.DVTS.TST.VOLREF1,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE),VOL=REF=*.CRE1.DD1

//*DATA SET NEW + UNIT AFF

//CRE6 EXEC PGM=IEFBR14

//DD1 DD DSN=QA.TAPE.DVTS.TST.BASEREF,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE)

//DDREF DD DSN=QA.TAPE.DVTS.TST.REF1,UNIT=AFF=DD1,

// DISP=(NEW,CATLG,DELETE)

Copying data sets to tape using IEBGENER and IEBCOPY

//*COPY DATA SET TO TAPE IEBGENER

//CPY1 EXEC PGM=IEBGENER

//SYSPRINT DD SYSOUT=*

//SYSIN DD DUMMY

//SYSUT1 DD *,LRECL=80,BLKSIZE=32720

SOME ARCHIVE TEXT FOR TESTING

/*

//SYSUT2 DD DSN=QA.TAPE.DVTS.TST.GNR,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE)

//*COPY PDS TO TAPE IEBGENER

//CPY2 EXEC PGM=IEBCOPY

//SYSPRINT DD SYSOUT=*

//SYSUT1 DD DSN=M9.QA.TST1,DISP=SHR

//SYSUT2 DD DSN=QA.TAPE.DVTS.TST.IEBCOPY.FLAT,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE)

//*COPY FLAT TO PDS

//CPY3 EXEC PGM=IEBCOPY

//SYSPRINT DD SYSOUT=*

//SYSUT1 DD DSN=QA.TAPE.DVTS.TST.IEBCOPY.FLAT,DISP=SHR

//SYSUT2 DD DSN=QA.TAPE.DVTS.TST.IEBCOPY.REG,

// DISP=(NEW,CATLG,DELETE),

// DCB=(LRECL=80,RECFM=FB,DSORG=PO),

// SPACE=(CYL,(10,1,5)),VOL=SER=C29GN3

//*COPY DATA SET TO AN ALREADY EXISTING TAPE

//CPY4 EXEC PGM=IEBGENER

//SYSPRINT DD SYSOUT=*

//SYSIN DD DUMMY

//SYSUT1 DD *,LRECL=80,BLKSIZE=32720

SOME ARCHIVE TEXT FOR TESTING

/*

//SYSUT2 DD DSN=QA.TAPE.DVTS.TST.LBL2,UNIT=TAPE,

// DISP=(OLD)

//*COPY DATA SET TO TAPE MOD IEBGENER

//CPY5 EXEC PGM=IEBGENER

//SYSPRINT DD SYSOUT=*

//SYSIN DD DUMMY

//SYSUT1 DD *,LRECL=80,BLKSIZE=32720

SOME ARCHIVE TEXT FOR TESTING

/*

//SYSUT2 DD DSN=QA.TAPE.DVTS.TST.LBL2,UNIT=TAPE,

// DISP=(MOD)

Performing DFDSS DUMP directly to tape

//DMP1 EXEC PGM=ADRDSSU

//SYSPRINT DD SYSOUT=*

//FLAT DD DSN=QA.TAPE.DVTS.TST.DFDSS.T1,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE)

//* REQUIRES: STGADMIN.ADR.DUMP.TOLERATE.ENQF

//SYSIN DD DATA,DLM=##,SYMBOLS=JCLONLY

DUMP DATASET +

(INCLUDE +

( QA.NONVSAM.TYPE1.TEMPLATE, +

QA.VSAM.TYPE1.TEMPLATE +

)) +

OUTDD(FLAT) +

ALLDATA(*) +

ALLEXCP +

OPTIMIZE(3) +

TOL(ENQF)

##

/*

//* DUMP WITH LARGE BLKSIZE

//DMP2 EXEC PGM=ADRDSSU

//SYSPRINT DD SYSOUT=*

//FLAT DD DSN=QA.TAPE.DVTS.TST.DFDSS.T2,UNIT=TAPE,

// DISP=(NEW,CATLG,DELETE),BLKSIZE=50000

//* REQUIRES:

//* STGADMIN.ADR.DUMP.TOLERATE.ENQF

//SYSIN DD DATA,DLM=##,SYMBOLS=JCLONLY

DUMP DATASET +

(INCLUDE +

(QA.NONVSAM.TYPE1.TEMPLATE, +

QA.VSAM.TYPE1.TEMPLATE +

)) +

OUTDD(FLAT) +

ALLDATA(*) +

ALLEXCP +

OPTIMIZE(3) +

TOL(ENQF)

##

/*

Using multiple IEBGENER steps and tape stacking

//*TAPE IEBGENER STACKING

//S1 EXEC PGM=IEBGENER

//SYSUT1 DD *,LRECL=80

SOME ARCHIVE TEXT FOR TESTING

/*

//SYSUT2 DD DISP=(,CATLG,DELETE),

// DSN=QA.TAPE.DVTS.GNR.FILE1,

// VOL=(,RETAIN,,20),

// LABEL=01,

// UNIT=(TAPE,,DEFER)

//SYSPRINT DD SYSOUT=*

//SYSIN DD *

/*

//S2 EXEC PGM=IEBGENER

//SYSUT1 DD *,LRECL=80

SOME ARCHIVE TEXT FOR TESTING

/*

//SYSUT2 DD DISP=(,CATLG,DELETE),

// DSN=QA.TAPE.DVTS.GNR.FILE2,

// VOL=(,RETAIN,,20,

// REF=*.S1.SYSUT2),

// LABEL=02,

// UNIT=(TAPE,,DEFER)

//SYSPRINT DD SYSOUT=*

//SYSIN DD *

/*

//S3 EXEC PGM=IEBGENER

//SYSUT1 DD *,LRECL=80

SOME ARCHIVE TEXT FOR TESTING

/*

//SYSUT2 DD DISP=(,CATLG,DELETE),

// DSN=QA.TAPE.DVTS.GNR.FILE3,

// VOL=(,RETAIN,,20,

// REF=*.S2.SYSUT2),

// LABEL=03,

// UNIT=(TAPE,,DEFER)

//SYSPRINT DD SYSOUT=*

//SYSIN DD *

/*

Using IEBGENER with tape unit affinity

//*TAPE IEBGENER UNIT AFF

//S1 EXEC PGM=IEBGENER

//SYSUT1 DD DISP=SHR,DSN=QA.TAPE.DVTS.GNR.FILE1

// DD DISP=SHR,DSN=QA.TAPE.DVTS.GNR.FILE2

// DD DISP=SHR,DSN=QA.TAPE.DVTS.GNR.FILE3

//SYSUT2 DD DISP=(,CATLG,DELETE),

// DSN=QA.TAPE.DVTS.GNR.FILEALL,

// VOL=(,RETAIN,,20),LABEL=01,

// UNIT=(TAPE,,DEFER)

//SYSPRINT DD SYSOUT=*

//SYSIN DD *

/*

Restoring data sets or volumes using the z/OS DFSMSdss utility (native restore)

Restoring a data set backup or a full volume dump from a Model9 file without using the Model9 agent is possible for data set backup copies, archives and full volume dumps; it is not available for z/OS UNIX file backups. Once the Model9 data file is on mainframe storage, it is in DFDSS dump format and can be used as input for a DFDSS restore.

To perform this process, follow these steps:

Step 1: Download the file

Download the file from the object storage to any Linux server using a supplied script that executes AWS CLI to list and download the file.

Step 2: Upload the file to Unix System Services on z/OS

Step 3: Allocate and convert the file from z/OS USS to mainframe DASD or tape

DFDSS cannot be used directly on the object storage.

  • When planning to allocate the file on tape, no special configuration is needed. If no tape drives are available, it is possible to use a third-party vendor for tape drive services.

  • When planning to allocate the file to DASD, the Model9 file should initially be created with a block size that is suitable for DASD. To do so, the agent running the policy should be configured accordingly. See Optional: additional prerequisites for DASD restore.

Once the file is on DASD or tape, in DFDSS dump format - DFDSS restore can be used to extract the requested data set / volume.

Prerequisites

z/OS

  1. Copy the sample REXX exec M9F2DSSR from Model9 SAMPLIB to your EXEC directory. The source for the REXX is also available here: M9F2DSSR REXX in Restoring the original data set or volume.

  2. Copy the sample job M9F2DSSJ from Model9 SAMPLIB to your JCL directory. The source for the JCL is also available here: M9F2DSSJ JCL in Allocate and convert the file from z/OS USS to DASD or tape.

Linux server

  1. FTP - verify that you have an FTP client on the server / desktop that is about to be used as the middleware between the object storage and the mainframe.

  2. Firewall settings - verify that you have access to both the object storage and the mainframe.

  3. AWS CLI - install the AWS command line on any Linux server, see Amazon installation instructions: AWS command line interface.

  4. Set the object storage access key and password using the following command:

aws configure

[Enter]

AWS Access Key ID [****************F848]: <userid>

[Enter]

AWS Secret Access Key [****************qOcf]: <password>

[Enter]

Default region name [None]:

[Enter]

Default output format [json]:

[Enter]

Enter the following parameters:

<userid>

Userid, can be obtained from the agent.yml

<password>

Password, can be obtained from the agent.yml

Optional: additional prerequisites for DASD restore

Agent

  1. Set the parameter in the agent.yml:

    Parameter

    Description

    Set to

    dfdss_dasd_compatible

    DFDSS TAPE or DASD block size

    true

    Restart the agent, look for the following message in the agent STDERR DD:

    Agent starting in DASD compatible mode

  2. The gzip utility should be installed in z/OS USS, in order to extract the .gz suffixed dump file. Download gzip from the Rocket Software “Open Source Tools” website: https://www.rocketsoftware.com/zos-open-source/tools

  3. Unpax to any directory, for example /usr/bin:

    pax -rvf gzip.pax.Z.bin

Model9 policy

  1. The policy must use an agent configured according to the above prerequisites.

  2. Any compression can be used except lz4.

Download the file from the object storage

Use the $MODEL9_HOME/Utilities/M9F2DssRetrieval.sh script to locate and download the file from the object storage:

M9F2DssRetrieval.sh <copyType> <copyName> <copyDate> <url> <bucket> <complex> [outputFolder]

Update the parameters as follows:

Parameter

Description

<copyType>

Either 'dataset' or 'volume'

<copyName>

The data set or volume name to be retrieved

<copyDate>*

The date of the backup / archive / full volume dump in YYYY-MM-DD[_hhmmss] format or ‘-’ for all dates

<url>**

The object storage URL. For example: https://objectstorage.com:443

<bucket>

The bucket name

<complex>

The complex name, as obtained from the agent.yml. The default complex is group-&SYSPLEX

<outputFolder>

A local folder to download the file to. The default is /tmp

* When specifying ‘-’ for the date, the script will display the data sets / volumes that were found, with the associated creation date in object storage. Choose the desired copy and rerun the script with the appropriate date.

** The script assumes the AWS configure command was issued to set up the access key and password for the object storage, see the Prerequisites.

Example:

M9F2DssRetrieval.sh volume ANFD00 2019-09-16 http://objectstorage.com:80 model9-data PRDPLEX /model9_retrieve

FTP the file to z/OS Unix System Services

  1. FTP the file to the z/OS USS environment in binary mode to any directory (a sample FTP session is shown after these steps).

  2. After the file has been transferred, if it is suffixed with “gz”, unzip the file using the gzip command:

    /usr/bin/gzip -d <TheTransferredFile>

This will create an unzipped file, without the .gz suffix.
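For reference, a transfer of this form could be used; the host name, target directory and file name are illustrative, and any FTP client that supports binary mode will do:

ftp mainframe.example.com

binary

cd /u/model9/retrieve

put ANFD00-2019-09-16.m9b.gz

quit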

Allocate and convert the file from z/OS USS to DASD or tape

Use the M9F2DSSJ JCL to convert the Model9 backup (m9b) file to a mainframe DFDSS input dump file. Update the parameters in the job as follows:

Parameter

Description

<VOLSER>

The target volser for the DFDSS input dump file

<DATASET>

The target data set for the DFDSS input dump file

<SPACE>

The space needed for the DFDSS input dump file, e.g. CYL,(500,50)

<YOUR.REXX.DIR>

The REXX directory

<m9bFilePath>

The Model9 backup file suffixed with .bin

M9F2DSSJ JCL

//M9F2DSSJ JOB 'ACCT#',REGION=0M,NOTIFY=&SYSUID

//* CONVERT A Model9 .bin format DUMP FILE TO A DFDSS DUMP FILE

//TSO EXEC PGM=IKJEFT01

//DUMPFILE DD DISP=(NEW,CATLG),UNIT=3390,

// DSORG=PS,LRECL=0,RECFM=U,BLKSIZE=32760,SPACE=(<SPACE>),

// VOL=SER=<VOLSER>,DSN=<DATASET>

//SYSTSPRT DD SYSOUT=*

//SYSTSIN DD *

EX '<YOUR.REXX.DIR>(M9F2DSSR)' '<m9bFilePath>'

/*

Restoring the original data set or volume

Use the <DATASET> produced by the M9F2DSSJ JCL as input for DFDSS RESTORE.

Restoring a single data set:

//RESTORE JOB 'ACCT#',REGION=0M,NOTIFY=&SYSUID

//* DFDSS RESTORE SINGLE FILE

//DFDSS EXEC PGM=ADRDSSU

//SYSPRINT DD SYSOUT=*

//DUMPFILE DD DISP=SHR,DSN=<DATASET>

//SYSIN DD *

RESTORE DATASET(INC(**)) -

INDD(DUMPFILE) CATALOG SPHERE -

ADMIN TOL(ENQF) PROCESS(UNDEF) CANCELERROR

/*

Restoring a full volume:

//RESTORE JOB 'ACCT#',REGION=0M,NOTIFY=&SYSUID

//* DFDSS RESTORE FULL

//DFDSS EXEC PGM=ADRDSSU

//SYSPRINT DD SYSOUT=*

//DUMPFILE DD DISP=SHR,DSN=<DATASET>

//SYSIN DD *

RESTORE FULL -

OUTDY(<VOLSER>) -

INDD(DUMPFILE) -

ADMIN PURGE CANCELERROR

/*

M9F2DSSR REXX

/* Rexx *************************************************************/

/* Convert an unencrypted Model9 dump file to a DFDSS dump file */

/* Input: 1. m9bPath - a USS path to an unencrypted dump file */

/* 2. DUMPFILE - DD:DSORG=PS,RECFM=U,LRECL=0,BLKSIZE=32760 */

/********************************************************************/

parse arg m9bPath .

dumpDD="DUMPFILE"

call syscalls 'ON'

address syscall

'open (m9bPath)',

O_rdonly,

000

if retval=-1 then

do

say 'file <'m9bPath'> not opened, error codes' errno errnojr

return 8

end

fd=retval

address tso "EXECIO 0 DISKW" dumpDD "(OPEN"

retval=rc

say dumpDD "OPEN RC:" retval

if (rc > 0) then

do

say 'DD <'dumpDD'> not opened, error code' retval

return 8

end

'read' fd 'bytes 4'

if (retval <= 0) then

do

say dumpDD "READ RC:" retval

return 8

end

do while retval <> 0

readSize = c2d(bytes)

say "Bytes to read:" readSize

'read' fd 'bytes' readSize

if retval=-1 then

do

say 'bytes not read, error codes' errno errnojr

return 8

end

say "read bytes retval:" retval

queue bytes

address tso "EXECIO 1 DISKW" dumpDD

if (rc > 0) then

do

say dumpDD "WRITE RC:" retval

return 8

end

'read' fd 'bytes' 4

say "read length retval:" retval

end

say "Closing" dumpDD"..."

address tso "EXECIO 0 DISKW" dumpDD "(FINIS"

say "Closing" m9bPath"..."

'close' fd

return 0

Transform Services

Transform Services Overview


Monetize unlocked mainframe data, enriching business intelligence (BI), analytics and cloud applications

Leverage any disk data or historical tape data for use by analytics services and BI tools. Mainframe data migrated to object storage on-premises or in the cloud can be transformed to standard data formats without requiring any access to the mainframe, instantly providing it for use in cloud applications and analytics tools.

For any questions, contact us at [email protected]

website http://www.model9.io

Transform Services Installation guide

Step-by-step installation

This guide will instruct you how to set up and invoke the transform service of the Model 9 Cloud Data Gateway from the mainframe, using JCL. The service will transform a Model9 data set backup copy / archive into a readable file in the cloud. Once transformed, the readable file can be accessed directly or via data analytics tools.

If using the Model 9 Cloud Data Gateway as an on-premises service, refer to the Step-by-step deployment guide before continuing with the steps in this guide.

Note

The transform service is invoked as SaaS using the URL:

https://us-east-1.model9api.io/

Step 1: Set up
Verify Model 9 Cloud Backup and Recovery for z/OS is installed

Model9 is responsible for delivering the data set from the mainframe to the cloud / on-premises storage. The data set is delivered as a backup copy or an archive, and serves as the input to the transform service.

Download z/OS cURL

This free tool allows you to invoke the transform service from z/OS. If cURL is not installed under /usr/bin, edit line 4 of the script below and add the path where the cURL module resides.

Step 2: Copy the script

Copy the following script to /usr/lpp/model9/transformService.sh:

#!/bin/sh
json=$1
url='https://us-east-1.model9api.io/transform'
export PATH=/usr/bin:/bin
export _EDC_ADD_ERRNO2=1
cnvto="$(locale codeset)"
headers="Content-Type:application/json"
echo "Running Model9 transform service"
output=$(curl -H "$headers" -s -k -X POST --data "$json" $url)
if ! [ -z "$output" ]; then
   echo "Transform ended with the following output:"   
   # If the answer is in ASCII then convert to EBCDIC            
   firstChar=$(echo $output | cut -c1)                           
   if [ "$firstChar" = "#" ]; then                               
      convOutput="$(echo $output | iconv -f ISO8859-1 -t $cnvto)"
   else                                                          
      convOutput=$output                                         
   fi                                                            
   echo "$convOutput"                                            
fi
status=$(echo $convOutput | tr -s " " | cut -d, -f1 | cut -d" " -f3)
echo "Transform ended with status: $status"
if [ "$status" = '"OK"' ]; then
   exit 0
elif [ "$status" = '"WARNING"' ]; then
   exit 4
else
   exit 8
fi
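
To test the service outside of JCL, the same script can be invoked directly from a z/OS USS shell, passing the JSON request as a single quoted argument. This is a minimal sketch that reuses the placeholders from the JCL in the next step; substitute your own data set name and object storage details:

/usr/lpp/model9/transformService.sh '{
    "input" : { "name": "<DATA-SET>", "complex": "group-PLEX1" },
    "output": { "format": "text" },
    "source": { "url": "<URL>", "api": "<API>", "bucket": "<BUCKET>",
                "user": "<USER>", "password": "<PASSWORD>" }
}'
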
Step 3: Copy the JCL

Copy the following JCL to a local library, update the JOBCARD according to your site standards:

//M9TRNSFM JOB 'ACCT#',REGION=0M,CLASS=A,NOTIFY=&SYSUID
//EXTRACT  EXEC PGM=BPXBATCH
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
//STDPARM  DD *
SH /usr/lpp/model9/transformService.sh
//         DD *,SYMBOLS=EXECSYS
'{
    "input": {
        "name"   : "<DATA-SET>",
        "complex": "group-&SYSPLEX",
        "archive": "<false|true>"
    },
    "output": {
        "prefix"      : "/transform/&LYR4/&LMON/&LDAY",
        "compression" : "none",
        "format"      : "text"
    },
    "source": {
        "url"     : "<URL>",
        "api"     : "<API>",
        "bucket"  : "<BUCKET>",
        "user"    : "<USER>",
        "password": "<PASSWORD>"
    }
}'
/*
//
Step 4: Customize the JCL
Update the object storage details

Copy the following object storage variables from the Model9 agent configuration file:

  • <URL>

  • <API>

  • <BUCKET>

  • <USER>

  • <PASSWORD>

Update the complex name

The “complex” name represents the group of resources that the Model9 agent can access. By default, this group is named group-<SYSPLEX> and it is shared by all the agents in the same sysplex. The transform JCL specifies the default, using the z/OS system symbol “&SYSPLEX”.

Note

If the default was kept for “complex” in the Model9 agent configuration file, no change is needed

If the “complex” name was changed in the Model9 agent configuration file, change the “complex” in the JCL accordingly.

Update the transform prefix

By default, the JCL will create a transformed copy of your input data set, in the same bucket, with the prefix: /transform/&LYR4/&LMON/&LDAY. The prefix is using the following z/OS system symbols:

  • &LYR4 - The year in 4 digits, e.g. 2019

  • &LMON - The month in 2 digits, e.g. 08

  • &LDAY - The day in the month in 2 digits, e.g. 10

You can change the prefix according to your needs.
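
For example, assuming the job runs on August 10, 2019 (the symbol values shown above), the default prefix resolves as follows:

/transform/&LYR4/&LMON/&LDAY  -->  /transform/2019/08/10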

Step 5: Choose the data set to be transformed

The data set to be transformed should be a backup copy or an archive, delivered by the Model9 agent:

  • <DATA-SET> - the name of the data set

  • <false|true> - if the data set is a Model9 archive, set to “true”. If the data set is a Model9 backup copy, set to “false”.

To change the attributes of the input and the output, and for a full description of the service parameters, see Service parameters.

Step 6: Run the JCL

Submit the job and view the output. See Service response and log samples for sample output.

Step 7: Access the transformed data

Based on the returned response, the outputName will point to the path inside the bucket where the transformed data resides. See Service response and log.
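
For illustration only, assuming the target object storage is AWS S3 and the AWS CLI is configured on a workstation (as in the Prerequisites for the retrieval script), the transformed object can be downloaded with a command similar to the following; the bucket, prefix and object name are placeholders:

aws s3 cp "s3://<BUCKET>/transform/2019/08/10/<OUTPUT-NAME>" ./transformed.out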

Supported storage platforms

The service works with the S3 protocol and supports all object storage types. When connecting to a NAS or SAN, it is possible to work through an open-source S3 proxy provided by Model9.


Service limitations

Data size

There is no limit on the input data set size, as long as the output size on storage is under 20GB.

Maximum elapsed time

The service is synchronous and can run for up to an hour.

Transform Services User guide

Service parameters

To run the service, specify the source object storage and identify the input data set.

REQUIRED: "source"

Note

Identify the transform source object storage, where the input resides. The source object storage details appear in the Model9 agent configuration file.

Required Keywords for "source"

{
"source": {
    "url":"<URL>",
    "api":"<API>",
    "bucket":"<USER_BUCKET>",
    "user":"<USERID>",
    "password":"<PASSWORD>"
  }
}

Optional Keywords for "source"

{
"source": {
    "useS3V4Signatures":"false"|"true"  
  }
}

Keyword | Description | Required | Default
url | The object storage / proxy URL | YES | -
api | The API protocol used by this object storage / proxy | YES | -
bucket | The bucket defined within the object storage / proxy | YES | -
user | The userid provided by the object storage / proxy | YES | -
password | The password provided by the object storage / proxy | YES | -
useS3V4Signatures | Whether to use the V4 protocol of S3. Required for certain object storage providers, such as HCP Cloud Scale and Cohesity. Relevant for api "S3" only. | NO | false
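
For example, a minimal "source" sketch for an object storage that requires V4 signatures; the URL is illustrative and the remaining values are placeholders copied from the Model9 agent configuration file:

{
"source": {
    "url":"https://objectstorage.example.com:9000",
    "api":"<API>",
    "bucket":"prod-bucket",
    "user":"<ACCESS-KEY>",
    "password":"<SECRET-KEY>",
    "useS3V4Signatures":"true"
  }
}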

OPTIONAL: “target”

Note

Identify the transform target object storage. Values not specified will be taken from “source” parameter.

{
"target": {
    "url":"<URL>",
    "api":"<API>",
    "bucket":"<USER-BUCKET>",
    "user":"<USERID>",
    "password":"<PASSWORD>",
    "useS3V4Signatures":"false"|"true"
  }
}

Keyword | Description | Required | Default
url | The object storage / proxy URL | NO | Taken from "source"
api | The API protocol used by this object storage / proxy | NO | Taken from "source"
bucket | The bucket defined within the object storage / proxy | NO | Taken from "source"
user | The userid provided by the object storage / proxy | NO | Taken from "source"
password | The password provided by the object storage / proxy | NO | Taken from "source"
useS3V4Signatures | Whether to use the V4 protocol of S3. Required for certain object storage providers, such as HCP Cloud Scale and Cohesity. Relevant for api "S3" only. | NO | false
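
For example, a minimal "target" sketch that writes the transformed output to a different bucket (the bucket name is illustrative); all other values are taken from "source":

{
"target": {
    "bucket":"analytics-bucket"
  }
}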

REQUIRED: “input”

Note

If you specify VSAM keywords for a sequential input data set, the transform will be performed and a warning message will be issued

Required Keywords for "input"

{
"input": {
    "name":"<DSN>",
    "complex":"<group-SYSPLEX>"
    }
}

Optional Keywords for "input"

{
"input": {
    "type":"backup"|"archive"|"import"|"cloudcopy",
    "entry":"0|<N>",
    "prefix":"model9|<USER-PREFIX>",
    "recordBinary":"false|true",
    "recordCharset":"<CHARSET>",
    "vsam":{
        "keyBinary":"false|true",
        "keyCharset":"<CHARSET>"
    }
  }
}

Keyword | Description | Default
name | Name of the original data set | MF data set legal name, case insensitive
complex | The Model9 resource complex name as defined in the agent configuration file | String representing the complex
type | The type of the input data set, according to the Model9 Cloud Data Manager policy that created it: "backup" - a backup copy (default); "archive" - an archived data set; "import" - a data set imported from tape; "cloudcopy" - a data set delivered using the Model9 Cloud Copy free utility | "backup" (case insensitive)
entry | When the type is "backup", "entry" represents the generation. The default is "0", meaning the latest backup copy. Entry "1" is the backup copy taken prior to the latest copy, and so on | "0"
prefix | The environment prefix as defined in the agent configuration file | "model9"
recordBinary | Whether the record input is binary. Applies to all "record" input (PS, PDS, VSAM data) | "false" (case insensitive)
recordCharset | If the record input is not binary, the character set of the input. Applies to all "record" input (PS, PDS, VSAM data) | "IBM-1047"
keyBinary | If the input is a VSAM data set, whether the VSAM key is binary. The output is in base64 format | "false" (case insensitive)
keyCharset | If the input is a VSAM data set and the key is not binary, the character set of the VSAM key | "IBM-1047"
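
For example, a sketch of an "input" definition that selects the previous backup generation of a data set; the data set name is a placeholder and the complex follows the sample value used elsewhere in this guide:

{
"input": {
    "name":"PROD.SALES.DATA",
    "complex":"group-PLEX1",
    "type":"backup",
    "entry":"1",
    "recordCharset":"IBM-1047"
    }
}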

OPTIONAL: "output"

Note

  • The output is the transformed data of the MF data set, accessible as S3 object

Note

  • When transforming a file with the same name as an existing file in the target, the existing file will be replaced by the newly transformed file.

    Note that the service does not delete previously transformed files but rather overwrites files with the same name. When re-transforming a file using the “split” function, make sure to remove any previously transformed files to avoid retaining split files from different versions.

  • When splitting a file, wait for the successful completion of the transform function before continuing with the processing, to ensure all parts of the file were created.

  • Specifying “text” format for a “binary” input will cause the transform to fail.

{
"output": { 
    "prefix":"model9|<USER-PREFIX>",
    "compression":"none|gzip",
    "format":"JSON|text|CSV",
    "charset":"UTF",
    "endWithNewLine":"false|true",
    "splitBySize":"< nnnnb/m/g>",
    "splitByRecords":"<n>"
  }
} 

Keyword | Description | Default
prefix | Prefix to be added to the object name: "prefix"/"object name" | "transform"
compression | Whether the output should be compressed: "gzip"|"none" | "gzip" (case insensitive)
format | The format of the output file: "JSON"|"text"|"CSV" | "JSON" (case insensitive)
charset | If the key input is not binary, the character set of the output. Currently only "UTF" is supported | "UTF"
endWithNewLine | Whether a newline is added at the end of the file, before end of file; required by some applications | false
splitBySize | Whether to split the output into several files of the requested size, for example "3000b", "1000m", "1g". The output files are numbered <file-name>.1, 2, 3 and so on. Mutually exclusive with splitByRecords. The minimum value is 1024 bytes; a smaller size cannot be specified. When specifying a number without a unit, the service uses bytes, for example "splitBySize":"1024" splits the data set into files of 1024 bytes. A record is never split in the middle, and the last part can be smaller than the specified size. Specifying "0" means no split by size is performed | 0 (no split by size is performed)
splitByRecords | Whether to split the output into several files according to the number of output records. The output files are numbered <file-name>.1, 2, 3 and so on. Mutually exclusive with splitBySize. A record is never split in the middle, and the last part can include fewer records than specified. Specifying "0" means no split by records is performed | 0 (no split by records is performed)
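
For example, a sketch of an "output" definition that produces uncompressed CSV files of roughly 1GB each; the prefix is an illustrative value only:

{
"output": {
    "prefix":"transform/sales",
    "compression":"none",
    "format":"CSV",
    "splitBySize":"1g"
  }
}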

Service parameters samples

“Hello world”

Transform the latest backup of a plain text data set, charset IBM-1047, converted to UTF and compressed.

{
"input"       : {
  "name"      : "SAMPLE.TEXT",
  "complex"   : "group-PLEX1"
  },
"output"      : {
  "format"    : "text"
  },
"source"      : {
  "url"       : "https://s3.amazonaws.com",
  "api"       : "aws-s3",
  "bucket"    : "prod-bucket",
  "user"      : "sdsdDVDCsxadA43TERVGFBSDSSDff",
  "password"  : "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
  }
}
Transforming an unloaded DB2 table

Transform the latest backup of an unloaded DB2 table, charset IBM-1047, converted to UTF and compressed, written with a specific output prefix:

{
"input"         : {
  "name"        : "DB2.UNLOADED.SEQ",
  "complex"     : "group-PLEX1"
  },
"output"        : {
  "format"      : "text",
  "prefix"      : "DBprodCustomers"
  },
"source"        : {
  "url"         : "https://s3.amazonaws.com",
  "api"         : "aws-s3",
  "bucket"      : "prod-bucket",
  "user"        : "sdsdDVDCsxadA43TERVGFBSDSSDff",
  "password"    : "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
  }
}
Transforming a VSAM file using the defaults

When transforming a VSAM file, the defaults are a text key and binary data, transforming to a JSON output file:

{
  "input"       : {
    "name"      : "SAMPLE.VSAM",
    "complex"   : "group-PLEX1"
    },
"source"        : {
  "url"         : "https://s3.amazonaws.com",
  "api"         : "aws-s3",
  "bucket"      : "prod-bucket",
  "user"        : "sdsdDVDCsxadA43TERVGFBSDSSDff",
  "password"    : "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
    }
  }
Transforming a VSAM text file to CSV

Specify text data, transforming to a CSV output file:

{
  "input"       : {
    "name"      : "SAMPLE.VSAM",
    "complex"   : "group-PLEX1",
    "vsam"      : {
      "keyBinary" : "false|true",
      "keyCharset": "<CHARSET>"
      }
    },
  "output"      : {
    "format"    : "CSV"
    },
  "source"      : {
    "url"       : "https://s3.amazonaws.com",
    "api"       : "aws-s3",
    "bucket"    : "prod-bucket",
    "user"      : "sdsdDVDCsxadA43TERVGFBSDSSDff",
    "password"  : "ddferdscsdW4REFEBA33DSffss344gbs4efe7"
    }
  }

Service response and log

The transform service is invoked as an HTTP request. It returns an HTTP status code and an HTTP response body, as described below.

In case of a WARNING or an ERROR, the HTTP response also contains the messages shown in the Log section below.

Note

Informational messages are printed only to the service log and not to the HTTP response. The service log can be viewed on the AWS console when executing the service from AWS, or in the Docker log when executing the service on-premises.

HTTP status

Code | Description
200 | OK
400 | Bad user input or unsupported data set
500 | Unexpected error

HTTP response
{
  "status"            : "OK|WARNING|ERROR",
  "outputName"        : "<OUTPUT-NAME>",
  "inputName"         : "<DSN>",
  "outputCompression" : "none|gzip",
  "outputSizeInBytes" : "<SIZE-IN-BYTES>",
  "outputFormat"      : "JSON|text|CSV"
}
Log
{
  "log": [
    "<INFO-MESSAGE>",
    "<WARNING-MESSAGE>",
    "<ERROR-MESSAGE>"
  ]
}

Output keyword | Description
status | "OK" - all is well, no log records; "WARNING" - a minor problem, e.g. specifying parameters that do not fit the input data set (the log is returned); "ERROR" - a major problem, e.g. the input data cannot be read or there is a communication problem (the log is returned)
outputName | The object name as it appears in the target object storage
inputName | The input data set name
outputCompression | The compression type as selected in the input parameters / default
outputSizeInBytes | The number of bytes of the output object
outputFormat | The format as selected in the input parameters / default
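
As a minimal sketch only, assuming the response is captured in a shell variable as in the transformService.sh script above (the variable name follows that script's convention), the outputName value can be extracted with standard USS tools such as sed:

# Extract the value of "outputName" from the JSON response held in $convOutput
outputName=$(echo "$convOutput" | sed -n 's/.*"outputName"[^"]*"\([^"]*\)".*/\1/p')
echo "Transformed object: $outputName"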

Service response and log samples

Status OK sample
{
    "status"           : "OK",
    "outputName"       : "transform/QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS!uuid=a641d670-2d05-41e7-9dd3-7815e1b2d4c4",
    "inputName"        : "QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS",
    "outputCompression": "NONE",
    "outputSizeInBytes": 97,
    "outputFormat"     : "JSON"
}
Status WARNING sample
{
    "log"       : [
        "ZM9K001I Transform service started",
        "ZM9K108W Specifying input parameter vsam is ignored for input data set with DSORG PS",
        "ZM9K002I Transform service completed successfully, output is transform/QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS!uuid=d779fbf9-da6b-495b-b6b9-de7583905f19"
    ],
    "status"           : "WARNING",
    "outputName"       : "transform/QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS!uuid=d779fbf9-da6b-495b-b6b9-de7583905f19",
    "inputName"        : "QA.SMS.MCBK.SG1QNOBK.DSERV.TXT.TMPPS",
    "outputCompression": "NONE",
    "outputSizeInBytes": 97,
    "outputFormat"     : "JSON"
}
Status ERROR sample
{
    "status": "ERROR",
    "log"   : [
        "ZM9K001I Transform service started",
        "ZM9K008E The input was not found: name QA.SMS.MCBK.DSERV.TXT.NON, archive false, entry (0)"
    ]
}

Input format support

Supported formats
  • SMS-managed data sets

  • Non-SMS managed data sets

  • Sequential and extended-sequential data sets with the following RECFM:

    • V

    • VB

    • F

    • FB

  • Non-extended VSAM KSDS data sets

Unsupported formats
  • RRDS, VRRDS, LINEAR, ESDS

  • Extended format data sets with compression or encryption

  • PDS data sets

  • RECFM not mentioned above (U, FBA…)

Output format support

Supported types
  • Text

  • JSON

  • CSV

On-premises deployment

Step-by-step deployment

This guide will instruct you how to set up the transform service of the Model 9 Cloud Data Gateway, as an extension to the Model 9 Cloud Data Manager server.

Once the service is up and running, see the Step-by-step installation for instructions on how to invoke the service from the mainframe, using JCL.

The service will transform a Model9 data set backup copy / archive into a readable file in the cloud. Once transformed, the readable file can be accessed directly or via data analytics tools.

Note

This guide describes how to implement the Model 9 Cloud Data Gateway on top of the Model 9 Cloud Data Manager installation only.

Step 1: Upload the Model 9 Cloud Data Gateway file

The installation file will be provided by your Model 9 representative according to your environment. Upload the relevant file to the server using binary mode. The available installation files are:

Environment | Installation file
x86 | model9-app-transform_<release>_build_<id>.docker
Linux on z | model9-app-transform_<release>_build_<id>-s390x.docker

<release> - represents the release number.

<id> - represents the specific release ID.

Create a work directory under $MODEL9_HOME. The directory should be able to hold at least 20GB of data:

# Change user to root
sudo su -
# If you haven’t done so already, set the model9 target installation path
export MODEL9_HOME=/data/model9
# Change the directory to $MODEL9_HOME
cd $MODEL9_HOME
mkdir $MODEL9_HOME/extract-work
Step 2: Deploy the Model9 management server’s components

Deploy the application components using the following commands:

#on Linux issue:
docker load -i $MODEL9_HOME/model9-app-transform_<release>_build_<id>.docker

#on Linux on z issue:
docker load -i $MODEL9_HOME/model9-app-transform_<release>_build_<id>-s390x.docker
Step 3: Start the service

Start the service using the following command:

docker run -d -p 8443:8443 -v $MODEL9_HOME/extract-work:/data/model9/extract-work:z \
-v $MODEL9_HOME/keys:/data/model9/keys:z \
-e "SECURE=true" \
--restart unless-stopped \
--name model9cg-v<release> model9/transform:<release>.<id>
Step 4: Verify the deployment

Make sure the service is ready to accept connections by issuing the following command:

docker logs model9cg-v<release>

Review the command output and verify that the service started successfully and is ready to accept connections.

Continue to Step-by-step installation for further instructions.

Cohesity Deployment

Cohesity Installation Guide

Step 1: Obtain a license key

Open a license request in the Model9 service portal.

The output of the z/OS command “D M=CPU” is required.

Step 2: Download the Model9 files

Create an NFS Cohesity view

The Model9 configuration and meta-data files should reside on a Cohesity view, defined as NFS only.

The name of this view must be set to model9home (case sensitive).

Mount the NFS

Mount the NFS share on a Linux machine and configure initial settings:

# Change user to root
sudo su -
# Mount the model9home cohesity view
mkdir -p /data/model9/nfs
mount cohesity.ip.addr:/model9home /data/model9/nfs/
# Set the model9 target installation path
export MODEL9_HOME=/data/model9/nfs

Upload the Cohesity zip installation file to the NFS share mount point (for example: /data/model9/nfs) in binary mode:

model9-v1.5.4_build_6fa60a89-cohesity.zip

Unzip the installation file

Use a Linux server to mount the newly created view and unzip the uploaded installation zip file:

# Change user to root
sudo su -
# Change the directory to $MODEL9_HOME
cd $MODEL9_HOME
# Unzip the server’s installation file, on Linux issue:
unzip model9-v1.5.4_build_6fa60a89-cohesity.zip
Optional: Replace the default self-signed certificate

The base installation provides a self-signed certificate for encrypting access to the Model9 user interface. See Generate a self-signed certificate on how to replace the default certificate for the WEB UI. Communications between the Model9 Server and the Model9 Agent are encrypted by default and further action should only be taken if site certificates are preferred.

Step 3: Edit the parameters file

The model9-local.yml file residing in the $MODEL9_HOME/conf/ path contains some of the default parameters. The main section is model9 (lower-case) and all parameters must be indented under the model9 title. Only spaces (not tabs) can be used to indent the hierarchies within the parameter file.

model9:
    licenseKey: null
    master_agent:
        name: "<ip_address>"
        port: <port>
    objstore:
#       resources.container.name: model9-data
        endpoint:
#           api.id: s3
            api.s3.v4signatures: true
#           no.verify.ssl: true
            url: https://cohesity:3000
            userid: <object store access key>
            password: <object store secret>
# The dataSource tag should start from first column and not under model9 tag
dataSource:
    user: postgres
    password: model9
    url: jdbc:postgresql://127.0.0.1:5432/model9

Parameter | Description | Mandatory
licenseKey | A valid Model9 license key as obtained in the prerequisites section. When using multiple keys for multiple CPCs, specify one of the keys in the server's yml file. The server-initiated actions are carried out by the agent using its own defined license. The license key specified for the server is used for displaying a message regarding the upcoming expiration of the license. | YES
master_agent | The agent running on z/OS which verifies the UI login credentials, hostname, IP address and port number. Specifying a distributed virtual IP address (Distributed VIPA) can provide high availability by allowing the use of agent groups and multiple agents. See the Administrator and User Guide for more details. | YES
resources.container.name | Container/bucket name | YES
url | URL address of local or remote object storage; both HTTP and HTTPS are supported | YES
userid | Access key to object storage | YES
password | Secret key to object storage | YES
api.id | The object storage API name. Default: s3 | NO
api.s3.v4signatures | Set this parameter to true in addition to specifying api.id: s3 | YES
no.verify.ssl | When using the HTTPS protocol, whether to skip SSL certificate verification. Using HTTPS for the object storage URL parameter enables Data-in-Flight encryption. Default: true | NO
dataSource.url | Update the PostgreSQL address to point to localhost (i.e. 127.0.0.1) | YES

Step 4: Edit the environment configuration file

The model9-stdenv.sh file residing in the $MODEL9_HOME/conf/ path contains some of the default parameters.

Update the timezone setting according to the server location.
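
For example, a minimal sketch of a timezone line in model9-stdenv.sh, assuming the standard TZ environment variable is used; the value shown is illustrative and should be adjusted to the server location:

# Illustrative timezone setting in $MODEL9_HOME/conf/model9-stdenv.sh
export TZ="America/New_York"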

Step 5: Start the Model9 management server

Go to the Apps section in the Cohesity UI, and click on the Run App button located next to the loaded application.


Grant permission to access the NFS view created for Model9 (The view name is model9home).


Click on Run App to start the application.

Generate a self-signed certificate

Note

This step is optional

The default Model9 installation provides a self-signed web certificate. This certificate is used to encrypt the web information passed between your browser and the Model9 management server.

It is strongly recommended to generate a site-defined certificate to accommodate production-level workloads. Contact your security administrator if you wish to generate such a certificate.

You can also generate your own self-signed certificate to avoid browser security notifications.

Verify the server has a valid hostname

Issue the following command:

hostname -s

Generate self-signed keys

Issue the following commands. The parameters are described below:

cd $MODEL9_HOME/keys
keytool -genkey -alias tomcat -keystore $(hostname -s)_web_self_signed_keystore.p12 -storetype pkcs12 -storepass <password> -keyalg RSA -ext SAN=dns:<server_dns>,ip:<server_ip> -dname "cn=<BackupServer>, ou=Java, o=Model9, c=IL" -validity 3650
chown root:root $(hostname -s)_web_self_signed_keystore.p12
chmod 600 $(hostname -s)_web_self_signed_keystore.p12
keytool -exportcert  -alias tomcat -keystore $(hostname -s)_web_self_signed_keystore.p12 -storetype pkcs12 -storepass <password> -file $(hostname -s)_web_self_signed.cer

Edit the following parameters:

Parameter | Description
<password> | The keystore password
<server_dns> | The server DNS name (optional)
<server_ip> | The server IP address
<BackupServer> | The certificate common name; edit according to site standards

When not specifying <server_dns>, remove the dns: section from the command.

Update your workstation

Add the exported certificate (.cer file) to your local workstation trusted CA according to site standards and security policies.

Update the server

If a site certificate or a new self-signed certificate was created, update the server configuration file:

vi $MODEL9_HOME/conf/connectorHttpsModel9.xml

Update the keystoreFile, keystorePass, keyAlias and keyPass settings to match the information provided by the security administrator, as shown in the following example:

<Connector port="443" protocol="org.apache.coyote.http11.Http11Protocol"
     maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
     keystoreFile="/model9/keys/web_self_signed_keystore.p12"
     keystoreType="PKCS12" keystorePass="changeit" keyAlias="tomcat"
     clientAuth="false" sslProtocol="TLS" />

Java strictly follows the HTTPS specification for server identity (RFC 2818, Section 3.1) and IP address verification. When using a host name, it is possible to fall back to the Common Name in the Subject DN of the server certificate instead of using the Subject Alternative Name. However, when using an IP address, there must be a Subject Alternative Name entry - IP address, not a DNS name - in the certificate.