Dockerizing CA Siteminder

How to leverage infrastructure-as-code to build a portable and lightweight CA Siteminder environment using Docker.

8 min read

In our previous post we discussed several advantages of containerization for migrating legacy identity apps to the cloud. It enables a phased migration that keeps your business running smoothly throughout the process (as opposed to the “lift and shift” approach), and it allows you to defer “set in stone” architectural decisions such as which specific cloud environment (and associated stack) to target. In this post, we're moving from theory to practice: descending from a bird’s-eye view all the way down to rolling out a fully containerized CA Siteminder environment using Docker.

Dockerization Approach

Separating installation from configuration

Siteminder has quite a significant footprint, as do other (typically JavaEE) enterprise products, and its installation assumes human intervention for the last-mile setup. To create images in a repeatable fashion despite these constraints, we automate the last-mile setup by hitting the Siteminder API to insert configuration objects and modify existing ones as necessary.

API-based configuration

CA Siteminder ships with Perl and C SDKs that allow us to provision and manage most configuration objects programmatically. For the sake of simplicity, we’ve decided to go with the Perl SDK, because it does not require an additional compilation step. It's worth noting that the Ansible-based provisioning and all the associated artifacts—such as scripts and configuration—are also containerized, and therefore isolated from the host environment.

Lightweight reference application

To achieve a quick turnaround on end-to-end testing and iron out any issues, you may want to use an application that sits as close as possible to the HTTP server stack. A full-blown enterprise application (e.g. JavaEE) can slow you down, given the sizeable footprint of the associated environment (SDK, JRE, etc.) and the need to rebuild for every minor change. Our choice is to build on OpenResty (an Nginx-based platform), which gives full visibility into, and control over, the Siteminder integration using the Lua scripting language.
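As an illustration (not part of the reference app itself), a minimal OpenResty location might surface the identity asserted upstream. The SM_USER header is the one conventionally populated by Siteminder web agents, but the exact header names depend on your agent configuration, so treat this as a sketch:

```nginx
# Hypothetical snippet: echo the Siteminder-asserted user back to the client.
location /whoami {
    content_by_lua_block {
        -- Header injected by the agent/gateway; the name depends on agent config.
        local user = ngx.req.get_headers()["SM_USER"] or "anonymous"
        ngx.say("Authenticated as: ", user)
    }
}
```

Because the logic is plain Lua inside the Nginx request cycle, iterating on the integration is a config reload rather than a full application build.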

Our CA Siteminder Sandbox

Building blocks

We're going to build the following Docker images:

  • CA Directory: For persisting the Siteminder configuration objects as well as identities and associated entities
  • Policy Server: Hosts authentication and authorization services that agents consume. Additionally, it hosts the administrative interface for configuration and management
  • Policy Store: Stores authorization-specific details (e.g. rules)
  • Access Gateway: Also known as Secure Proxy Server (SPS), intercepts requests and makes sure that only authenticated and authorized requests reach the OpenResty application
  • OpenResty Application: Acts as a secured web application for providing single sign-on (SSO) on top of Siteminder

Best Practices

The Siteminder installation and setup process is very labor intensive, with a lot of moving pieces. Moreover, any misconfiguration or invalid intermediate state of the system during this process can leave you with a non-functioning installation and little chance of recovery. It's typically saner to begin again from scratch than to try to fix a broken install.

At a high level, the way we chose to approach the creation of a Docker image for Siteminder is as follows:

Set up a sandbox environment

Roll out a sandboxed Siteminder environment, ideally as a virtual machine (VM). You can use any virtualization platform (hint: try Oracle VirtualBox, it’s free). Alternatively, you could go with a VM on any infrastructure-as-a-service (IaaS) platform, but a potential drawback of that choice is that you may keep paying for the VM even when it's not running.
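If you go the VirtualBox route, creating the sandbox VM can itself be scripted. The commands below are a sketch only; the VM name, OS type and sizing are placeholders to adjust for your environment:

```shell
# Hypothetical sandbox VM for the Siteminder reference install
VBoxManage createvm --name sm-sandbox --ostype RedHat_64 --register
VBoxManage modifyvm sm-sandbox --memory 8192 --cpus 2 --nic1 nat
VBoxManage createmedium disk --filename sm-sandbox.vdi --size 40960
```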

The main purpose of this sandbox is to obtain the Siteminder product installer properties that contain the configuration preferences to be used during a headless installation. Additionally, any relevant artifacts may be sourced from this environment—configuration descriptors, for example—and included in the docker image. Finally, it functions as a baseline during the image creation process that can be helpful for debugging when things don't work as expected.
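The CA installers are InstallAnywhere-based, and InstallAnywhere installers conventionally support recording a response file during an interactive run; flag behavior can vary by product version, so treat this as a sketch:

```shell
# On the sandbox VM: run the installer interactively while recording the
# answers into a properties file, for reuse in later headless installs.
./ca-ps-12.8-sp05-linux-x86-64.bin -r /tmp/ca-ps-installer.properties
```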

Three phases of image creation

The CA Siteminder procedure involves not only installing the product silently, but also carrying out setup steps that rely on a previously installed service being up, running, and servicing requests. This would imply implementing a custom workflow outside of Docker to launch containers that expose the network services required by the next installation step. Moreover, the image creation process has significant limitations (for instance, as far as networking is concerned) that do not apply to containers.

In order to address this, we've split the image creation process into three phases:

Bare bone image creation

One image is built per Siteminder product; no third-party services are required during the image creation process.

Let's drill down on the Dockerfile descriptor below, which showcases the image creation for the Policy Server:


FROM centos:8
#
#
# Environment variables required for this build (do NOT change)
# -------------------------------------------------------------
#
ENV PS_ZIP=ps-12.8-sp05-linux-x86-64.zip \
    ADMINUI_PRE_REQ_ZIP=adminui-pre-req-12.8-sp05-linux-x86-64.zip \
    ADMINUI_ZIP=adminui-12.8-sp05-linux-x86-64.zip \
    SDK_ZIP=smsdk-12.8-sp05-linux-x86-64.zip \
    BASE_DIR=/opt/CA/siteminder \
    INSTALL_TEMP=/tmp/sm_temp

ENV SCRIPT_DIR=${INSTALL_TEMP}/dockertools 

#
# Creation of User, Directories and Installation of OS packages
# ----------------------------------------------------------------
RUN yum install -y which unzip rng-tools java-1.8.0 ksh openldap-clients openssh-server xauth libnsl
RUN groupadd smuser && \
    useradd smuser -g smuser
RUN mkdir -p ${BASE_DIR} && \
    chmod a+xr ${BASE_DIR} && \
    chown smuser:smuser ${BASE_DIR}
RUN mkdir -p ${INSTALL_TEMP} && \
    chmod a+xr ${INSTALL_TEMP} && chown smuser:smuser ${INSTALL_TEMP} 

# Setup SSH access
# ----------------
USER smuser 

RUN mkdir /home/smuser/.ssh && \
    chmod 700 /home/smuser/.ssh && \
    echo "ssh-rsa " >> /home/smuser/.ssh/authorized_keys && \
    chmod 600 /home/smuser/.ssh/authorized_keys

USER root

RUN rm -f /run/nologin && \
    mkdir /var/run/sshd && \
    ssh-keygen -A && \
    sed -i "s/^.*PasswordAuthentication.*$/PasswordAuthentication no/" /etc/ssh/sshd_config && \
    sed -i "s/^.*X11Forwarding.*$/X11Forwarding yes/" /etc/ssh/sshd_config && \
    sed -i "s/^.*X11UseLocalhost.*$/X11UseLocalhost no/" /etc/ssh/sshd_config && \
    grep "^X11UseLocalhost" /etc/ssh/sshd_config || echo "X11UseLocalhost no" >> /etc/ssh/sshd_config

# Increase entropy
# ----------------
RUN mv /dev/random /dev/random.org && \
    ln -s /dev/urandom /dev/random

# Copy packages and scripts
# -------------------------
COPY --chown=smuser:smuser install/* ${INSTALL_TEMP}/
COPY --chown=smuser:smuser ca-ps-installer.properties ${INSTALL_TEMP}/
COPY --chown=smuser:smuser prerequisite-installer.properties ${INSTALL_TEMP}/
COPY --chown=smuser:smuser smwamui-installer.properties ${INSTALL_TEMP}/
COPY --chown=smuser:smuser sdk-installer.properties ${INSTALL_TEMP}/
COPY --chown=smuser:smuser sm.registry ${INSTALL_TEMP}/
COPY --chown=smuser:smuser container-scripts/* ${SCRIPT_DIR}/
RUN chmod +x ${SCRIPT_DIR}/*.sh

USER smuser

# Install Policy Server
# -------------------------
RUN unzip ${INSTALL_TEMP}/${PS_ZIP} -d ${INSTALL_TEMP} && \
    chmod +x ${INSTALL_TEMP}/ca-ps-12.8-sp05-linux-x86-64.bin && \
    ${INSTALL_TEMP}/ca-ps-12.8-sp05-linux-x86-64.bin -i silent -f ${INSTALL_TEMP}/ca-ps-installer.properties

RUN cp ${INSTALL_TEMP}/smreg /opt/CA/siteminder/bin && \
    cp ${INSTALL_TEMP}/sm.registry /opt/CA/siteminder/registry/sm.registry

RUN echo ". /opt/CA/siteminder/ca_ps_env.ksh" >> /home/smuser/.bash_profile

# Install Administrative Interface Prerequisites
# -----------------------------------------------
RUN unzip ${INSTALL_TEMP}/${ADMINUI_PRE_REQ_ZIP} -d ${INSTALL_TEMP} && \
    chmod +x ${INSTALL_TEMP}/adminui-pre-req-12.8-sp05-linux-x86-64.bin && \
    ${INSTALL_TEMP}/adminui-pre-req-12.8-sp05-linux-x86-64.bin -i silent -f ${INSTALL_TEMP}/prerequisite-installer.properties

# Install Administrative Interface
# -----------------------------------------------
RUN unzip ${INSTALL_TEMP}/${ADMINUI_ZIP} -d ${INSTALL_TEMP} && \
    chmod +x ${INSTALL_TEMP}/ca-adminui-12.8-sp05-linux-x86-64.bin && \
    ${INSTALL_TEMP}/ca-adminui-12.8-sp05-linux-x86-64.bin -i silent -f ${INSTALL_TEMP}/smwamui-installer.properties

# Install the SDK
# -----------------------------------------------
RUN unzip ${INSTALL_TEMP}/${SDK_ZIP} -d ${INSTALL_TEMP} && \
    chmod +x ${INSTALL_TEMP}/ca-sdk-12.8-sp05-linux-x86-64.bin && \
    ${INSTALL_TEMP}/ca-sdk-12.8-sp05-linux-x86-64.bin -i silent -f ${INSTALL_TEMP}/sdk-installer.properties

# Important note: Make sure to SSH in and run the setup_ps.sh script to register the administrative UI with the policy server.
#                 *This has to be performed only once*

# Define default command to start bash.
USER root 
CMD ["/usr/sbin/sshd", "-D"]

It mainly performs the following:

  1. Selects CentOS as the base operating system
  2. Creates the Siteminder user and directories, and installs the required operating system packages
  3. Sets up SSH access, which Ansible needs in order to carry out the setup operations as soon as the container is spun up
  4. Copies the installer packages, configuration descriptors and script files from the host into the image; the scripts will be executed by Ansible during the next phase
  5. Runs the Siteminder installers for the Policy Server, administrative UI and SDK in silent mode
  6. Finally, spawns the SSH server so that Ansible can connect in the next phase and carry out the post-installation steps

The remaining Dockerfiles follow pretty much the same pattern.
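With the Dockerfiles in place, building the images requires nothing beyond Docker itself. Using the Compose descriptor shown later in this post, a build might look like this (the service names match that descriptor):

```shell
# Build every image, or just one service (e.g. the Policy Server)
docker-compose build
docker-compose build ps
```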

Image setup

All the containers for the generated images are spun up and configured using an Ansible playbook.


---
  - name: Setup CA Directory
    hosts: dx
    tasks:
      - name: Execute CA Directory post installation script
        shell: nohup /tmp/cadir_temp/dockertools/setup_psdsa.sh > /dev/null 2>&1 &
  - name: Run Siteminder Policy Server
    hosts: ps
    tasks:
      - name: Execute the PS start command
        shell: nohup /opt/CA/siteminder/start-all > /dev/null 2>&1 &
  - name: Configure Siteminder Policy Server
    hosts: ps
    tasks:
      - name: Execute PS post-install  command
        register: out
        command: "/tmp/sm_temp/dockertools/setup_ps.sh"
      - debug:
          var: out.stdout_lines
  - name: Setup Siteminder AG
    hosts: ag
    tasks:
      - name: Execute AG configuration command
        register: out
        command: "/tmp/sp_temp/dockertools/setup_ag.sh"
      - debug:
          var: out.stdout_lines
  - name: Post installation of AG
    hosts: ps
    tasks:
      - name: Execute AG post install command
        register: out
        shell: "source /opt/CA/siteminder/ca_ps_env.ksh && perl /tmp/sm_temp/dockertools/ProxyUIPostInstall.pl"
      - debug:
          var: out.stdout_lines
  - name: Run Siteminder AdminUI
    hosts: ps
    tasks:
      - name: Launch SM AdminUI
        shell: nohup /opt/CA/siteminder/adminui/bin/standalone.sh > /dev/null 2>&1 &
  - name: Stop SPS UI
    hosts: ag
    tasks:
      - name: Kill SPS UI
        register: out
        shell: sleep 40 && /opt/CA/secure-proxy/default/proxy-engine/sps-ctl stop
        ignore_errors: True
      - debug:
          var: out.stdout_lines
  - name: Start SPS UI
    hosts: ag
    tasks:
      - name: Launch SPS UI
        register: out
        shell: /opt/CA/secure-proxy/default/proxy-engine/sps-ctl start
      - debug:
          var: out.stdout_lines

It consists mainly of setting up the CA Directory (acting as the policy and identity source), Policy Server, Admin UI, Access Gateway and Proxy UI. Both Siteminder-specific and ad-hoc scripts are executed: the former manage the lifecycle of the target system (starting and stopping it, for instance), while the latter carry out post-installation actions that would have been done through a GUI in a manual installation.
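The playbook targets host groups named dx, ps and ag, which correspond to the container hostnames defined in the Compose descriptor. A minimal inventory to back it might look like this sketch; the file name is our own, and the SSH user reflects the smuser account created in the Policy Server image (the CA Directory image may use a different account):

```ini
# inventory.ini (hypothetical): container hostnames resolve over the shared Docker network
[dx]
dx

[ps]
ps

[ag]
ag

[all:vars]
ansible_user=smuser
```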

Below is one of the Perl scripts required to configure the policy server:


use Netegrity::PolicyMgtAPI;

$adminName          = 'siteminder';
$adminPwd           = 'siteminder';

$userdir_namespace  = 'LDAP:';
$userdir_server     = 'dx:7777';  # IP address if LDAP, Data Source Name if ODBC
$odbc_queryscheme   = undef;      # only relevant for ODBC directories

#
# LDAP User Directory Settings
#
$ldap_srchroot      = 'ou=Contoso,o=psdsa,C=US';
$ldap_usrlkp_start  = 'cn=';
$ldap_usrlkp_end    = ',ou=Contoso,o=psdsa,c=US';
$ldap_usrname       = '';
$ldap_usrpwd        = '';
$ldap_require_creds = 0;

#
# Host Config settings
#
$host_conf_name     = "sps-hco";
$host_conf_desc     = "Secure Proxy Server Host";

#
# Policy server host
#
$policy_svr_host    = "ps";

#                                                                              #
# End site-specific configuration                                              #
#                                                                              #

$policymgtapi = Netegrity::PolicyMgtAPI->New();
$session = $policymgtapi->CreateSession($adminName, $adminPwd);
 
die "\nFATAL: Cannot create session. Please check admin credentials\n" 
    unless (defined $session);

clean_ps_store();

setup_ps_store();

sub setup_ps_store {

    # Create a User Directory
    print "\tCreating User Directory \'contoso-userdir\'...";

    $userdir = $session->CreateUserDir( "contoso-userdir",
                                        $userdir_namespace,
                                        $userdir_server,
                                        $odbc_queryscheme,
                                        "Contoso User Directory",
                                        $ldap_srchroot,
                                        $ldap_usrlkp_start,
                                        $ldap_usrlkp_end,
                                        $ldap_usrname,
                                        $ldap_usrpwd,
                                        0,
                                        2,
                                        10,
                                        0,
                                        $ldap_require_creds
                                        );

    if(!defined $userdir) {
        die "\nFATAL: Unable to create User Directory \'contoso-userdir\'\n";
    }

    print "done\n";


    print "\tCreating Host Configuration for SPS \'sps-hco\'...";


    $hco = $session->CreateHostConfig( $host_conf_name,
                                    $host_conf_name,
                                    1,
                                    20,
                                    2,
                                    2,
                                    60
                                    );

    if(!defined $hco) {
        die "\nFATAL: Unable to create HCO \'sps-hco\'\n";
    }

    $hco->AddServer($policy_svr_host, 44441, 44442, 44443);

    print "done\n";

    print "\tCreating Agent \'spsapacheagent\'...";
    $agent = $session->CreateAgent( "spsapacheagent",
                                    $session->GetAgentType("Web Agent"),
                                    "Secure Proxy UI Agent"
                                );
    if(!defined $agent) {
        die "\nFATAL: Unable to create Agent \'spsapacheagent\'\n";
    }

    print "done\n";
}

sub clean_ps_store() {

    $status = $session->DeleteDomain('DOMAIN-SPSADMINUI-spsapacheagent');
    if($status == -1) {
        print "Error deleting domain\n";
    }

    $status = $session->DeleteAgentConfig('spsapacheagent-settings');
    if($status == -1) {
        print "Error deleting agent config\n";
    }

    $status = $session->DeleteHostConfig( $host_conf_name );
    if($status == -1) {
        print "Error deleting host config\n";
    }

    $status = $session->DeleteAgent( "spsapacheagent");
    if($status == -1) {
        print "Error deleting agent\n";
    }

    $status = $session->DeleteUserDir("contoso-userdir");
    if($status == -1) {
        print "Error deleting contoso-userdir\n";
    }

}

More specifically, the main purpose of this script is to create an agent, a user directory and a host configuration object.       

The Ansible image setup process is executed from a container, so it does not require any specific environment or packages on the host machine.
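Kicking off the setup phase from that container might look like the following; the playbook and inventory file names are placeholders of our own:

```shell
# Bring the stack up, then run the setup playbook from inside the ansible container
docker-compose up -d
docker-compose exec ansible ansible-playbook -i inventory.ini setup-siteminder.yml
```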

Final image provisioning

Once the images have been successfully assembled, they can be tagged and pushed to the repository for consumption. They can be used as-is, or as base images to serve more tailored Siteminder deployments.
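For instance, tagging and pushing the Policy Server image might look like this; the registry host and version tag are placeholders:

```shell
# Hypothetical registry and version tag
docker tag atricore/ps registry.example.com/atricore/ps:12.8-sp05
docker push registry.example.com/atricore/ps:12.8-sp05
```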

Opting for Orchestration over Host Scripts

Host scripts are programs written in high-level programming or shell languages that live on the host. As a rule of thumb, baking images with host shell scripts is considered bad practice, because it assumes a specific environment on the host machine (a specific set of tools available on the current path), which can translate into the "it works on my machine" syndrome. It’s important to make the automation process as portable as possible, so that it can operate in server and workstation environments alike, independent of the operating system and installed packages. We chose Docker Compose as our orchestration tool: it's more than enough to create our workflow, and it doesn't require installing any additional DevOps packages.

Below is the docker compose descriptor for CA Siteminder:


version: "3"
services:
  dx:
    build:
      context: ./dockerfiles/cadir/14.1.00
    image: atricore/dx
    hostname: dx
    networks:
      - sm
    ports:
      - "7777:7777"
      - "5022:22"
    stdin_open: true
    tty: true
  ps:
    build:
      context: ./dockerfiles/siteminder/12.8/ps
    image: atricore/ps
    hostname: ps
    networks:
      - sm
    ports:
      - "2022:22"
      - "8443:8443"
    depends_on:
      - dx
    stdin_open: true
    tty: true
  ag:
    build:
      context: ./dockerfiles/siteminder/12.8/ag
    image: atricore/ag
    hostname: ag
    ports:
      - "3022:22"
      - "9090:8080"
      - "9191:8181"
    depends_on:
        - ps
    stdin_open: true
    tty: true
    networks:
      sm:
        aliases:
          - extapp
  app:
    build:
      context: ./dockerfiles/apps/resty
    image: atricore/app
    hostname: app
    networks:
      - sm
    ports:
      - "7070:80"
    depends_on:
      - ag
    stdin_open: true
    tty: true
  browser:
    build:
      context: ./dockerfiles/apps/browser
    image: atricore/browser
    hostname: browser
    networks:
      - sm
    ports:
      - "4022:22"
    depends_on:
        - app
    stdin_open: true
    tty: true
  ansible:
    build:
      context: ./dockerfiles/ansible
    image: atricore/ansible
    hostname: ansible
    networks:
      - sm
    depends_on:
        - dx
        - ps
        - ag
    stdin_open: true
    tty: true
networks:
  sm:
    name: sm-net

Ready to Dockerize Siteminder?

If you'd like to grab the whole package, feel free to check out the code on this GitHub repository.
