IBM Security Identity Manager Notes

How ISIM v6 authentication works

For TIM 6 the WAS authentication store is configured to point to an ISIM class. That class has a hardcoded (?) name for the user that it authenticates into WAS (wasadmin); for all other users it authenticates against LDAP.
ISIM relies on WAS authentication, which is why the old TIM 5.1 SSO does not work. TIM 6 uses WAS, and WAS is in turn configured to use TAI++/ETAI to authenticate with SAM.
Standard TAI++ requires TAM client-side components in each JVM so it can talk to TAM's LDAP to build credentials (groups, attributes, etc.) for the logged-in user.
ETAI does not need TAM client-side configuration (PDJrteCfg or SvrSslCfg) and can consume SAML 2.0 tokens.

How ITIM reconciliations work

Scheduled reconciliations operate via the scheduler, just like any other scheduled item in ITIM. The scheduler reads the enrole.remote_resources_recons and enrole.remote_resources_recon_queries tables; when a reconciliation is scheduled, a row is added or updated in the ITIM database (remote_services_requests table). Each ITIM node has its own scheduler which periodically (in ITIM 5.0, every 30 seconds) queries the database to see if something needs to be kicked off. Because each ITIM node has its own scheduler, there is no way to control which node starts a reconciliation; it is essentially random, so in theory one node could run more reconciliations than its peers if its scheduler happens to pick up the scheduled reconciliation record first. In practice, testing shows this does not happen and reconciliations are fairly evenly distributed across all nodes in a cluster.

After the scheduler picks up a scheduled reconciliation it creates a start-the-recon JMS message on the itim_rs queue. The itim_rs queue is a local queue and is not shared with any of the nodes in a cluster. At this point, the reconciliation has not been recorded in the audit trail. When the start-the-recon message is received, the workflow engine is notified to start the reconciliation workflow, which creates a pending process in the workflow audit trail. When the workflow engine executes the main reconciliation activity, another JMS message to run the reconciliation is placed on the itim_rs queue. The two messages on itim_rs are important when examining the timing of reconciliation processes appearing in the audit trail.

If a node is running the maximum number of concurrent reconciliations (more on that later) and the scheduler picks up another scheduled reconciliation record and puts a start-the-recon message on the itim_rs queue, no record of it will show up in the audit log until an itim_rs message listener thread becomes available (after one of the concurrent reconciliations completes) to process the start-the-recon message and create the run-the-recon message. Thus it is possible for a reconciliation to be in a pending state (the scheduler has picked up and processed the scheduled reconciliation record) yet not show up in View Requests. This does not mean the reconciliation will not be processed, just that the itim_rs message listener hasn't gotten to that JMS message yet.

The itim_rs queue is distinct from the other queues and by default has a maximum of 5 threads allocated to processing messages on it (WAS console: Resources -> JMS -> Activation specifications -> ITIMRemoteServicesActivationSpec -> Maximum concurrent endpoints). This means that by default any given ITIM node can only have at most 5 reconciliations running concurrently.

Once a run-the-recon message is pulled from itim_rs, ITIM enters a three-phase process. Work in these phases is done by 8 threads (enrole.reconciliation.threadcount) in addition to the messaging thread which receives the message to run the reconciliation.

In Phase 1 ITIM initiates a search for the accounts on the endpoint while concurrently starting an ITIM LDAP search to pull out the corresponding accounts, if any. If the endpoint search finishes before the ITIM LDAP search does, the endpoint search is blocked from returning results until the ITIM LDAP search finishes. While reading in the results from the ITIM LDAP, if more than 2000 (enrole.reconciliation.accountcachesize) are found, only the erGlobalID and eruid are stored for the remainder of the accounts instead of the entire account object, to minimize the memory footprint.
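For reference, a sketch of how these two tuning knobs might appear in the ITIM properties file, using the default values quoted in this section (the property names come from the text; verify against your own enrole.properties before relying on this):

```properties
# Worker threads per reconciliation (default 8)
enrole.reconciliation.threadcount=8
# Full account objects cached per reconciliation before falling back to
# storing only erGlobalID and eruid (default 2000)
enrole.reconciliation.accountcachesize=2000
```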

Phase 2 begins after ITIM finishes pulling back all the accounts from the ITIM LDAP. In this phase the accounts are read from the adapter on the message thread and placed onto an in-memory, fixed size queue. As they are pulled off the queue by one of the 8 worker threads, they are compared against the account found in the in-memory list. If only the erGlobalID and eruid were stored due to the accountcachesize threshold, the full account object is looked up prior to comparing it to the record pulled from the adapter. If necessary, adoption scripts are executed to find the account's owner.

  1. If after the adoption script is run it's still an orphaned account, it's added/updated in the ITIM LDAP.
  2. If it's an owned account and policy checking for the reconciliation is disabled, or if policy checking is enabled and it's compliant, then the account is added/updated in the ITIM LDAP.
  3. If it's an owned, non-compliant account and policy checking is enabled, then the account is added to one of two in-memory lists (non-compliant and disallowed accounts). These will be handled in Phase 3.

When all results are pulled from the adapter, any accounts left in the in-memory list from the ITIM LDAP (i.e., accounts ITIM knows about that the adapter did not return) are removed from LDAP. Also, for any newly compliant or deleted accounts where ITIM has a record of compliance issues in the ITIM LDAP, those accounts are added to a third in-memory list for action in Phase 3. At the end of Phase 2, the 8 worker threads are terminated and the messaging thread is returned to the pool.

In Phase 3 the reconciliation workflow continues to the next set of steps and policy violations are acted upon. For non-compliant and disallowed accounts, the actions depend on the policy enforcement setting for the service. Also, any stale compliance issues located during Phase 2 are removed. Each of the three lists is implemented as an ITIM workflow loop that takes the necessary actions on each list entry.

In a default ITIM environment it is possible for a node to have 45 threads (5 * 9, i.e. eight worker threads plus the messaging thread per reconciliation) working on reconciliation at one time if 5 reconciliations are running concurrently.

The CPU required to run a reconciliation depends on whether the account value has changed and, if it has changed, whether it is compliant. Unchanged accounts have the least overhead, followed by changed but compliant accounts. Changed and non-compliant accounts have the most overhead.

Number of concurrent reconciliations for a single ITIM node

As mentioned above, the following parameters can be adjusted to tune reconciliations:

WAS console: Resources -> JMS -> Activation specifications -> ITIMRemoteServicesActivationSpec -> Maximum concurrent endpoints

Recommended range: 5 to 20

The number of reconciliations a single ITIM node can run concurrently depends on several factors, such as the CPU available to run reconciliations, the complexity of the workflows, and the amount of memory available to hold the in-memory structures. If you have spare CPU on the ITIM node, consider increasing this value from the default of 5 to as high as 20. The CPU required to run the additional concurrent reconciliations should scale linearly with the number of reconciliations running concurrently, assuming the workflow complexities for the provisioning policies are the same.

Number of threads running per reconciliation (enrole.reconciliation.threadcount)

Recommended range: 2 to 8 (limited to the number of physical CPUs on the box or 2, whichever is larger)

The number of threads to run per reconciliation depends on the speed of the ITIM node and the throughput of the adapter. If the adapter returns results faster than ITIM can process them, then the more threads available, the faster that single reconciliation will finish. The more common case is for the ITIM node to be faster than the adapter can return accounts; in this case adding more threads will not make the reconciliation finish faster but will instead leave idle threads consuming nominal resources. The speed at which a single reconciliation thread can process accounts depends on whether the account has changed: if it has, the thread must evaluate policy, and the complexity of the policy analysis and dependent workflows determines the speed.

As a general rule, only the Windows Active Directory and RACF adapters can return accounts faster than a single ITIM node can process them. For these adapters start with 4 concurrent threads per reconciliation. For all other adapters start with 2 concurrent threads per reconciliation.

Even if the adapter returns results faster than ITIM can process them, more threads do not equate to faster processing, due to thread synchronization and locking. Testing indicates that once the number of threads goes beyond the number of physical CPUs, throughput remains mostly flat while CPU usage increases due to context switching. Decreasing the number of threads can therefore improve CPU utilization without impacting reconciliation throughput.

Account cache size

Recommended value: 500 to 2000

The account cache is a performance and scalability compromise. For performance, it would be best to store the entire account object in memory; unfortunately this does not scale for services with millions of accounts. The compromise is to use a value that will enable most services to have all of their accounts in the cache. In general the default of 2000 is a good starting point. Consider the following scenarios:

  1. 99% of your services have fewer than 2000 accounts and the remaining 1% are above it. The default account cache size of 2000 would yield good performance for the 99% while still scaling for the remaining 1%.
  2. 50% of your services have 100 accounts but the remaining 50% have 50k accounts. If you are using 5 concurrent reconciliations, the default values should be fine, as the maximum number of in-memory accounts in a worst-case scenario would be 2000 * 5 = 10,000. If you are using more than 5 concurrent reconciliations (for example, 20), you might want to decrease the number of cached accounts to 200 to avoid the worst-case scenario where all the accounts being reconciled at one time are of the 50k size, which would put 2000 * 20 = 40,000 full accounts in memory at once.
  3. 100% of your services have 2500 accounts. If you are using the default of 5 concurrent reconciliations, increasing the cache size to 2500 would improve reconciliation performance while still limiting the total number of in-memory accounts to 2500 * 5 = 12,500.
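The worst-case arithmetic in these scenarios can be sketched as follows (a minimal sketch; the function name is mine, not an ITIM API):

```python
# Worst-case count of full account objects held in memory on one node:
# each concurrent reconciliation caches at most account_cache_size full accounts.
def worst_case_cached_accounts(account_cache_size, concurrent_recons):
    return account_cache_size * concurrent_recons

print(worst_case_cached_accounts(2000, 5))    # default settings -> 10000
print(worst_case_cached_accounts(2000, 20))   # 20 concurrent recons -> 40000
print(worst_case_cached_accounts(200, 20))    # reduced cache size -> 4000
print(worst_case_cached_accounts(2500, 5))    # raised cache size -> 12500
```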

Windows Agent Configuration

Oddly enough, AgentCfg does not seem to exist in the ITIM server's bin directory, so you need to grab a copy from one of the agents running the same OS.

What agents are on a specific machine?

Find all the erurls (and if necessary the host=* also)

ldaps "(&(erurl=*)(!(erisdeleted=y)))" erurl

The results will look like this:

erglobalid=41280318610535xxxxx,ou=orgChart,erglobalid=00000000000000000000,ou=i78,dc=com
erurl=
erglobalid=41280318610535xxxxx,ou=services,erglobalid=00000000000000000000,ou=i78,dc=com
erurl=
erglobalid=41280318610535xxxxx,ou=services,erglobalid=00000000000000000000,ou=i78,dc=com
erurl=
...

  • and a few thousand more machines

Parse the output, strip out the machine name and for each machine:

C:\tivoli\Agents\Win2000Agent\bin>agentcfg -host -list
Listing agents running on node ''...
WIN2000AGENT               port 44970
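A sketch of the parse-and-loop step, assuming erurl values are DAML-style URLs like https://myhost:45970 (the sample values and the URL shape are assumptions; the real erurl values in the output above were blanked out):

```python
# Extract the machine name from each erurl value returned by the ldapsearch,
# then emit the agentcfg command to run against each machine.
from urllib.parse import urlparse

def machine_from_erurl(erurl):
    # assumes DAML-style URLs such as "https://myhost:45970"
    return urlparse(erurl).hostname

sample_erurls = ["https://hosta:45970", "https://hostb:45970", "https://hosta:45970"]
for machine in sorted({machine_from_erurl(u) for u in sample_erurls}):
    print("agentcfg -host %s -list" % machine)
```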

To 'tail' the agent log to see what is happening, you will have to run this by hand or write a script to enter the password and the break sequence (Ctrl-C):

C:\tivoli\Agents\Win2000Agent\bin>agentcfg -host -tail -ag Win2000Agent

Enter configuration key for Agent 'Win2000Agent':
BSE:03/06/06 21:22:04 NoChangePassword::Replacing attribute value '0' with new value 'FALSE'
BSE:03/06/06 21:22:04 Converting attribute name 'NoChangePassword' to 'erW2KNoChangePassword'
BSE:03/06/06 21:22:04 DontExpirePassword::Replacing attribute value '1' with new value 'TRUE'
BSE:03/06/06 21:22:04 Converting attribute name 'DontExpirePassword' to 'erW2KPasswordNeverExpires'
BSE:03/06/06 21:22:04 UserStatus::Replacing attribute value 'no' with new value '0'
BSE:03/06/06 21:22:04 Converting attribute name 'UserStatus' to 'erAccountStatus'
BSE:03/06/06 21:22:04 AllowEncryptedPassword::Replacing attribute value '0' with new value 'FALSE'

... and so on until you break out of the program ...

AGENTCFG Configuration:

agentcfg -ag XXXXprofile

CertTool and Adapter configurations

Cert Tool

certtool -ag TAMGSOAGENT

-->F Install Certificate
-->I Register Certificate
-->C. Install certificate and key from PKCS12 file

Choice: c

Enter name of PKCS12 file: DAMLSRVR.PFX
Enter password: (password is: password)
Certificate and private key successfully installed

And on the RACF side of the house we try it as a one-liner...

Oh, the LIBPATH does not have the ITIM DLLs in it, so we export a new one before we begin:

export LIBPATH=/u/itim/lib:$LIBPATH
IBMUSER:/u/itim/bin: >./certTool -agent racfAgent -pkcs12 ../data/itim45Agent.p12 -password secret

and if we are confused:

IBMUSER:/u/itim/bin: >regis -reg RACFAGENT.dat -list -protocol DAML

Registry listing for Agent 'RACFAGENT.dat'
Specific:ENROLE_VERSION       '4.0'
Specific:APPCCMD              'ITIMCMD'
Specific:APPCRECO             'ITIMRECO'
Specific:APPCOLU              'ITIMORG'
Specific:APPCDLU              'ITIMDEST'
Specific:APPCMODE             '#INTERSC'
Specific:PASSEXPIRE           'FALSE'
Main:Agent_Thread_Priorit '3'
Main:Agent_Config_Key     '7s1xDZdblah'
Main:Agent_EnableLogging  'TRUE'
Main:Agent_MaxFiles       '3'
Main:Agent_MaxFileSize    '1'
Main:Agent_Thread         'FALSE'
Main:Agent_License        'NONE'
Main:Agent_PassFilterToAg 'FALSE'
Main:Agent_ArchivePackets 'FALSE'
Main:Agent_SingleThreaded 'FALSE'
Main:Agent_AllowUserExec  'FALSE'

..... and so on and on and on

TSO Netstat output:

EZZ2587I IBMUSER2 00000089             Listen
EZZ2587I IBMUSER2 00000088             Listen

Changing TIM contents through the LDAP

Semi-general method for changing Javascript and XML objects

An unexpected consequence of this method is that the code is stored in the LDAP as clear text, so you can actually use ldapsearch to find all the entries that contain a value.

Add Javascript, Forms, XML

 function createIdentity() {
   var tf = false;
   var baseidentity = "";
   var identity = "";
   var counter = 0;
   baseidentity = subject.getProperty("sn")[0];
   tf = IdentityPolicy.userIDExists(baseidentity, true, true);
   // if the base identity is not already taken, use it
   if (!tf) {
     return baseidentity;
   }
   // otherwise append an increasing counter until a free userid is found
   while (tf) {
     counter++;
     identity = baseidentity + counter;
     tf = IdentityPolicy.userIDExists(identity, true, true);
   }
   return identity;
 }
 return createIdentity();

ldapmodify -h -D cn=root -w secret -v -f mvs.erjavascript
ldapsearch -h -D cn=root -w secret -b dc=com "erglobalid=1808539802761507332"
dn: erglobalid=1808539802761507332,ou=policies,erglobalid=00000000000000000000,ou=i78,dc=com 

  • To change the erjavascript contents with what is in a file:

erjavascript:< file:///c:/itim_java/mvs_userid.js

  • To encode the data

echo dn: erglobalid=1808539802761507332,ou=policies,erglobalid=00000000000000000000,ou=i78,dc=com>new.ldif
ldif -b erjavascript <mvs_userid.js >>new.ldif

  • To do a Form:

<tab index="0" selected="true">
<formElement name="data.uid" label="$uid" required="true">

  • The ldif file to change the form using the file would look like this:

erxml: < file:///c:/itimldif/form.xml

  • To encode the data instead, the ldif file that encodes the file contents and replaces the erxml attribute would look like this:

echo dn:erformname=ACompPerson,ou=formTemplates,ou=itim,ou=i78,dc=com>new.ldif
ldif -b erxml <ACompperson_form.xml >>new.ldif

Configuring ITIM to communicate with adapters using HTTPS connections

Problem

The documented way to configure the ITIM server to use SSL to communicate with adapters requires setting some JVM properties (at least "" and ""). This has some drawbacks (as described below).

Solution

The information necessary to establish a secured DAML connection is the following:

  1. trust store path
  2. trust store password
  3. key store path (SSL client authentication only)
  4. key store password (SSL client authentication only).

The first two are mandatory; the latter two are required only if SSL client authentication is enabled in the ITIM adapter configuration (which requires the ITIM server to authenticate itself to the adapter).

The JVM properties that define the parameters above are, respectively, the following:

Setting the properties above has the following drawbacks: the application server that runs ITIM must be restarted every time a configuration change is made, and side effects may occur on other applications deployed on the application server.

However, it is possible to define these parameters at the ITIM application level by setting the following properties in the configuration file (their meanings follow immediately from their names):

For example, the file could include the following lines:

defining /tmp/ssl/myTrustStore as the trust store and "passw0rd" as the trust store password for all the ITIM services configured to use HTTPS to communicate with their adapters.

If any one of these properties is defined, it takes precedence over the corresponding JVM property.

Furthermore, it is even possible to define the SSL parameters at the service profile level, by adding one or more of the following values to the "erproperties" multi-value attribute of the LDAP entry representing the service profile definition in the ITIM directory:

For example, the "erproperties" multi-value attribute of LDAP entry erObjectProfileName=w2kprofile,ou=serviceProfile,ou=itim,ou=tenant-id,suffix could include the following values:

defining /tmp/ssl/myKeyStore as the key store and "passw0rd" as the key store password for all the ITIM services that are instances of the w2kprofile and are configured to use HTTPS to communicate with the adapter.

If any one of the above values is added to the "erproperties" multi-value attribute of a given service profile definition LDAP entry, it takes precedence, for that service profile, over the corresponding property defined in the configuration file (and thus over the corresponding JVM property).
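The precedence order described above (service-profile erproperties, then the ITIM properties file, then the JVM property) can be sketched as follows; the dictionaries and function name are illustrative only, not an ITIM API:

```python
# Resolve an SSL parameter using the precedence order described above:
# service-profile erproperties win over the ITIM properties file,
# which wins over the JVM system property.
def resolve_ssl_param(name, profile_props, itim_props, jvm_props):
    for scope in (profile_props, itim_props, jvm_props):
        if name in scope:
            return scope[name]
    return None

jvm = {"trustStorePath": "/jvm/trust.jks"}
itim = {"trustStorePath": "/tmp/ssl/myTrustStore"}
profile = {}

print(resolve_ssl_param("trustStorePath", profile, itim, jvm))  # /tmp/ssl/myTrustStore
```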


Expiring TAM password from TIM during a password change

In the situation where TIM is setting TAM passwords, customers have asked for a way to expire the newly set password in TAM. This can be accomplished by modifying the operational workflow that defines the change password event. It is important to note that this method can be used for any target platform.

Assumptions This document will assume that the Reader has a fully configured TIM 4.5 and TAM 4.1 environment. (If the Reader has TAM 5.1, the process is the same but the attribute name and agent used are different.) It is assumed that the Reader is comfortable navigating TIM's UI and can use the Workflow Editor. It is also assumed that the Reader has a fair understanding of Relevant Data and the way the Workflow Editor operates in TIM 4.5.

Step 1 - Edit the file

This is the most annoying step, so let's get it out of the way first. Edit the file, which lives in the data directory in the ITIM home directory. Before the last line of all # signs, add the following line:

This allows the Workflow engine to use some other FESI extensions needed to accomplish this task. You will need to restart the TIM server.

Step 2 - Getting to the workflow

Once the TIM server is back up, log in as itim manager or an equivalent. Go to the Configuration tab. Go to the Entities tab. Click on TAM4Account. Click on the Define Operations button. You will now see the list of available operations. Click on changePassword and you will enter the Workflow Editor. As you can see, there isn't much to this workflow. It simply uses an extension node to call the changePassword function.

Step 3 - Adding Relevant Data

To make this process work, you must first declare some new relevant data items. Click on the Properties button; it's the one next to Save, Update, and Exit. This will bring up the Properties window. Click the Add button. This will bring up the Relevant Data Detail: Add window. Set the following values: 1) ID = owner 2) Type = Person. Click Ok. Repeat this process with the following data: 1) ID = service 2) Type = Service. Repeat this process with the following data: 1) ID = changedAccount 2) Type = Account. It is important that you only set the data elements specified for each relevant data item; if you set attributes like Entity or Context, this process will not work. When you have finished these steps, click Ok to accept the new Relevant Data changes. It is important that you get this all in one shot. Returning to this window and changing Relevant Data items can have unexpected repercussions.

Step 4 - Adding New Nodes

Back in the main workflow editor screen, you'll need to add two new nodes: a Script node and an Extension node. Connect the CHANGEPASSWORD node to the Script node. Connect the Script node to the new Extension node. Connect the new Extension node to the End node.

Step 5 - Configuring the Script Node

Open the Script node by double-clicking on it. Set the Activity ID to setupData. Enter the following piece of JavaScript into the JavaScript field.

var account = Entity.get();
var serviceDN = account.getProperty("erservice")[0];
var localService = new Service(serviceDN);
service.set(localService);
var ownerDN = account.getProperty("owner")[0];
var localOwner = new Person(ownerDN);
owner.set(localOwner);
// set the Expire Password flag on the account here (the attribute name is
// TAM-version specific, per the assumptions above), then:
changedAccount.set(account);

First this code grabs the only piece of data available to it, the Entity, which is, in this case, a TAM account. The second part determines the service the account is from and sets the Relevant Data item service accordingly. The third part determines the owner of the account and sets the Relevant Data item owner accordingly. Finally, the last part sets the Expire Password flag in the TAM account and sets the Relevant Data item changedAccount accordingly. You need to set all of these pieces of data so that the new Extension node can operate properly.

Step 6 - Configuring the New Extension Node

This node will actually go and modify the TAM account, setting the Expire Password flag. (You could use an Operation node here just as easily.) Open the new Extension node. Set the Activity ID to modifyAccount. Select the modifyAccount Extension Name. Now you must assign the appropriate Relevant Data items. Click on the first line of Input Parameters, thus selecting owner / Person. Click the Search Relevant Data button. In the Relevant Data Search window, select the owner / Person line and click Ok. Repeat this process for the service / Service entry, selecting service / Service in the Relevant Data Search window. Repeat this process for the account / Account entry, selecting changedAccount / Account in the Relevant Data Search window. When you are finished, the modifyAccount node has all three Relevant Data items assigned.

Step 7 - Finishing Up

Click Exit and click Save to save the changes you have just made. You might receive a warning about this workflow. This is merely a warning telling you that, now that you have overridden an out-of-the-box workflow with a custom one, TIM needs to circulate this change, which can take up to 10 minutes. (My experience has shown that this happens much faster than 10 minutes, but it's time for a coffee break at this point anyway.) Notice that the Type for changePassword is now user-defined.

Notice that the modifyAccount step appears.

Debugging

If your password changes are failing, open up the View Completed Requests and look at the request. If there is an ECMA Script interpreter error, you did not edit the file properly or you forgot to cycle the TIM server afterwards. If your password changes are failing and you are seeing null pointer exceptions, then you have most likely gone back into the workflow properties and changed the Relevant Data items. The best thing to do at this point is go back to the list of workflows for the TAM4Account Entity, select changePassword, and delete it. This resets the workflow back to the default settings, and you can try again.

Thanks I want to thank Brad, Sadu, and Sridhar for their help on this. Special thanks to Ken and George for making this my problem.

Gathering ITIM config and data from LDAP

Basic Data Collection

  1. Gather Organization Data.
  2. Determine Organization Structure


1.A. List the corporate structure in an 'ordered list'.

1.B. If we are using a feed from HR or any other source as input, structure this
    information using the corresponding data from the feed.

(root = ldaps "(objectclass=organization)" o description)

ou Root/Corporate                               Root Corporation
ou Root/Corporate/Accounting                    Corp Accounting
ou Root/Corporate/Accounting/Analysts           Corp Accounting Analysts
ou Root/Corporate/Accounting/Associates         Corp Accounting Associates

Same general list using FEED data values (assumption: Feed Data is LD.11.671.03293.09831 - opco.region.department.jobcode.jobcode2):

ou Root/LD                                      Landing Corporation
ou Root/LD/11                                   Landings Region 11
ou Root/LD/671                                  Landings Corporate Support Department
ou Root/LD/671/03293                            Landings Corporate Support Maintenance
ou Root/LD/671/03293/09831                      Landings Corporate Support Maintenance - Building
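The feed-to-OU mapping above can be sketched as follows (a hypothetical helper, grounded only in the sample feed value from these notes):

```python
# Split a feed value like "LD.11.671.03293.09831"
# (opco.region.department.jobcode.jobcode2) into the OU path used above.
def feed_to_ou_path(feed_value):
    return "Root/" + "/".join(feed_value.split("."))

print(feed_to_ou_path("LD.11.671.03293.09831"))  # Root/LD/11/671/03293/09831
```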

  3. Do a rough cut on Roles. A role is going to be the glue that gets a person access to the Computers/Accounts/Resources they will need. For each role we will create a provisioning policy that defines the machines and attributes that will be created.

  4. At points within the organization there are roles that sit at a particular level in the organization.

Using the list we created - insert the roles (what is a role??) at that point:

ou Root/LD                                      Landing Corporation
role Root/LD/ALL_ETAM                           Landing Corp ALL ETAM Users Role
ou Root/LD/11                                   Landings Region 11
ou Root/LD/671                                  Landings Corporate Support Department
role Root/LD/671/COPR_SUPPT_ROLE                Landings Corp Support Dept Common Resources
ou Root/LD/671/03293                            Landings Corporate Support Maintenance
role Root/LD/671/03292/MNT_ENV_ROLE             Landings Corp Maintenance MS Office User Role
ou Root/LD/671/03293/09831                      Landings Corporate Support Maintenance - Building

To change the text in the Organization Tree list (or any other attribute, actually)

This is useful for the Organization Unit: if the 'ou' is cryptic, such as ou=LD, we can use the Description "Landing Corporation" instead.

erRDNattr for Organization stuff to display the description

ldapmodify -h -D cn=root -w secret -c -f ModifyRDNattr.file

erobjectprofilename=OrganizationalUnit,ou=objectProfile,ou=itim,ou=i78,dc=com errdnattr=description
erobjectprofilename=Location,ou=objectProfile,ou=itim,ou=i78,dc=com errdnattr=description

Recycle Bin:

ldapsearch -h -D cn=root -w secret -b ou=recyclebin,ou=itim,ou=domain,dc=com "erservicename=*" erservicename
ldapsearch -v -h -D cn=root -w secret -b ou=recyclebin,ou=itim,ou=domain,dc=com "erlastmodifiedtime=2003*" eruid  erservicename erlabel erpolicyitemname uid ou erlastmodifiedtime

General Format for LDAP Search:

ldapsearch -h -D cn=root -w secret -b ou=domain,dc=com "erglobalid=00000000000000000000"

Some of these are defined in the \itim\config\ldap\er-tenant.tmpl file... (what do we do with a tmpl file??)

erglobalid's (after installation):

00000000000000000001 - errolename=ITIM Administrators
00000000000000000002 - erservicename=ITIM Service
00000000000000000003 - errolename=Administrator
00000000000000000005 - ou=policies erpolicyitemname=Default provisioning policy for ITIM

00000000000000000006 - ou=policies erpolicyitemname=Default identity policy for ITIM (Person)
00000000000000000007 - erpolicyitemname=Default identity policy for ITIM (BPPerson)
1------------------- - Service
2------------------- - ou=orgchart and ou=roles
3------------------- -
4------------------- - forms
6------------------- - users
7------------------- - organization
8------------------- - organization units (hr feed)

ITIM DB2 manipulations

db2 => list node directory
 Node Directory
 Number of entries in the directory = 3
Node 1 entry:
 Node name                      = F31LDAP
 Comment                        =
 Directory entry type           = LOCAL
 Protocol                       = TCPIP
 Hostname                       =
 Service name                   = ldapdb2
db2 => list database directory
 System Database Directory
 Number of entries in the directory = 5
Database 1 entry:
 Database alias                       = LDAPDB2B <<<<------
 Database name                        = LDAPDB2B
 Node name                            = F31LDAP
 Database release level               = a.00
 Comment                              =
 Directory entry type                 = Remote
 Catalog database partition number    = -1
Database 2 entry:
 Database alias                       = LDAPDB2
 Database name                        = LDAPDB2
 Node name                            = LDAPDB2
 Database release level               = a.00
 Comment                              =
 Directory entry type                 = Remote
 Catalog database partition number    = -1
db2 => connect to LDAPDB2B user enrole using enrole
Database Connection Information
 Database server        = DB2/6000 8.1.2
 SQL authorization ID   = ENROLE
 Local database alias   = LDAPDB2B

db2 => select 'Users: ',  count(distinct uid)  from ldapdb2.uid
1       2
------- -----------
Users:        55039
1 record(s) selected.

db2cmd /w /c db2 -fmycmdfile.txt

Command Line flags: (what are these SQL0031C File "C:\tools\DB2SQL92.BND" could not be opened. messages??)

DB2SQL92 - no idea where the exe comes from; not a part of Personal Edition.

Input file itimuserid.sql

--# db2sql92 -d ldapdb2 -a ldapdb2/secret -f \tools\itimuserid.sql -r output.file
select rtrim(substr(uid,1,45)) from uid order by uid;
select rtrim(substr(eruid,1,45)) from eruid order by eruid;

db2sql92 -d ldapdb2 -a ldapdb2/secret -f itimuserid.sql -r output.file

-d name of database
-a userid/password
-f file with commands
-r output file

db2 -tf tempsql

-t using ; for statement termination
-f use the file named
-v echo command to stdout
-c commit SQL Statements
-l log to history file
-r report output file

db2 -t +p < tempsql

+p turns off prompting (allows commands to be piped into DB2)

Sample Select statement to output 'wcrtusr' commands that I will run later:

select 'wcrtusr -rn "'||rtrim(realname)||'" @UserProfile:'||profile_name from users
      where profile_name like '%03.A%' fetch first 10 rows only

To find column names in a table

DESCRIBE select * from table_name

DB2 SQL Select example: Problem description: each 'logical' record is 3 rows - one for the base record, the second row is the tidappdata, and the third row is the company data.

Since we only want to display the next 30 'logical' records with each select (wherever I start from + 3*30, i.e., 90 rows), we compute the rownumber range we want to see. (Oh, we still need to parse through the output to build our output display.)
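The range arithmetic can be sketched as follows (the names are mine; it assumes 3 physical rows per logical record and 30 logical records per chunk, as described above):

```python
ROWS_PER_RECORD = 3   # base record + tidappdata row + company-data row
CHUNK_RECORDS = 30    # logical records to display per select

def rownumber_range(start_row):
    """Inclusive rownumber range covering the next 30 logical records."""
    return (start_row, start_row + ROWS_PER_RECORD * CHUNK_RECORDS - 1)

print(rownumber_range(1))    # (1, 90)
print(rownumber_range(91))   # (91, 180)
```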

dn: uid=bmusrqf,o=acomp
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: eUser
objectclass: ePerson
objectclass: racfBaseCommon
objectclass: racfUser
objectclass: racfGroup
objectclass: racfWorkAttrSegment
objectclass: eAccount
objectclass: eNTAccount
objectclass: AIXAdmin

select * from (select l.entrydata, rownumber() over (order by u.uid desc) as rn
   from ldap_entry l, sn s, uid u
   where (l.eid = s.eid or l.peid = s.eid)
   and s.sn like 'S%' and u.uid like 'B%' and s.eid = u.eid) as tr
   where rn between 1 and 60

So if my parms were lastname_mask, userid_mask and chunk, then the statement would sort of look like:

select * from (select l.entrydata, rownumber()
   over (order by u.uid asc) as rn
   from ldap_entry l, sn s, uid u
   where (l.eid = s.eid or l.peid = s.eid) and
   s.sn like 'lastname_mask%' and u.uid like 'userid_mask%' and s.eid = u.eid )
   as tr where rn between 1 and 60
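
The parameterized statement above can be produced by a small helper; a sketch in plain JavaScript (buildQuery is a hypothetical name, not part of ITIM or DB2):

```javascript
// Build the paging SELECT from a lastname mask, a userid mask and a chunk size.
// start defaults to 1; the range covers chunk rows of output.
function buildQuery(lastnameMask, useridMask, chunk, start) {
  var from = start || 1;
  var to = from + chunk - 1;
  return "select * from (select l.entrydata, rownumber()" +
    " over (order by u.uid asc) as rn" +
    " from ldap_entry l, sn s, uid u" +
    " where (l.eid = s.eid or l.peid = s.eid) and" +
    " s.sn like '" + lastnameMask + "%' and u.uid like '" + useridMask + "%'" +
    " and s.eid = u.eid) as tr" +
    " where rn between " + from + " and " + to;
}

var sql = buildQuery("S", "B", 60);
console.log(sql);
```

Feeding the result to db2 -t (or db2sql92 via an input file) gives one chunk; bump start by chunk for the next page.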

Installing ITIM with non gui install

This comes from the top of Worldwide support for TIM, TAM and E-SSO at IBM.

A request to have the TIM GUI install broken down into its component parts has been tabled as a future enhancement request. In the meantime, IBM WILL NOT SUPPORT any implementation of TIM that has not been installed using the GUI install.

LDAP search for TIM Data

set ldaphost=
set ldap_adminpw=secret
set ldap_search="^"(objectclass=erOrgUnitItem)^" ou erglobalid erparent"
if not %1. == . set ldap_search=%1 %2 %3 %4 %5 %6 %7 %8 %9
ldapsearch -v -h %ldaphost% -D cn=root -w %ldap_adminpw% -b dc=com %ldap_search%


ldaps "objectclass=erObjectProfile" erobjectprofilename

Now we can drill down and find out more details
ldaps "erobjectprofilename=smartPerson"
A role has one name: errolename. To find all the roles and display the role name and its parent:
ldaps "(&(errolename=*)(!(erisdeleted=y))(objectclass=erRole))" errolename erparent
A role has one parent: erparent. The parent is an OrgChart or Location item or the root.
ldaps "(&(erglobalid=654939804545083542)(!(erisdeleted=y)))" ou description
A role can be Dynamic: objectclass=erDynamicRole. Dynamic roles differ from static roles because they have JavaScript: erjavascript
ldaps "(&(errolename=*)(!(erisdeleted=y))(objectclass=erDynamicRole))" errolename erparent erjavascript
Policies are objectclasses: erProvisioningPolicy, erIdentityPolicy and erHostSelectionPolicy
A policy has one name: erpolicyitemname
ldaps "(&(erpolicyitemname=*)(!(erisdeleted=y)))" erpolicyitemname erlabel erentitlements erpolicymembership
A policy has one or more memberships: erpolicymembership memberships are 'roles'
ldaps "(&(erpolicymembership=*)(!(erisdeleted=y)))" erlabel erpolicymembership
A policy has at least one entitlement: erentitlements. Entitlements are XML (binary) and have one or more targets. A target TYPE = 0 is a 'profile' (erobjectprofilename). TYPE = 1 is a pointer to a specific service (ouch!!!). TYPE = 3 is a service provisioning profile (erobjectprofilename).
ldaps "(&(erentitlements=*)(!(erisdeleted=y)))" erpolicyitemname erparent erentitlements
A policy has one parent: erparent. parents are 'orgChart' Organizations.
ldaps "(&(erpolicyitemname=*)(!(erisdeleted=y)))" erpolicyitemname erlabel erparent
A policy has one target: erpolicytarget

Javascripts
Javascript can be in the following:
A HostSelectionPolicy:
C:\itim_java>ldaps "objectclass=erHostSelectionPolicy" erjavascript objectprofilename erpolicyitemname
A Provisioning Policy (objectclass=erProvisioningPolicy) has a name: erpolicyitemname
ldaps "objectclass=erProvisioningPolicy" erpolicyitemname
ldaps "erpolicyitemname=ALL_Employees" erpolicyitemname erpolicytarget erentitlements erpolicymembership
ldaps "(&(erjavascript=*)(!(erisdeleted=Y))(objectclass=erHostSelectionPolicy))"
To find all policy targets:
ldaps "(&(erpolicytarget=*)(!(erisdeleted=y)))" erlabel erpolicytarget erpolicyitemname
ldaps "(&(objectclass=erPersonItem)(erroles=*)(!(erisdeleted=y)))" uid erroles cn
ldaps "(&(objectclass=erPersonItem)(erroles=*6333378833686639705*)(!(erisdeleted=y)))" uid erroles cn
An Account (objectclass=erAccountItem) has:

  • an id: eruid
  • a type: objectclass=erRacfAccount
  • a parent: erparent
  • an owner: owner (which may be the owner?)
  • one service: erservice

ldaps "(&(objectclass=erAccountItem)(owner=*)(!(erisdeleted=y)))" eruid erparent owner erservice

How many orphan accounts do I have

ldaps "(&(objectclass=erAccountItem)(!(owner=*))(!(erisdeleted=y)))" eruid erparent owner erservice

An account that has a person may be:

ldaps "(&(|(uid=sysusr2)(eruid=sysusr2))(!(erisdeleted=y)))" eruid uid erservice owner erparent

More bits on Data Mining

Starting from a Reconciliation from Windows

ldaps "(erntglobalgroups=*)" eruid cn erntglobalgroups description erntscriptpath erntlocalgroups>phase_one.txt

File: phase_one.txt now has the eruid, cn and erntglobalgroups the recon found.

  1. for each user make a list of NT Groups Assigned
    1. Is this list of groups known?
      1. Y - bump the count by one
      2. N - Make this a known list with a count of 1.
    2. Look user up in hr feed
      1. Y - Found pop HR Data into a Table
      2. N - Pop user into a pseudo person
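
The group-list counting in step 1 can be sketched in plain JavaScript (the input shape and all names here are invented for illustration; the real input is the ldapsearch output in phase_one.txt):

```javascript
// Count how often each distinct list of NT groups occurs across users.
// Each object below stands in for one parsed line of the recon output.
var users = [
  { eruid: "usr1", groups: ["Domain Users", "Sales"] },
  { eruid: "usr2", groups: ["Sales", "Domain Users"] }, // same set, different order
  { eruid: "usr3", groups: ["Domain Users"] }
];

var knownLists = {}; // normalized group list -> count
for (var u = 0; u < users.length; u++) {
  // sort so the key is order-independent
  var key = users[u].groups.slice().sort().join("|");
  if (knownLists[key]) {
    knownLists[key]++;       // Y - bump the count by one
  } else {
    knownLists[key] = 1;     // N - make this a known list with a count of 1
  }
}
console.log(knownLists);
```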

Data Mining steps:

  1. Find all defined roles and the parent of the Role:
  2. Find people
  3. Find a userid:

set fuser=sysusr2
set ts="createtimestamp modifytimestamp"
CMD: ldaps "(&(|(uid=%fuser%)(eruid=%fuser%))(!(erisdeleted=y)))" eruid uid erservice owner

  4. Find orphan W2K Accounts and get enough information to create a dummy people record:

ldaps "(&(objectclass=erW2kAccount)(eruid=*)(!(owner=*))(!(erisdeleted=Y)))" cn erw2kcontainer eruid

MQ Queues for ITIM

The following is a list of Tivoli Identity Manager queues:

  • itim_wf, the workflow queue
  • itim_wf_pending, the workflow pending queue
  • itim_rs, the remote services queue
  • itim_ms, the mail services queue
  • itim_adhocSync, the custom report services queue

Sample Exchange Distribution Lists PP

Say you want ITIM to put people in Distribution Lists automatically. The Exchange lists are groups in AD, so write a provisioning policy with the following code in a mandatory attribute: (change for your naming conventions)

{/* pick real accounts, not test */
if (subject.getProperty("sn") == null || subject.getProperty("sn")[0].length < 9 || subject.getProperty("sn")[0].substring(0,9).toLowerCase() != "itim-test") {
  /* use primary accounts only */
  if ((parameters.eruid[0]).equalsIgnoreCase(subject.getProperty("uid")[0])) {
    /* focus on Active or Paid-Leave employees only */
    if (subject.getProperty("employeestatus") != null && subject.getProperty("employeestatus").length > 0 && (subject.getProperty("employeestatus")[0] == "A" || subject.getProperty("employeestatus")[0] == "P")) {
      var values = new Array();
      var i = 0;
      /* assign everybody into Employees groups by default - agrees with TDI logic in AD_Distribution_Groups_ITDI_Config.xml */
      var suffix = " Employees";
      if (subject.getProperty('parent')[0].name == "Business Partners" || subject.getProperty('parent')[0].name == "Contractors")
        suffix = " Contractors";
      if (subject.getProperty("departmentnumber") != null && subject.getProperty("departmentnumber").length > 0)
        values[i++] = "RC" + subject.getProperty("departmentnumber")[0] + suffix;
      /* splits and joins below are a poor-man's replace function */
      if (subject.getProperty("l") != null && subject.getProperty("l").length > 0)
        values[i++] = "All " + subject.getProperty("l")[0].split("\\").join("-").split("/").join("-") + suffix;
      if (subject.getProperty("managerlevel") != null && subject.getProperty("managerlevel").length > 0) {
        if (subject.getProperty("managerlevel")[0].substring(0,1) != "S")
          values[i++] = "All Managers Directors Officers";
        values[i++] = "All Management";
      }
      /* add an employees or contractors group to everybody */
      values[i++] = "All" + suffix;
      /* a generic way to do this based on OU: values[i++]="All "+subject.getProperty('parent')[0].name; */
      values[i++] = "EXCH_GROUP_" + subject.getProperty('uid')[0].substring(5,6);
      return values;
    } else return null;
  } else return null;
} else return null;}

Sample HR Feed placement rules

BASIC placement rule - must have unique OU container names in the Org Tree example: "Sales" OU

var filt = '';
var orgunit = entry.ou;
if (typeof orgunit != "undefined") {
  for (var i = 0; i < orgunit.length; ++i) {
    if (i == 0)
      filt = 'ou=' + orgunit[i];
    else
      filt = filt + ',ou=' + orgunit[i];
  }
}
return filt;

ADVANCED - Use for feeds where the DSML feed specifies all as "ou" and placement deals with OU and Location containers in Org Tree

example: "East Coast" Location with "Sales" OU

var filt = '';
var orgunit = entry.ou;
if (typeof orgunit != "undefined") {
   for (var i = 0; i < orgunit.length; ++i) {
      if (i == 0) {
         if (orgunit[i] == "East Coast" || orgunit[i] == "West Coast")
            filt = 'l=' + orgunit[i];
         else
            filt = 'ou=' + orgunit[i];
      } else
         filt = filt + ',ou=' + orgunit[i];
   }
}
return filt;
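
A self-contained version of the advanced rule (missing else branches and closing braces filled in) can be exercised with a mock entry outside ITIM to check the DN fragment it builds; entry is a plain object here and placement is a hypothetical wrapper name:

```javascript
// Stand-in for the ITIM placement-rule environment.
function placement(entry) {
  var filt = '';
  var orgunit = entry.ou;
  if (typeof orgunit != "undefined") {
    for (var i = 0; i < orgunit.length; ++i) {
      if (i == 0) {
        if (orgunit[i] == "East Coast" || orgunit[i] == "West Coast")
          filt = 'l=' + orgunit[i];   // Location container
        else
          filt = 'ou=' + orgunit[i];  // plain OU container
      } else {
        filt = filt + ',ou=' + orgunit[i];
      }
    }
  }
  return filt;
}

console.log(placement({ ou: ["East Coast", "Sales"] })); // l=East Coast,ou=Sales
console.log(placement({ ou: ["Sales"] }));               // ou=Sales
```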

Sample Provisioning Policy JavaScript for group membership management

Say you want ITIM to put people into some groups, but only into those specified by the script. You can do it with a combination of rules. First create a group attribute in the provisioning policy that takes away any groups not assigned specifically, if the groups follow some naming convention: Groups: "^(DL_|SomeOtherBeginning).*" - Regular expression, Excluded. Then create a group attribute, set to Mandatory, that gives the groups:

{var values = new Array();
var i=0;
if (subject.getProperty("bargainingunit") != null && subject.getProperty("bargainingunit").length > 0)
    values[i++] = "DL_BU"+subject.getProperty("bargainingunit")[0];
if (subject.getProperty("mailstop") != null && subject.getProperty("mailstop").length > 0)
    values[i++] = "DL_"+subject.getProperty("mailstop")[0];
if (subject.getProperty("departmentnumber") != null && subject.getProperty("departmentnumber").length > 0)
    values[i++] = "DL_RC"+subject.getProperty("departmentnumber")[0];
if (subject.getProperty("managerlevel") != null && subject.getProperty("managerlevel").length > 0)
    values[i++] = "DL_Managers";
return values;}

Or something like this:

{var values = new Array();
var A = new Array();
var str;
var i = 0;
if (subject.getProperty("bargainingunit") != null && subject.getProperty("bargainingunit").length > 0) {
    str = "DL_BU" + subject.getProperty("bargainingunit")[0];
    A = str.split("/"); str = A.join("-");  A = str.split("\\");  str = A.join("-");  A = str.split(" ");
    values[i++] = A.join("-");
}
if (subject.getProperty("mailstop") != null && subject.getProperty("mailstop").length > 0) {
    str = "DL_" + subject.getProperty("mailstop")[0];
    A = str.split("/"); str = A.join("-");  A = str.split("\\");  str = A.join("-");  A = str.split(" ");
    values[i++] = A.join("-");
}
if (subject.getProperty("departmentnumber") != null && subject.getProperty("departmentnumber").length > 0) {
    str = "DL_RC" + subject.getProperty("departmentnumber")[0];
    A = str.split("/"); str = A.join("-");  A = str.split("\\");  str = A.join("-");  A = str.split(" ");
    values[i++] = A.join("-");
}
if (subject.getProperty("managerlevel") != null && subject.getProperty("managerlevel").length > 0)
    values[i++] = "DL_Managers";
return values;}

The split/joins are a poor-man's "substitute" command. You could also create an attribute that gives users default groups, like this: Groups: "Screen saver 30 minutes::Users" - Default, Constant Value.
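
The repeated split/join chains can be wrapped in one helper; a sketch in plain JavaScript (sanitize is a hypothetical name, not an ITIM function):

```javascript
// Poor-man's replace: turn /, \ and spaces into dashes, since the old
// ITIM JavaScript engine lacked a global string replace for plain strings.
function sanitize(str) {
  return str.split("/").join("-")
            .split("\\").join("-")
            .split(" ").join("-");
}

console.log(sanitize("DL_RC 10/20"));  // DL_RC-10-20
console.log(sanitize("A\\B C"));       // A-B-C
```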

Sample adoption rule

var ps = new PersonSearch(); return ps.searchByFilter("","(uid="+subject.eruid[0]+")",2);

In Configuration -> Adoption rule.

Sample iSeries adoption rule

var text=subject.eras400text;
var search1 = new String(text);
var search2=search1.substring(43,50);
var ps = new PersonSearch();
return ps.searchByFilter("Person","(employeenumber=" + search2 + ")",2);
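
The substring(43,50) is pulling a fixed-width field out of the iSeries text attribute. With a mock value (the sample record below is invented; the offsets come from the rule above) the extraction looks like:

```javascript
// eras400text is fixed-width; positions 43..49 hold the employee number
// in this installation. Build a sample record: 43 filler chars, then the number.
var eras400text = ("JOHN DOE  ACCOUNTING" + new Array(44).join(" ")).substring(0, 43) + "1234567";

var search2 = new String(eras400text).substring(43, 50);
console.log(search2); // 1234567

// The adoption rule then searches people on this value:
var filter = "(employeenumber=" + search2 + ")";
```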

Sample Provisioning policy to join groups

A couple of simple changes were required to add users to the fsa-user group in an automated fashion. While a union join directive does exist for the TAM group memberships, an automatic provisioning policy by default cannot enforce a union, because it wants to create a new account instead. To correct this issue, the following changes were made:

TAM-SVC profile - changed the "add account" value from "Create user entry" to "Import OR Create user entry". This change allows the TAM service to update a user profile if found, and add a user profile if it isn't found.

TAM-SVC policy enforcement - Changed enforcement from "Mark" to "Correct" - This allows the TAM service to fix any non-compliant accounts.

FSA TAM provisioning policy - Changed fsa-user group membership to "Mandatory". Changing the group to mandatory allows an account to be marked as non-compliant, which will be fixed by the above change.

Since TAM will always be managed by ITIM, setting the policy enforcement to "Correct" is not an issue. All other services (RACF, FBCorp, etc.) should not be set to "Correct" at this time because they are still managed externally to ITIM.

Should TIM, TAM and Portal all share the same LDAP installation?

A: I don't see any problems with TAM and Portal sharing the same LDAP with different suffixes. However, I really think that TIM should have its own. Here are the reasons why:

  • Performance tuning for TIM vs. TAM is different. TIM is tuned for read AND write processes, while TAM is tuned mostly for reads. For example, the way we tune the entry_cache parameter (used for base-level searches) differs between TIM and TAM: TIM's recommended value is 512 MB, while TAM's recommended value fluctuates with increasing users (the formula is based on the number of TAM users * 4). There are other differences in how LDAP is tuned between TIM and TAM, such as db_cache size and attribute indexing.
  • There are cases in which an untuned TIM had performance degradation with only 100 users.
  • We would be constrained in how we can tune, because a global tune affects TIM, TAM and Portal.

Why do I receive "Data area RASDTAARA in RAS not found" while installing the AS/400 agent?

Problem: During the installation of the AS/400 agent, the user may see the message "Data area RASDTAARA in RAS not found".

Cause: The FTP setting for "current library" on the AS/400 machine is not set up correctly.

Solution: The FTP configuration settings needed for installing the agent can be set up with the following commands:

CHGFTPA Then prompt with F4

Make sure these are set:

Initial name format *LIB
Initial Directory *CURLIB

Alternatively, you can use the CRTDTAARA command with a type of *CHAR and a length of 2000 to create the missing data area.

Change password in ITIM through the command line

wsetadmin -rglobal,super:senior:admin:user:install_client:install_product:PasswordDecryptRole Root_chisos-region
echo $?
wlsusrs @UserProfile:nbUSERexisting
fifi fritz
wgetusr -p @UserProfile:nbUSERexisting fritz
Information about user login fritz:  Password: 7tJ_X9wd4VneBUZ8HB
wcryptpw -d 7tJ_X9wd4VneBUZ8HB fritz
wgetusr -p @UserProfile:nbUSERexisting@chisos fritz
Information about user login fritz:  Password: 7tJ_X9wd4VneBUZ8HB
wcryptpw -d 7tJ_X9wd4VneBUZ8HB fritz
grep fritz /etc/passwd
wcryptpw -d cdOwThCqblah
11/13/10 12:21:25 (14): no permission for `string' for operation `decrypt'
wsetadmin -r global,super:senior:admin:user:install_client:install_product Root_chisos-region
wcryptpw -d 7tJ_X9wd4VneBUdoh
An authorization error of type "insufficient authorization" occurred

Change Passwords

E:\tivoli_srv\bin\w32-ix86\bin>wpasswd -v -L -O four -P four user
Administrator Password Modification for username user succeeded:
Changed Common/NT/NetWare/UNIX password for use in user profile Y_profile
Changed password for user user on host server1.
E:\tivoli_srv\bin\w32-ix86\bin>net use \\server1\e /user:host new
The operation was cancelled by the user.
E:\tivoli_srv\bin\w32-ix86\bin>net use \\msuszko\e /user:wyoung four
The command completed successfully.

Encryption with TIM

TIM account passwords and their password history are one-way hashed using SHA-256. All other passwords that TIM manages by means of adapters are encrypted with a symmetric algorithm; AES is the only two-way encryption supported by TIM 5. The symmetric key for encrypting these passwords is stored in the TIM keystore. The default key size is 128 bits, but it can be extended to 192 or 256 bits by using a cipher migration tool (refer to the TIM documentation).

look at for further information

The ITIM keystore stores the symmetric keys used to encrypt account passwords. In the silent installation only, the keystore password defaults to the value "sunshine" in the config file property Keystore_password.

How the service profile importer works

The code where the NullPointerException is occurring is trying to do the following:

  • Retrieve a list of "group" objectclasses for the profile
  • For each group objectclass, retrieve the mappings for groupAttrs
  • Set the ObjectProfile to the output of ProfileLocator.getProfileByClass(groupCustomClassName)
  • For each groupAttribute passed in to the method, retrieve the corresponding map from the custom group profile

IAM implementation technical words of experience

Do not configure working mail-out servers on test systems if you don't want to annoy users with irrelevant test emails.

Self-care limitations

The IBM-supplied self-care app has an important limitation: after a successful response to the challenge questions, it can only reset a user's password automatically and then e-mail it to the user's e-mail address.

The options are:

  1. Leave as is. Users will have to be able to check e-mail in order to retrieve the password (plus users should already have e-mail addresses on file in TIM). Effort - 0.
  2. Open up the challenge/response portion of ITIM's own interface to unauthenticated access. I've got it working but it introduces security concerns. Effort - approx 4hr.
  3. Modify the self-care app to display the new password on the page after creating one automatically. Insecure as well. Effort - approx 1-2 days.
  4. Modify self care app to allow users to create new passwords. Effort - approx 2-3 days.

External ISIM User Registry support


before install


after install

Upgrading data by installing ISIM over an existing installation

This upgrade method relies on IBM-provided upgrade scripts. It cannot be easily automated and is more prone to bugs, because it does not remove old TIM 5.1 data, but changes it in place to work in ISIM 6.

  1. Copy the ISIM data folder to use the same directory that was previously used on the original system.
  2. Update ITIM.product file in your <itim_home>/properties/version directory:
    1. Locate <version></version>
    2. Change only the version information from <version></version> to <version></version>.
  3. Change the directory to the <itim_home>/data directory and back up any files that must be merged during the installation.
  4. Create a backup copy of the existing files, with a .bak extension:
    1. copy encryptionKey.bak.
    2. copy enRole.bak
    3. copy KMIPServer.bak
  5. Run instaix.bin
    1. In the Choose Install Directory window, you must select the existing Tivoli Identity Manager home directory that you want to upgrade. Accept the default directory, or click Choose and select the correct directory. Then, click Next.
    2. In the Upgrade IBM Security Identity Manager window, click Continue to Next to start the upgrade.
    3. In the Installation Directory of WebSphere Application Server window, confirm the WebSphere Application Server directory and click Next.
    4. In the WebSphere Profile Selection window, select the WebSphere Application Server profile name, and click Next.

Note: The cluster names you enter do not have to match the previous version of Tivoli Identity Manager, but they must exist from the configuration of WebSphere Application Server. For more information about configuring WebSphere Application Server for IBM Security Identity Manager, see Installation and configuration of WebSphere Application Server on the IBM Security Identity Manager product documentation site.

  6. In the WebSphere Application Server Data window, enter or accept the application server name. Ensure that the correct host name for the new computer is shown, and click Next.
  7. If you are running IBM Security Identity Manager in a cluster environment, verify the host name of the system on which WebSphere Application Server and IBM Security Identity Manager are to be installed. Click Next.
  8. If WebSphere administrative security and application security are turned on, in the WebSphere Application Server Administrator Credentials window, enter the WebSphere Application Server administrator user ID and password, and click Next.
  9. After deploying Tivoli Identity Manager 5.1 on WebSphere Application Server 7.0 Fix Pack 5, remove the ojdbc.jar file from ISIM_HOME/lib and replace it with ojdbc6.jar. Then, rename ojdbc6.jar to ojdbc.jar. This renaming is necessary because WebSphere Application Server 7.0 uses JDK 1.6.
  10. After you enter the DB information, click Test to test the connection.

Note: The Database User and User Password fields are disabled. When you create the database user for IBM Security Identity Manager Version 6.0, make sure that you use the same database user ID and password that you used for the previous Tivoli Identity Manager server.

  11. Click OK after you change or verify all the fields on all the tabs. The database upgrade program is started to upgrade the database schema and data. The database upgrade can take some time to complete, and progress is not displayed. After it is complete, the LDAP upgrade program is started to upgrade the LDAP schema and data. This upgrade can also take some time. You can look at the log files in the ISIM_HOME\install_logs directory to see the upgrade progress, specifically the following log files:
    1. itim_install_activity.log
    2. dbUpgrade.stdout
    3. ldapUpgrade.stdout
    4. runConfigFirstTime.stdout

ITIM Built In reports

Reports that you can generate for IBM® Tivoli® Identity Manager 5.1 out of the box.


Account Operations
A report that lists all account requests. Allows filtering by account operation, service and other fields.
Account Operations Performed by an Individual
A report that lists account requests made by a specific user. Allows filtering by the user who made the request in addition to other fields.
Approvals and Rejections
A report that lists request approval activities that were approved or rejected. Allows filtering by activity approver, service, and other fields.
Operation Report
A report that lists all operations submitted in the system. Allows filtering by requestee, operations, and the request start date and end date.
Pending Approvals
A report that lists the request activities submitted but not yet approved. Allows filtering by service, activity status, and other fields.
Rejected Report
A report that lists all rejected requests. Allows filtering by requestee and the request start date and end date.
User Report

User and Accounts

Account Report
A report that lists accounts for a business unit. Allows filtering by service and business unit.
Accounts/Access Pending Recertification Report
A report that lists all pending recertifications for access definitions and accounts. Allows filtering by account or access owner, service type, and service.
Individual Access
A report that lists user access definitions selected by individual account owner, business unit, access, or service. Allows filtering by a user that owns accesses, business unit of the user, access defined in the system, and service where access is supported.
Individual Accounts
A report that lists the accounts and their owners. Allows filtering by user.
Individual Accounts by Role
A report that lists accounts owned by users of a specific role that is a member of provisioning policy. Allows filtering by role and business unit.
Recertification Change History Report
A report that lists the recertification history of accounts and user accesses. Allows filtering by account or access owner, recertification response, start date and end date, and other fields.
Suspended Individuals


Reconciliation Statistics
A report that lists the activities that occurred during the last completed reconciliation of a service, regardless of when the report data was synchronized. Remote services provide reconciliation statistics during a reconciliation. This report contains data from the last service reconciliation. Data synchronization is not a report prerequisite. Allows filtering by service.
Services
A report that lists services currently defined in the system. Allows filtering by service type, service, owner and business unit.
Summary of Accounts on Service

Audit and Security

Access Control Information (ACIs)
A report that lists all access control items in the system. Allows filtering by access control item name, protection category, object type, scope, and business unit.
Access Report
A report that lists all access definitions in the system. Allows filtering by access type, access entitlement, service type, service, and administration owner of an access definition.
Audit Events
A report that lists all audit events. Allows filtering by audit event category, action, initiator, start date, and end date.
Dormant Accounts
A report that lists the accounts that have not been used recently. An account that does not have last access information is not considered dormant, including new accounts where the last access date is blank. These types of accounts are not displayed in a dormant report. Allows filtering by service and dormant period.
Non-Compliant Accounts
A report that lists all accounts that are noncompliant. Allows filtering by service and the reason for noncompliance.
Orphan Accounts
A report that lists all accounts that do not have an owner. Allows filtering by service and account status.
A report that lists target and memberships of the provisioning policies in the system. Allows filtering by policy name.
Policies Governing a Role
A report that lists all provisioning policies for a specified organization role. Allows filtering by role name.
Recertification Policies Report
A report that lists all recertification policies. Allows filtering by policy target type, service type, service, access type, and access.
Entitlements Granted to an Individual
A report that lists all users with the provisioning policies for which they are entitled. Allows filtering by user.
Suspended Accounts
A report that lists the accounts that are suspended. Allows filtering by user, account, service, and date.
Separation of Duty Policy Definition Report
A report that lists various separation of duty policies. Allows filtering based on policy name and business unit.
User Recertification History Report
A report that provides history of user recertification. Allows filtering based on date range, start and end date, policy name, business unit, user and recertifier.
User Recertification Policy Definition Report

The following reports are shipped with Tivoli Common Reporting for TIM 5.1 (a free-of-charge reporting module based on the open source BIRT system).
Audit and security: accesses

The audit and security report that lists all access definitions in the system.
Dormant accounts
The dormant accounts report that lists the accounts that have not been used recently.
Entitlements granted to an individual
The entitlements granted to an individual report that lists all users with the provisioning policies for which they are entitled.
Noncompliant accounts
The report that lists all noncompliant accounts.
Orphan accounts
The report that lists all accounts not having an owner.
Requests: approvals and rejections
This report shows request activities that were either approved or rejected.
Separation of duty policies reports
Various separation of duty policy reports.
Separation of duty violation report
The separation of duty violation report. This report contains the person, policy, and rules violated, approval and justification (if any), and who requested the violating change.
Services
The report that lists services currently defined in the system.
Summary of accounts on a service
The report that lists a summary of accounts on a specified service defined in the system.
Suspended accounts
The report that lists the suspended accounts.
User recertification history report
The report that lists history of user recertifications performed manually (by specific recertifiers), or automatically (due to time out action).
User recertification policy definition report