Channel: SAP Identity Management

SAP IDM 7.2 integration with Exchange


Hi All,

 

I have to integrate SAP IDM 7.2 with Exchange. Can anyone provide me with the steps to integrate IDM and Exchange? I did not find any standard document available on SCN.

 

Regards,

Dhiman Paul.


Reconciliation reports in SAP IDM


Hello All,

 

With this blog, I want to share my knowledge on how reconciliation reports can be generated for the various ABAP & Java systems that are integrated with SAP IDM.

 

In my scenario, we have 5 production clients for which identities & their access are managed from SAP IDM. As part of the audit, my auditors check the consistency of the identities and their access between IDM and the target clients. So, every quarter I have to submit a report which should provide information like:

    • Users available in target client but not in Identity Store.
    • Users available in Identity Store but not in target client.
    • Role Assignments available in target client but not in Identity Store.
    • Role Assignments available in Identity Store but not in target client.

 

So I made use of the reconciliation jobs that come with the RDS solution, which made my life easy!

 

My solution works like this: only auditors & the IDM administrator have access to the Reconciliation reports folder in the IDM UI, from which they can select the target system and generate the reconciliation report themselves. The report is then emailed to the requestor's email ID.

 

The solution in detail is given below.

 

I am on IDM 7.2 SP7 and am making use of the reconciliation job templates that come with the RDS solution for IDM.

 

You can get the RDS solution from http://service.sap.com/rds-idm, but you need an SMP login.

 

The reconciliation job template (ABAP) that comes with RDS looks like the one below. You can find the reconciliation job for AS JAVA in the AS JAVA folder of the reconciliation job template.

 

ReConTemplateABAP.png

 

 

Create a folder SAPC Reconciliation report and copy the reconciliation template for the target system into it. If your target system is ABAP, copy the AS ABAP Reconciliation report template; if Java, copy the AS JAVA Reconciliation report template.

 

Select the job, go to the Options tab and configure the repository as shown in the screenshot below.

 

Step1A.png

 

 

Do the same for all the reconciliation jobs of the respective repositories. In my case, I have 5 target systems, so I have 5 reconciliation jobs in my SAPC Reconciliation Reports folder, as shown below.

 

Step1B.png

 

Ensure that the global constant SAPC_PATH_DOWNLOAD is configured and that the necessary access/sharing permissions are granted on that path, because this is the path to which SAP IDM writes the reconciliation report. After the report is generated, it is picked up, emailed and deleted from the path. You can see how this path is configured in the passes of the reconciliation job in the screenshot below (an example of granting such permissions follows the screenshot).

 

Destination.png
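
For example, on Windows the permissions on the download path can be granted with something like the following (a hypothetical sketch; the path and the service account running the dispatcher/runtime are placeholders for your own values):

rem Hypothetical example - replace the path and account with your own values
icacls "D:\IDM\ReconReports" /grant "DOMAIN\idm_service":(OI)(CI)M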

 

Now the reconciliation jobs are ready! They can already be run from the Management Console by clicking the Run Now button on the Options tab, which generates the report and saves it to the configured path.

 

But I want to let the auditors generate the report themselves. So we create an ordered UI task and select the attributes as shown below.

 

Attributes.png


In the above screenshot, I have selected an attribute called SAPC_REQ_RECONREPORT, a custom attribute that lists the jobs available in the SAPC Reconciliation reports folder.

 

The configuration of the attribute is done as below.

 

Attributes.png

 

               

I have created a privilege Z_MX_PRIV:RECONCILEREPORT and restricted access to this privilege. Users who want access to these reconciliation reports must be assigned this privilege.

 

The screenshot for the access control tab of the UI task is given below.

 

AccessControl.png

 

Under the Reconciliation Report UI task, I have configured 3 jobs as shown below.

 

jobs.png

 

Job 1: Trigger the reconciliation job – The job selected in the IDM UI is triggered, and the report is generated and saved to the path defined in the global constant. I have used a custom script Z_SAPC_triggerjob (a slightly modified version of sapc_triggerjob). The screenshot of the destination tab of this job is given below.

 

job1.png

 

Job 2: Wait for report creation – This job makes the system sleep for 2 minutes, giving the reconciliation job time to complete. The sleep time can be adjusted based on your requirement. The screenshot of this job is given below; a script sleep60seconds is used to make the system sleep for 60 seconds so that the reconciliation job can complete its execution. A sketch of such a script follows the screenshot.

 

job2.png
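
The sleep script itself is not shown in this blog. A minimal sketch of what such a script could look like is below, assuming the usual access to Java classes from IdM JavaScript; the script name and error handling are illustrative, not the actual sleep60seconds code:

// Illustrative sketch of a sleep helper for the "wait" pass
function Z_sleepForReport(Par) {
    try {
        // Pause this pass for 60 seconds so the triggered reconciliation
        // job has time to write its report before the mail job picks it up.
        java.lang.Thread.sleep(60 * 1000);
    } catch (e) {
        // If the sleep is interrupted, just log it and continue.
        uError("Z_sleepForReport interrupted: " + e);
    }
    return Par;
}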

 

Job 3: Send report via email – This job attaches the HTML report to a mail and sends it to the requestor's email ID. The requestor must have an email ID configured in the IDM system. In this job, I have used a custom script Z_SendReportMail (a slightly modified version of sapc_sendreport).

 

job3.png

 

Ensure that all the jobs and passes are enabled and that the dispatchers are configured.

 

Now let's have a look at the UI! Following is the screenshot of the Reconciliation report task; all the available reconciliation reports are listed.

 

Ui1.png

 

Select the report that has to be executed and click on Generate report. The report will be emailed to the requestor's email ID. To make this clear, the same is given as a note, as highlighted in the screenshot.

 

ui2.png

 

Check the inbox and you should receive a mail with the report.

 

mail1.png

 

Thanks,

Krishna.

Using VDS to display ROLE Validity


As has become my habit, I’ll start this entry off with a quote:

 

"Impossible" is a word that humans use far too often.” – Seven of Nine

 

A few weeks ago there was a thread on the IDM forum that asked about using the Virtual Directory Server (VDS) to display a user, their roles and most importantly, the expiration of those roles.

 

I thought that this was not possible to accomplish and said so in the discussion thread, but it's been nagging at me, as has the idea of doing some writing about VDS, with which I've had a love/hate relationship since it was the MaXware Virtual Directory. I've often called it part of the "black magic" of IDM, but really, once you get your arms around it, it's not that bad. So I started playing around with it a bit and was able to come up with a configuration that works.

 

With that being said, we’ll stop for one more quote and then be about it…

 

"The difficult we do immediately. The impossible takes a little longer." – Motto of the US Army Corps of Engineers.

 

Before we begin, a couple of notes:

 

  • I designed this around SAP IDM 7.2 SP8 / VDS SP8
  • My IDM instance is using SQL Server 2008, so if you're running this against an IDM based on Oracle there might be a couple of changes to be made.

 

To start, load up the VDS front end, select File / New, and then choose Database from the Group column and Generic Database from the Template column. Click OK. Fill out the parameters and you should have something that looks like this:

 

database template.png


I’d like to point out a couple of things here:

  1. Note that if you're running VDS on a server with Active Directory, OpenLDAP, any other LDAP, or another VDS running on port 389, you'll wind up with errors later on. Use another port. 1389 is one of my standbys, but usually stuff in the 7000 range is good too. Even though you might be thinking it's DEV/LAB/SANDBOX and it doesn't matter, trust me, it does. You'll be going to production and things will not be working, and this is the first thing to check. Do yourself a favor and check with your network admins early so that they can assign you a good block of ports for VDS operations.
  2. Note in the database connection string that we are using mxmc_oper. In order for this configuration to work, you must use the OPER account. I could probably find a way to make it work under mxmc_rt, but it would require a ton or so of custom coding, and that’s not really my style.
  3. For whatever reason I never changed the Display name value, use whatever makes you happy.

 

After you click OK, you'll be prompted to save your configuration.  Give it a valid file name and save the file in the default location.

 

Now, take a look at the Data sources node, and dive down past Singles to the Database node (note this will be different if you changed #3 above) and click on the Database tab. Click the Get database schema button, find the idmv_link_ext view and select OK, and for testing purposes add mcValidTo is not null to the Additional filter field. When you're all done, hit Apply.

database node.png

 

So what does this do, anyway? It tells VDS to look in the specified scope (the idmv_link_ext view) within the identity store database, which holds assignment information, and to include only those entries where the mcValidTo column is populated. This is good for troubleshooting since it will show only entries with roles that have expiration dates (I seldom use them in my projects, so this was helpful to me; your mileage may vary). You should also notice a constant for the database connection (%$const.MX_DS_DATABASE%) and a JDBC driver populated (this depends on which database you are using).
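
Expressed as plain SQL against the identity store, the data source will read roughly the following (a sketch using only the two columns referenced in this walkthrough; adjust the column names if your view differs):

SELECT mcThisMSKEYVALUE, mcValidTo
FROM idmv_link_ext
WHERE mcValidTo IS NOT NULL;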


Now, pop over to the Data source attributes tab. Go down to the Default database parameters section and select Define… When the window appears, scroll down in the Available Attributes column and highlight mcThisMSKEYVALUE and click Add attribute. Make sure that the Attribute types is set to CN=.

The screen should look like this:

define parameters.png

If it does, go ahead, click OK. Congratulations, you just set up the parameters that are passed to VDS for defining the virtual LDAP DN.

 

Now go down a little bit in the VDS tree and find the Rules Node, expand the node and click on the FullAccess node and choose the Search Operation tab.

Rules node.png

This will allow us to set the access controls on the Virtual Directory. Note that you can set up a virtually unlimited number of access rules, but we're only interested in the default rules. Set it up the way I've specified here. You'll have to enter these one at a time. Make sure that you double-check your spelling! Spelling mistakes here could result in some issues later on.

FullAccess.png

If you want, feel free to set this up for the FullReadAccess rule as well. Incidentally, what we've done here is set up a listing of all the attributes that can be viewed when this configuration is accessed by an LDAP browser or other LDAP-enabled application.


Now we need to map the FullAccess rule to a user. To do this, go to User groups, expand the Authenticated node, double-click on the admin user that is there, and click on the Reset button to define a password.

password set.png

We're nearing the finish line now. It's time to construct the virtual tree, and it's pretty easy for our purposes: just use the default that you see in the example, which uses one static node (O=db) and one dynamic node (*).


When this is all done, start (or update, if the configuration is already running for some reason) your configuration using the controls at the top of the screen. Now take a look in the LDAP browser and drill down (make sure you've filled in the credentials you set up before). You should see something like this:

ldap browser.png


If you've done everything correctly, you should see some results. I threw this together fairly quickly and tested it, but there could be a couple of holes. If there are any issues, let me know and I will update the entry.


As time permits, I'll look into expanding this example to demonstrate more functionality. Some of the things I'm thinking about are:

  • Adding access rules
  • Changing the Virtual Tree structure
  • Writing back to the Identity store
  • Changing the mapping of attributes

 

I might not do all of these using this particular example, but I would like to show this functionality. If there are other things you’re interested in achieving with VDS, let me know and I’ll try to write on it.

IDM SQL Basics #2: Locating problem queries


Introduction

 

This is an unfinished version of the document that was published when I restored an older revision. And I accidentally made a document rather than a blog. Rather than hide it I've converted it to a blog-post and will leave it published while I try to finish it. Entries marked TBD (To Be Done) will be expanded later.

 

This is part 2 of the "IdM SQL Basics" blog posts.

#1: Overview of IDStore views, tables, queries & basic examples and some examples of performance impacts => Here

#2: How to locate problem queries in your implementation => You're here!

#3: Testing and improving queries SQL Server, Oracle, DB2 => Here


For the purpose of this document, a problem query is one that takes a long time to complete, not one that doesn't work. And depending on where the query is used, a long time can be measured in milliseconds, seconds, or minutes. There are a couple of built-in tools to help you locate problems and problem queries that take an excessive amount of time to finish and cause bottlenecks, as well as queries and tools that we (or I) use during development and support sessions. I'll also try to address which problem situations each method can be used to troubleshoot.

 

The tools I'll try to cover are

 

  1. Built in execution threshold logging
  2. Database queries
  3. Database tools (Activity Monitor, SQL Profiler, Enterprise Manager, Solution Manager)
  4. JMX Developer Trace

 

1. Finding problems using execution threshold logging

 

I'm starting with the most underused gem, the Execution Threshold Log. This apparently little-known feature is built into the IdM product and logs executions of queries or procedures that exceed a specified amount of time. The specified amount of time is set using the global constant MX_LOG_EXEC_THRESHOLD or in the admin UI. This constant should already exist on your 7.2 system and have a fairly large number assigned to it, but if it's not there you can create it manually. An important thing to keep in mind is that the number entered here is milliseconds, so entering 1 will cause large amounts of useless logs to be generated.

ThresholdLogMMC.png

Events logged by this function are written to the MC_EXEC_STAT table, but you should use the IDMV_EXEC_STAT view to look at the content, as it gives some columns text values that improve readability. You can also see the content of this in the admin UI. The function will log most operations done by the DSE runtime, dispatcher and UI, and most of the database procedures they use. This means that slow uSelect operations, access controls, dispatcher job allocation and a host of other problems can be found by this function.

 

IDMV_EXEC_STAT

 

This view will contain a row of information for each event that exceeded the set limit. It contains a lot of useful information and columns that you can look at. What I usually do is run:

 

select datetime,componentname,mcExecTime,jobname,taskname,mskeyvalue,mcStatement from idmv_exec_stat order by mcExecTime desc

 

This gives the most useful columns with the worst offenders listed first, like this example that I made:

blog_idmv_exec_stat_example.png

There are more columns in the view so please check it out. Here's a brief overview of the columns listed in my example:

 

  • MCEXECTIME - how many milliseconds the operation took to perform
  • COMPONENTNAME - where the threshold was exceeded (procedure, dispatcher, runtime ...)
  • JOBNAME/TASKNAME - If available they contain the name of the job or task
  • MSKEYVALUE - the user that the task was running for if available (not in jobs for instance)
  • MCSTATEMENT - Content depends on what type of event is logged. Can be a procedure name w. parameters, a query from conditional or switch tasks or something quite different

 

In any case, my simple example shows that I have a switch task statement that takes 6 to 12.5 seconds to execute. This task processes fewer than 5 entries a minute in the worst case and would be a bottleneck in any scenario. You can also see some additional messages about procedures taking time, but since I know that mcProcUserReconcile is a housekeeping task and that it processed 1000 entries in the reported time, I won't spend time on them. Unfortunately there are no built-in "this is normal" tags, but you can ask here or through support if something looks out of whack with any product procedures. Once you know the name of the task, job or attribute that's causing you problems, it should be fairly simple to locate it and try to fix the problem (see the other parts of this blog series).

 

The Admin UI allows configuration of the threshold and contains a logviewer that shows the table contents in the "Statement Execution" tab:

ThresholdLogWebUI.png

 

In summary, the execution threshold log can and should be used during development and in mass-update/load tests in the QA environments to find problem areas early. Just keep in mind that this table is not emptied automatically (yet), so if you start using it, please maintain it too; a cleanup sketch is given below. When the test is done, increase the threshold value to avoid unnecessary logging while in production mode.
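
For example, a simple cleanup on SQL Server could look like this (a sketch only; it assumes the underlying MC_EXEC_STAT table exposes the same DateTime column as the view, so verify the column name in your installation before running it):

-- Remove threshold log entries older than 30 days
DELETE FROM mc_exec_stat WHERE [datetime] < DATEADD(day, -30, GETDATE());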

 

 

2. Finding problems using database queries

 

 

This section covers items that can also be found using the database tools covered in the next section. If you've had an IdM support ticket open regarding performance and ended up talking with the team in Norway, you might already know that we often dive into SQL Developer/SQL Server Management Studio and start running queries to get to the issue quickly. The queries we run depend highly on the problem we're trying to solve, but I'll try to give some use cases for them.


The biggest downside to these is that they usually require more access than what the OPER account has, so you'll have to find the system/sa password or ask a DBA to run them for you. In a development or QA environment I believe you should have access to this, as well as DB tools, to analyze performance during and after performance tests anyway.

 

These queries are useful when you have a hang-situation and suspect queries are blocked at the database level.

 

 

SQL Server

 

To list queries that are currently being executed on SQL Server you can use the following query:

SELECT database_id,st.text, r.session_id, r.status, r.command, r.cpu_time,r.total_elapsed_time

FROM sys.dm_exec_requests r CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS st order by database_id

The SQL Server Activity Monitor will show the last completed query which will not help if you're wondering why nothing is happening at the moment.

 

To get the name for a database_id or ID of a database name in SQL Server use:

SELECT DB_ID ('MXMC_db')  -- This is case sensitive if the DB is installed with CP850_BIN2 collation

SELECT DB_NAME(10)  -- 10 being the database ID you want the name of...

To produce a listing of historical queries that have taken more than 1000ms to execute on average you can run the following:

SELECT

          last_execution_time,total_physical_reads,total_logical_reads,total_logical_writes, execution_count,total_worker_time,total_elapsed_time,

          total_elapsed_time / execution_count avg_elapsed_time,SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,((CASE statement_end_offset

    WHEN -1 THEN DATALENGTH(st.text) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS statement_text

FROM sys.dm_exec_query_stats AS qs

CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st

WHERE  total_elapsed_time / execution_count > 1000

      -- and execution_count > 10

ORDER BY total_elapsed_time / execution_count DESC;

The substring function picks out the "problem" code of procedures, and by uncommenting the line "and execution_count > X" you can also filter out statements that run very seldom. Likewise, you can find frequently executed queries that are problematic by setting execution_count very high while lowering the total_elapsed_time / execution_count requirement. Also see Lambert's version of this statement in the comments section.

This can be very helpful to run after a performance test has completed. Before the test you can clear the statistics by running DBCC FREEPROCCACHE if you want to clear out potentially stale data, but make sure to have a warm-up time to repopulate caches in the test.

 

 

Oracle

 

 

To list current active/inactive sessions for a schema (in my case my prefix is MVIEWS, so the screenshot does not match the text 100%), run:

SELECT sess.process, sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sess.blocking_session,sql.sql_text

FROM v$session sess, v$sql sql

WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER' and schemaname LIKE 'MXMC%' order by status

This will give a list of sessions for the schema and show what query they're currently struggling with if any. This shows my problematic switch task in action:

blog_OracleSessionsRunning.png

 

If you only want to list active sessions and get the full SQL Text and SQL Ids you can try this variation of the query against the same two views as before:

select sess.USERNAME, sess.sid, sqlt.sql_id, sqlt.sql_text from v$sqltext_with_newlines sqlt, V$SESSION sess

where sqlt.address = sess.sql_address and sqlt.hash_value = sess.sql_hash_value and sess.status = 'ACTIVE'

and sess.username like 'MXMC%' order by sess.sid,sqlt.piece

blog_OracleActiveSessionsFullQ.png

To list historically slow-running queries, use:

 

TBD
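
In the meantime, a rough starting point modeled on the SQL Server query above is the v$sqlarea view, which keeps cumulative statistics per cached statement (a sketch only, not the pending content for this section; ELAPSED_TIME is reported in microseconds):

SELECT sql_id, executions,
       round(elapsed_time / greatest(executions, 1) / 1000) avg_elapsed_ms,
       buffer_gets, disk_reads, sql_text
FROM v$sqlarea
WHERE executions > 0
  AND elapsed_time / greatest(executions, 1) > 1000000  -- more than 1 second on average
ORDER BY elapsed_time / greatest(executions, 1) DESC;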

 

With the SQL ID you can do fun stuff like getting the execution plan, like this (if you have DBMS_SQLTUNE available):

select DBMS_SQLTUNE.REPORT_SQL_MONITOR(type=>'HTML',report_level=>'ALL',sql_id=>'1zf0h9sw688bx') as report FROM dual;

If you put the result of the query in a file with an .html extension, you have a nicely formatted execution plan:

blog_OracleExecutionPlanHTML.png

Exactly what to read from this is something we will try to get back to in part 3.

 

 

DB2

 

 

TBD

 

 

3. Database tools

 

 

Both SQL Server and Oracle provide monitoring tools that can help troubleshoot a hang situation. In your development and QA environment it can be very beneficial to have access to these if you need to fine-tune or troubleshoot your implementation. They usually provide reports of queries that are running slow, most frequently and having the highest costs. This is by no means a full guide to using these tools, but rather a quick introduction.

 

One common issue that I've experienced with all these tools is that they do not always show the current queries being executed, but rather show the last completed query. This is often not very helpful and is one of the reasons the queries listed in section 2 are preferable at times.

 

 

SQL Server Reports

 

 

SQL Server has built-in reports that extract much of the information found by the queries mentioned above. If you right-click on the database instance and select Reports, you'll find a good list to choose from. The "Object Execution Statistics" report, for example, will give you a huge list of the executable objects the server has processed for the instance. It can also be exported to Excel.

 

(Click to open in full size)
SQLServerDBReports.png
SQLServerObjectExecStats.png

 

 

 

SQL Server Activity Monitor

 

 

The SQL Server Activity Monitor went through some drastic changes after the 2005 version, and I don't find it that helpful anymore. It will still list running sessions and allow you to see sessions/queries that are blocked or blocking each other.

 

SQLServerActivityMonitorStart.png

 

TBD

 

SQL Server Profiler

 

 

This application allows you to do full traces of the procedures and calls to the SQL Server. This can become huge amounts of data and it will affect performance. It also requires a lot of privileges on the server (look it up if you need to; I just use SA).

 

TBD

 

 

SQL Server Deadlock Trace

 

This is a very useful option if you're struggling with deadlock errors. It does not affect performance, and when a deadlock occurs the server will log the involved processes and the execution stack, which makes it somewhat easier (there's seldom anything easy in regard to deadlocks) to get to the source of the problem.

 

As SA run:

DBCC TRACEON (1222, -1)

 

This will create .trc files in your SQL Server log directory (C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG) when deadlocks occur.

 

To turn it off again, run

DBCC TRACEOFF (1222, -1)

 

It will also be disabled again when the SQL Server is restarted. The result is something like this example shows, where I've tried to illustrate how the two processes each own a lock that's blocking the other process, with a third involved just to confuse the picture. The clue lies in the "Waiter" and "Owner" IDs:

SQLServerDeadlockTrace.png

More information can be found here: Deadlock trace (MS)

 

Oracle Enterprise Manager

 

TBD

 

 

4. JMX Developer trace

 

 

The JMX Developer trace can be very helpful when troubleshooting performance problems from the WebUI/Rest interface. It logs the queries that are executed and their execution times.

 

http://service.sap.com/sap/support/notes/1907843

 

TBD

 

 

Disclaimers, excuses & comments

 

 

This was part two in a series of posts focused on database queries, troubleshooting and curiosities related to the SAP Identity Management identity stores and their contents. It is focused on tables and views as they are in the 7.2 release, but some tips apply to 7.1 implementations as well. Do note that this is not an official guide and that official docs such as help files, tickets, notes etc. are the no. 1 source. I'm writing this based on experiences from support calls, implementations and from working in the IdM development team. Some of this is also available in the training for IdM 7.2 and other places. I'm not a DBA, so if I'm doing something very backwards or completely wrong here, please let me know so I can improve the document.

 

Feel free to correct me, ask for additional examples and clarifications as I hope to keep this series updated with new information as it appears.

IDM SQL Basics #3: Testing and improving queries


Introduction

 

This is part 3 of the "IdM SQL Basics" blog posts. This is a work in progress and currently only the Microsoft SQL Server topic is more or less done, but as this is taking much more time than expected I've published it in its current state and will update it whenever possible.

 

#1: IDStore views, tables, queries & basic examples and some examples of performance impacts => Here

#2: How to locate problem queries in your implementation => Here

#3: Testing and improving queries  => This should be it

 

This doc tries to give you some ideas on how to troubleshoot long-running queries that you might have found in your IdM solution, perhaps using methods from part #2. Since most queries can be written in a couple of different ways, I'll try to show how to compare the versions and help decide which one would do your system good, and which one would be bad. I also give an approach for halting single entries temporarily in the workflow and getting values from them that can be used to replace %MSKEY%/%MX_MANAGER%/other value references in the queries.

 

SQL Server

 

SQL Server has some built-in mechanisms that are very useful, and most of them can be used without additional permissions.

 

Statistics

 

SQL Server Management Studio (SSMS) and SQL Server can give you some fairly accurate execution statistics if you enable them. The ones I use most frequently are IO & TIME, which are enabled by running the following statements in a query window.

SET STATISTICS IO ON

SET STATISTICS TIME ON

 

I usually use IO & TIME combined as this gives a fairly accurate picture of time and IO usage. When using SSMS you only need to run these commands once and they remain valid for the remainder of the session. Running this produces very little to be excited about:

ssmsEnableIOTIME.png

To demonstrate the output they give, I've used a query from Part #1. Unfortunately I've lost the virtual machine I used in parts 1 and 2, so the data and names of the users are different, but I trust you see my point despite that. The query lists all attributes for entries named USER.AQ.1000%, with the bad example first:

select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in

(select mcmskey from idmv_vallink_basic where mcAttrName = 'MSKEYVALUE' and mcValue like 'USER.AQ.1000%')

The resultset will be listed as usual:

ssmsTIMEIOresultBad.png

The exciting stuff is sitting in the "Messages" tab:

ssmsTIMEIUmessageBad.png

First is the result from the time statistics where we can see the time spent parsing the query and creating an execution plan at the top.

Followed by the number of rows returned (86).

Next we see the results of the IO statistics - a list of tables (or their indexes) that were accessed to produce the resultset and the number of read operations performed. In general you can say that the numbers to the left are good (scan being the lowest-cost index operation) and the cost grows as the numbers go to the right. If you see physical reads you might try to execute the query again to see if it just needed to populate caches, but it's usually not a good sign.

At the end you get the final time statistics, showing the total query execution time.

 

Now, let's do the same with the "better" version of the query (see part #1 again):

 

select mcMSKEY,mcAttrName,mcValue from idmv_vallink_basic where mcMSKEY in

(select mcmskey from idmv_entry_simple where mcMSKEYVALUE like 'USER.AQ.1000%')

 

The Messages tab shows us that the total execution time is a lot faster, and the IO statistics show that the number of scans/reads is significantly lower, which means it's less of a load on the database (fewer joins, merges of data from different tables, etc.).

ssmsTIMEIUmessageGood.png

So when running into a query that takes a long time to execute this can be a good way to see how much better your improved version is.

 

There's an additional statistic I've found helpful at times: PROFILE. This will give a very detailed list of what the query execution actually looks like. It's quite a task to read the results.

 

SET STATISTICS PROFILE ON

 

Using PROFILE can help locate where excessive read operations or loops occur. It's still not the definitive all-inclusive list of things the server does, but there's already enough numbers and data in this to interpret badly, so I usually stop here. If you see something like Table Access or Full Scan, it's time to panic. At that point the server has given up trying to use an index and you're looking at a row-by-row read of all your data in the affected table, which should be avoided whenever possible.

 

Profile of the bad example and the better example:

ssmsPROFILEresultBad.png

ssmsPROFILEresultGoodish.png

 

 

 

Execution Plan

 

You can do a very quick and dirty comparison of two queries by selecting them both and running them with Include Actual Execution Plan enabled. Click the "Include Actual Execution Plan" button before executing the query, and if you've selected multiple statements you will get an automatic cost comparison relative to the batch.

ssmsExecutionPlanComparison.png

This example shows that the second statement without a doubt has the lower cost of the two.

 

 

Oracle

To Be Done (Autotrace, Explain plan, Profile +++)

 

DB2

 

To Be Done

 

How to test... PVOs, conditionals, switches and uSelects

 

One of the big challenges is that the data in the IdM solution can be temporary. Pending Value Objects (PVOs), for example, are deleted when processing is completed. If the conditional statement you're having problems with acts on PVOs, it becomes very difficult to test the query.

 

A crude but very efficient "breakpoint" is to insert an action before or after the problem point where you simply remove the dispatchers from the job. Sometimes the "Sync" or "Log: <whatever>" tasks are placed perfectly for this and you can use them for something good. This will cause the processing of the task to stop, and all entries/PVOs will stop at that point and remain available for query testing. This more or less requires that you have the system to yourself, since it blocks all processing on that task.

 

Another alternative is to create a selective block by using a script that checks if the entry has a particular reference or value and blocks only that. It's actually one of the few good uses I've found for those "Sync" & "Log: <whatever operation>" actions that are in the framework. You can of course create a new action anywhere and use the same scripts. Here's a very basic example that you can extend to output the mskey & other values from the entry that you'd need to fill in data in the SQL statements:

 

1) I've defined two global constants:

halterJobGlobals.png

2) Then I modify my "Log:Create User" task, and set the retry count on the task to a number higher than the haltcount:

halterJobRetries.png

 

3) I also replace the NADA script with a "PauseForEntry script" (it is attached at the end of this blog), and add some additional values:

halterJobDestination.png

 

If the mskey or entry reference matches the global, the script stores the haltcount value in a context variable and counts it down to 0 before letting the entry continue, using uSkip(1,2) to fail the entry until then.

 

And with that I buy myself about 100*30 seconds to test/debug queries on a specific entry without affecting other developers working on the same system, and I get the MX_MANAGER and MX_TITLE values I need for my conditional query or whatnot. I can also release the entry at any time by changing the global constant to another mskey, such as 0. I would not recommend leaving this in when migrating to production environments, though.

 

 

External references

 

SQL Server

Statistics Time: http://technet.microsoft.com/en-us/library/ms190287.aspx

Statistics IO: http://technet.microsoft.com/en-us/library/ms184361(v=sql.110).aspx

Statistics Profile: http://technet.microsoft.com/en-us/library/ms188752.aspx

Showplan_ALL: http://technet.microsoft.com/en-us/library/ms187735.aspx

Showplan_TEXT: http://technet.microsoft.com/en-us/library/ms176058.aspx

Showplan XML: http://technet.microsoft.com/en-us/library/ms176058.aspx

 

 

PauseForEntry script:

// Main function: PauseForEntry
function PauseForEntry(Par){
          mskey = Par.get("MSKEY");
          parentmskey = Par.get("PARENTMSKEY");
          haltcount = Par.get("HALTCOUNT");
          haltfor = Par.get("HALTFOR");
          debugValues = Par.get("DEBUGVALUES");

          // Use comparison (==), not assignment, when checking against the global
          if (mskey == haltfor) {
                    mStr = "Debug script found entry matching global DEBUG_HALTFOR mskey:"+mskey;
          }
          else if (parentmskey == haltfor) {
                    mStr = "Debug script found PVO referencing matching global DEBUG_HALTFOR mskey:"+mskey;
          }
          else
          {
                    // This is not the entry we're looking for... Move along
                    return;
          }

          currentHaltCount = uGetContextVar("HALTCOUNT",haltcount);
          mStr += "\r\n Current haltcount:"+currentHaltCount;
          currentHaltCountI = parseInt(currentHaltCount);
          if (currentHaltCountI < 1) {
                    // we've given ourselves enough time, let the entry go
                    mStr += "\r\n Releasing entry";
                    uError(mStr);
                    return;
          }
          else
          {
                    currentHaltCountI--;
                    mStr += "\r\n Holding entry, debugvalues are:"+debugValues;
                    uError(mStr);
                    OutString = uSetContextVar("HALTCOUNT",currentHaltCountI);
                    uSkip(1,2);
          }
          return;
}

 

 

 

SAP NetWeaver Identity Management 7.2 – Mobility, REST, UI5


Dear Customers & Partners,

please be aware of TechEd session SIS203 related to SAP NetWeaver Identity Management 7.2 – Mobility, REST, UI5. It will be presented at TechEd Las Vegas, Amsterdam and Bangalore.

 

In this session, you will learn about the tools that can be used for creating your own SAP NetWeaver Identity Management-based applications for mobile devices. The main focus will be on the new OData REST API that SAP NetWeaver Identity Management provides. You will also receive an overview of the product user interface for HTML5, which uses the new API. You will see two demos: one of the HTML5 user interface and one of a mobile web application consuming the new OData REST API.

 

The sessions are going to be held as follows:

  • Wednesday, October 23, 2013 - 02:00 PM-03:00 PM – Room: Palazzo D, TechEd Las Vegas. Speaker Jannis Rondorf
  • Thursday, October 24, 2013 - 10:30 AM-11:30 AM – Room: Murano 3201A, TechEd Las Vegas. Speaker Jannis Rondorf
  • Thursday, November 07, 2013 - 06:00 PM-07:00 PM – Room: L7, TechEd Amsterdam. Speaker: Hristo Borisov
  • Friday, December 13, 2013 - 03:45 PM-04:45 PM – Room: L9, TechEd Bangalore. Speaker: Abhishek Vijay Nayak

 

See you there!

Oracle Database: Turn off Recycle Bin


Hi community,


I just stumbled over the Oracle 10/11 recycle bin functionality. It seems that when creating databases with the Oracle DBCA (Database Configuration Assistant), this feature is turned on by default. With this functionality, it is possible to recover dropped tables from the recycle bin.

 

This feature is not useful when using the database for IdM, as with every initial load / update job, temporary tables are dropped. So in the end, the database will keep growing with each of these jobs running.

 

In order to turn it off, use the commands described here:

http://www.nabeelkhan.com/turn-off-recyclebin-in-oracle-10g11g/
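
The commands typically boil down to something like the following (a sketch; the exact parameter handling and required privileges differ slightly between Oracle 10g and 11g, so verify against the linked article and your version):

-- Disable the recycle bin for the whole instance (takes effect after restart)
ALTER SYSTEM SET recyclebin = OFF SCOPE = SPFILE;

-- ...or only for the current session
ALTER SESSION SET recyclebin = OFF;

-- Purge whatever has already accumulated in the recycle bin
PURGE DBA_RECYCLEBIN;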

 

Regards,

Jannis

TechEd 2013 - It's all about the future


As I write this I am getting ready to pack up and leave beautiful Las Vegas, Nevada and TechEd 2013. I had a chance to meet some great people face to face like Tero Virta and Arndt Spitzer, meet up with some old friends like Jared Kobe and Courtney Mead, and also learn from some great SAP people, Kare Indroy, Roy Tronstad, Jannis Rondorf, and Kristian Lehment, some of whom gave fantastic presentations on IDM. We saw a lot of new functionality coming down the road, and the future of IDM is looking pretty bright!

 

Following Mark Finnern's mandate to share what we've learned here at TechEd, I'd like to give you some of my impressions.

 

I think the two primary focuses regarding SAP in general and IDM in particular were HANA and UI5/Mobile.

 

HANA is something new to the IDM arena, and while it's going to start slow, expect it to pick up speed quickly over the next couple of years. Right now we are pretty much limited to provisioning and de-provisioning users, but in coming service packs expect the ability to modify users and increased integration, and eventually it looks like the end goal is to run IDM on HANA. It's going to take some time, but we'll get there.

 

The other big thing is definitely UI5 and all the ways you can use it. Extending IDM through the use of UI5 and RESTful technologies is clearly the future. I saw at least two different approaches to bringing IDM to mobile through apps and HTML. I see some exciting things ahead.

 

So a lot of what I saw was about IDM's future and it's more than just HANA and Mobile. Part of moving ahead is leaving things behind and this will be happening to IDM as well.  But I think it's something we IDM professionals can live with.

 

The End of the MMC Console!

 

It's going to be a long process, but what I saw in a couple of presentations tells me that the MMC console is on its way out over the next couple of Service Packs. From what I saw and got to work with in a hands-on session, we're going to see some new and interesting ways of working with IDM. As it gets closer and more screenshots / documentation become available, I'll be posting more. What we do know now is that it is Eclipse based, will run on Windows and pretty much any UNIX/LINUX, and will feature a very neat graphical workflow design tool. We'll also see more management content move from the admin console to the Web UI.

 

If you have the chance to attend TechEd in your particular corner of the World, I strongly suggest you attend so you can meet up, see, and more importantly use all this cool new stuff!

 

To quote the old 80's song, "The future's so bright, I gotta wear shades!"


Using Transparent Authentication with SAP IDM VDS


The Virtual Directory Server (VDS) is an interesting tool; however, like all IT tools, it's just some interesting-sounding technology if we do not have the ability to put it into a use case for the business. With VDS, one of the things that is frequently requested is some sort of authentication to the data represented in the Virtual Directory configuration.

 

Of course this is not mandatory for every use case, but frequently it is required, and the easiest thing to do is to leverage another directory server that contains users and passwords. Fortunately VDS provides something called Transparent Authentication which can be used in this case. I recently had an opportunity to work with this functionality on a project and thought I would share some notes.

 

One of the really cool things is that you can use this with virtually any kind of VDS implementation where VDS is being used in its LDAP representation mode (I'm not sure if this will work for Web Services as well), so I'm not going to spend too much time talking about the greater configuration; I'll focus more on what needs to happen for authentication to take place. If you'd like to play around with a configuration, take a look at this post, where I walk through a virtualization of the IDM Identity Store.

 

The first step is to set up the authentication. Note that we set the authentication class to "MxTransparentAuthentication" by selecting the "Change..." button. Next create two parameters as seen below, TRANSPARENT_DS and DEFAULTGROUP. These should be set to the IP/defined hostname of the server to be used for authentication and the default VDS group that will be used, respectively.

VDS-authenticated class.png

Now we need to configure the pass-through part of the authentication so that the user credentials will be passed. This is done by using the asterisk ( * ) character, which is used throughout VDS as a wildcard in the configuration.

VDS-Authenticated Node.png

Once this is done, start the configuration (or restart it if it's already started) and test it out as I've done below using Apache Directory Explorer (or the LDAP-based application / browser of your choice).

VDS-Authentication Client.png

 

There you go, you're ready to access your configuration based on authenticating on an external Directory Service!

Logging GRC Web Service Calls in VDS 7.2 SP8


Troubleshooting the IdM-GRC interface is so much easier when you have the full SOAP messages sent back and forth between the Virtual Directory Server and the GRC web services. In earlier VDS support packages, these SOAP messages used to be written to the standard log file "operation.trc" when the log level was DEBUG or higher. In an SP8 environment I have recently worked in, however, I couldn't find the SOAP messages in the logs any more, no matter how high I raised the log level. This article will explain how I got the SOAP message content back into the logs.

 

Customizing the log4j configuration

 

Open the file externals\log4j.properties underneath your VDS home directory in a text editor. Append the following lines:

# Increase log level to ALL for category in Apache Axis
# where full SOAP request/responses messages are logged
log4j.logger.org.apache.axis.transport.http.HTTPSender=ALL

 

I found that any customizations to this file will be lost when I uninstall and then re-install VDS. To avoid having to re-customize the file after an update to the next support package, I prefer to copy the customized file to a customer-specific location, and then have the VDS runtime load that custom copy via a JVM option. Here's how to do that:

 

Copying your custom log4j.properties to an update-safe location

 

Create a customer-specific directory underneath your VDS home directory. I choose the subdirectory name "custom" here. My example commands assume that your VDS home is at C:\usr\sap\IdM\Virtual Directory Server. Adapt that to your environment as required. Open a Windows command prompt, and enter the following commands:

set VDS_HOME="C:\usr\sap\IdM\Virtual Directory Server"
mkdir %VDS_HOME%\custom

 

Copy the externals\log4j.properties to which you have applied your customizations into the custom directory:

copy %VDS_HOME%\externals\log4j.properties %VDS_HOME%\custom

 

Loading the custom log4j configuration file location via Java system property

 

Start the VDS console, and choose Tools -> Options from the menu bar:

 

unity-2d-shell_764.png

 

Select the Classpath tab, and add the following JVM option into the "Additional Java options" field:

-Dlog4j.configuration=file:///C:/usr/sap/IdM/Virtual%20Directory%20Server/custom/log4j.properties

 

unity-2d-shell_766.png

Press OK to save your changes.

 

Note that you must specify the location of the log4j configuration file as a URL; anything else won't work.  Again, you may need to adapt that URL according to the VDS home directory of your specific environment. For more information regarding MS Windows file URIs, refer to this MSDN article.

 

In order for the changed JVM options to be picked up by any already installed VDS operating system service, I found that I need to re-install the service. Just saving the JVM options and updating the service configuration via "Update" button, or re-starting the service didn't work for me. The service would still use the externals\log4j.properties file in this case. See my comments above regarding potential update problems with that approach. So my recommendation is that you re-install at least the VDS GRC service before proceeding to test in the next section.

 

Testing SOAP message logging

 

This test assumes that you have a GRC configuration running in VDS, either as a service or as an application. We'll use the simple GRC web service GRAC_SELECT_APPL_WS, which returns the list of all applications (aka connectors) configured in GRC. Its invocation from the integrated LDAP browser in the VDS console is pretty straightforward, so it's ideal for a simple test.

 

From the VDS console menu bar, select Tools -> Browse LDAP...

 

unity-2d-shell_784.png

 

The LDAP browser dialog will be displayed. Press the "Wizard..." button and enter the required connection data to connect to your VDS GRC service via LDAP:

 

unity-2d-shell_785.png

 

You should be able to specify the exact same values as shown below, except for the port number. Make sure that matches the LDAP port number of your running VDS GRC configuration. If you have this configuration open while doing the test, as is the case in my screen shot below, you can see the port number in the status line at the bottom of the VDS console.

Host name: localhost
Port number: <your configuration's LDAP port number>
Starting point: ou=selectapplications,o=GRC
Return attributes:
Search type: ONE
Filter: (objectclass=*)

unity-2d-shell_786.png

Save your data by pressing OK in the LDAP URL dialog.

 

From the "Credentials" drop-down list of the LDAP browser, select "User + Password":

 

unity-2d-shell_787.png

 

Enter the internal VDS user name and password to connect to your GRC configuration's LDAP server. In a default installation, that's grcuser/grcuser

 

unity-2d-shell_788.png

 

Press the "Search" button. A list of GRC applications, whose CN typically corresponds to a logical ABAP system name, should be displayed in the LDAP browser.

 

unity-2d-shell_789.png

unity-2d-shell_790.png

 

The "Search" operation has invoked the ABAP web service GRAC_SELECT_APPL_WS on the GRC ABAP server. We can now verify that the SOAP request and response message of this web service call have been recorded into the log files. From the VDS console menu bar, select View -> Select and view a log...

 

unity-2d-shell_792.png

In the "Open File" dialog, you should now see a new trace file "external.0.trc", in addition to the well-known operation.trc and operation.log files. Open the "external.0.trc" file to display it directly in the VDS console's log viewer.

 

unity-2d-shell_795.png

 

The SOAP request message is contained in the log message starting with "POST /sap/bc/srt...". The corresponding SOAP response message is contained in the log message starting with "<soap:Envelope>", as highlighted below:

 

unity-2d-shell_796.png

 

As I don't find the integrated log viewer of the VDS console to be very usable, I'll show the full log message text of both lines in a text editor instead of in the VDS console directly:

 

SOAP Request XML

unity-2d-shell_797.png

A word of caution

 

Note that, as in previous support packages, the information contained here is sensitive because it includes the HTTP basic authentication header in full. That's why I manually grayed out the respective line in the SOAP request XML screen shot above. This HTTP header exposes the ABAP user name and password which VDS uses to connect to the GRC systems more or less in clear text (BASE64 encoding only). For that reason, I recommend carefully restricting access to these log files at the OS level and, if possible, applying the whole logging configuration demonstrated here in development environments only, not in production.

 

SOAP Response XML

 

unity-2d-shell_798.png

 

As you can see, the full information we require for detailed problem analysis is there. Contrary to previous SPs, the information is now no longer in the operation.trc file, but in a separate log file (external.0.trc). But I guess that doesn't hurt.

 

Hope that helps!

 

Lambert

IDM / Oracle / CRW - A Tall Order


Hello all. My name is Brandon and I would like to review a recent problem I had to solve in my IDM consulting life. To give you a bit of environmental background, the company I am working with currently runs IDM 7.2 on an Oracle database.

 

The task at hand was to write a report in CRW that would pull all the identities that had membership in a given role or roles. The user running the report would select the role or roles they wanted and also needed to be able to filter by active or terminated employees, or both. I felt the Oracle SQL query I wrote would be worthy of a blog entry, as the data had to be pulled from three different places, so maybe this will help someone out someday. Here's what I came up with:

 

select distinct display_name as Employee_Name, HR_Status, PS_DEPT_DESC as Department_Description, mcdisplayname as Manager from
(select mskey, display_name, HR_Status, PS_SETID, PS_DEPT_DESC, MCDISPLAYNAME,
RANK() OVER (PARTITION BY mskey
ORDER BY ps_setid DESC) "RANK" from
(select * from (select mskey, mcattrname, mcvalue from MXMC_OPER.IDMV_VALLINK_EXT)
pivot (max(mcvalue) for mcattrname in ('DISPLAYNAME' as Display_Name,'MX_FS_PS_STATUS' as HR_Status,'MX_MANAGER' as Manager,'Z_PS_DEPARTMENT' as Dept_ID))
join MXMC_RT.Z_PS_DEPARTMENTS D on D.PS_DEPT = Dept_ID
join MXMC_OPER.MXI_ENTRY M on M.MCMSKEY = Manager
where mskey in
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'DISPLAYNAME' and mskey in
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MX_FS_PS_STATUS' and mcsearchvalue in ('ACTIVE','TERMINATED') and mskey in
(select distinct mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MXREF_MX_ROLE' and mcvalue in (261336,27,261369))))))
where rank = 1
order by display_name

 

I would love to show you some output from that query but, due to client confidentiality, I can't. However, I can make a quick table below to give you an idea as to what comes up:

Employee_Name     HR_Status     Department_Description     Manager
Brandon Bollin    Active        IT - Security              Matt Pollicove
Clark Kent        Active        Legal                      Jerry Siegel
Lex Luthor        Terminated    Finance                    Joe Shuster

 

These three employees are members of at least one of the roles with MSKEYs 261336, 27 or 261369, and both active and terminated employees are being displayed. In order to get this simple output, I decided to start with the most basic parts of this query: pulling identities from IDM and filtering on role membership and active / terminated status. Lines 10 through 15 were where this whole thing started.

 

select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'DISPLAYNAME' and mskey in
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MX_FS_PS_STATUS' and mcsearchvalue in ('ACTIVE','TERMINATED') and mskey in
(select distinct mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MXREF_MX_ROLE' and mcvalue in (261336,27,261369)))

 

This query will simply pull the identities I wanted in the given roles and HR status or statuses. When importing this into CRW, the ACTIVE / TERMINATED portion and 261336,27,261369 would be substituted with variables set via user input. For anyone who's ever worked with IDM, this part of the query should be pretty straightforward. At this point, I noted the number of results I received. For the sake of example, let's say I got 500. This way I knew that, going forward, that would be my baseline; anything I compounded onto this query would have to return the same number of results for it to be correct. (A quick way of recording that baseline is sketched below.)
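
For example, to record the baseline number you can wrap the query above in a count (a quick sketch using the same role MSKEYs and statuses as placeholders):

SELECT COUNT(*) FROM
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'DISPLAYNAME' and mskey in
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MX_FS_PS_STATUS' and mcsearchvalue in ('ACTIVE','TERMINATED') and mskey in
(select distinct mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MXREF_MX_ROLE' and mcvalue in (261336,27,261369))))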

 

So now I have to add the identities' HR statuses, department descriptions and managers' names. To do this, I added lines 5 through 9 onto the query:

 

select * from (select mskey, mcattrname, mcvalue from MXMC_OPER.IDMV_VALLINK_EXT)
pivot (max(mcvalue) for mcattrname in ('DISPLAYNAME' as Display_Name,'MX_FS_PS_STATUS' as HR_Status,'MX_MANAGER' as Manager,'Z_PS_DEPARTMENT' as Dept_ID))
join MXMC_RT.Z_PS_DEPARTMENTS D on D.PS_DEPT = Dept_ID
join MXMC_OPER.MXI_ENTRY M on M.MCMSKEY = Manager
where mskey in
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'DISPLAYNAME' and mskey in
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MX_FS_PS_STATUS' and mcsearchvalue in ('ACTIVE','TERMINATED') and mskey in
(select distinct mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MXREF_MX_ROLE' and mcvalue in (261336,27,261369))))

 

The select on line 1 above pulls all the pertinent information from IDMV_VALLINK_EXT. However, since the Identity Center database isn't normalized like most databases, I now need to turn it on its side, if you will. That's where the PIVOT command comes in: the attribute values selected on line 1 become columns instead of rows. While this isn't an exactly accurate description of what PIVOT does, it's good enough here. Google PIVOT sometime if you want to know more, as this command does a number of other things depending on what operator you use (MAX, MIN, AVG, etc.). Since the department and manager are saved on a user's identity as a PS_DEPT number and an MSKEY respectively, I then add the two joins to pull out the human-friendly names of the departments and managers. At this point, I'm thinking I'm done. My required four columns for the final report should be there, so I run the query. I get back something like 591 results. What?! That's way more than my 500 baseline.

 

Upon some further digging, I discovered that, at some point in this company's past, they reorganized their departments when they were acquired by another company. All the old department names were still in the database, so joining PS_DEPT onto the Z_PS_DEPARTMENTS table was pulling all department names for any user that existed during this transition. Users that had two, or in some cases more, departments were getting more than one line in the results. Now what? How do I pull only the current information?

 

Thankfully, there was a column in Z_PS_DEPARTMENTS that allowed for this: PS_SETID. Once I was told which SETID was current, it turned out that the current set was always the first one listed in my results. Now all I needed to do was filter by RANK. That's where the rest of the query comes into play:

 

select distinct display_name as Employee_Name, HR_Status, PS_DEPT_DESC as Department_Description, mcdisplayname as Manager from
(select mskey, display_name, HR_Status, PS_SETID, PS_DEPT_DESC, MCDISPLAYNAME,
RANK() OVER (PARTITION BY mskey
ORDER BY ps_setid DESC) "RANK" from
(select * from (select mskey, mcattrname, mcvalue from MXMC_OPER.IDMV_VALLINK_EXT)
pivot (max(mcvalue) for mcattrname in ('DISPLAYNAME' as Display_Name,'MX_FS_PS_STATUS' as HR_Status,'MX_MANAGER' as Manager,'Z_PS_DEPARTMENT' as Dept_ID))
join MXMC_RT.Z_PS_DEPARTMENTS D on D.PS_DEPT = Dept_ID
join MXMC_OPER.MXI_ENTRY M on M.MCMSKEY = Manager
where mskey in
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'DISPLAYNAME' and mskey in
(select mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MX_FS_PS_STATUS' and mcsearchvalue in ('ACTIVE','TERMINATED') and mskey in
(select distinct mskey from MXMC_OPER.IDMV_VALLINK_EXT
where mcattrname = 'MXREF_MX_ROLE' and mcvalue in (261336,27,261369))))))
where rank = 1
order by display_name

 

The RANK command adds a "RANK" column onto my results. Any result with multiple PS_SETIDs would now be ranked 1, 2, or 3. I only want that first result, so at the bottom of the query I only return rank = 1. The first line in this query selects only the columns I wanted displayed, and the last line orders them by their display name. When I ran this, I got back 500 results. Victory!! Plugging it into CRW was a breeze after that. I'll still never forget the e-mail I got back from my project contact when I sent him this query for approval before plugging it into CRW. It was essentially a two-sentence-long "WOW!"

 

In writing this blog entry, I know I cut some corners on explanations, so if you wish any clarification, comment below and I can answer back. Again, as stated above, maybe this will help someone out when trying to pull IDM data someday. Additionally, if you have any suggestions on how I could have gotten the same result back more simply, PLEASE let me know! I am a huge lover of constructive criticism.

 

Thanks all and feel free to rate the quality of this post as well. It's my first blog entry so hopefully I got it right in the eyes of the readers. Tschüss!

IDM 7.2 - 'Error loading java virtual machine DLL \jvm.dll' -193


Hello there,

This error is usually observed when you change your Java location.

First try uninstalling and reinstalling the dispatcher, and recreate the dispatcher scripts.

 

If this doesn't help, open Windows Services, find the MXDispatcher service and start it from there.

 

 

This workaround starts the dispatcher and leaves it in a running state. If you then start/stop it from the Identity Center it operates normally.

 

Maybe someone could comment on the reason behind this?

A little synchronization can pay big dividends ! End to end password synchronization


According to Gartner, 20% to 50% of tickets opened with the Helpdesk concern password problems. The estimated cost of treatment is 15 euros (META Group resp. Gartner IT Key Metrics Data, summary report, 2011).


This blog, co-authored with Benjamin GOURDON, is based on the experiences of several customers who are looking for an alternative to single sign-on.


The purpose of this blog is to present an easy-to-implement solution designed to greatly reduce the number of calls to the support desk. This method, proven by many customers, provides an ROI of less than 3 months.

 

 

Password management challenges

You want to synchronize the passwords of all your users throughout your IT landscape with a simple solution which is able to provision SAP and non-SAP applications. SAP NetWeaver Identity Management can easily help you with this.

Illustration of the password synchronization challenge

 

 

Of course it is possible to simply reset SAP passwords directly from the SAP IDM web interface, but this blog deals with password synchronization from the user's Windows session (Active Directory domain password) to SAP and non-SAP applications. This means that we have to be able to detect the change of password in your Active Directory and then provision it as a productive password to the applications (the user is not prompted to change it at the first connection).

So this blog suggests an easy-to-implement solution for complete password synchronization using SAP NetWeaver Identity Management in 4 steps:

  1. Catch the change of password at Active Directory’s side
  2. Send this password to your Identity Center
  3. Handle the new password to write it in the IdStore
  4. Trigger the provisioning of the password to applications

 

 

Illustration of the 4 steps methodology

 

 

Step 1 & 2: Catching the change of user password and sending it to your Identity Center

SAP NetWeaver Identity Management provides a tool which allows you to catch the change of password in Active Directory: the Password Hook. It has to be installed on each domain controller to ensure complete monitoring of password change flows.

For installation prerequisites and the procedure, please have a look at the SAP documentation here:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0bce8df-02e8-2d10-11a3-b61f8df4e4b1

Find below an example of Password Hook configuration (not enabled on this screen):

 

 

When the Password Hook detects a password change, it automatically executes a job configured in and exported as a .dse file from your Identity Center. For the job definition you can do the following:

 

 

The new password is then sent to your SAP Netweaver Identity Management database in a temporary table. I recommend the following columns for the table:

  1. Automatic Incremental key
  2. User unique ID
  3. User encrypted password
  4. Date of modification
  5. Name of the controller which sent the password

 

To encrypt the password before sending it to IDM, you should use the same keys.ini file as your Identity Center (DES3 encryption).

The first column has a very important role in our workflow: it lets us know which passwords have been treated by the runtime, by comparison with another internal counter kept at repository level.

The last column is additional information about which domain controller sent the password. It can be useful if you want to know whether a domain controller or its Password Hook is down.
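
As a reference, a minimal sketch of such a table in SQL Server syntax could look like the one below. The table and column names are purely illustrative (they are not part of the standard IdM schema), so adapt them to your own naming conventions; the numbering in the comments matches the list above.

CREATE TABLE Z_PWDSYNC_QUEUE (
    increment_key      INT IDENTITY(1,1) PRIMARY KEY,  -- 1. automatic incremental key
    user_id            NVARCHAR(128) NOT NULL,         -- 2. user unique ID
    enc_password       NVARCHAR(512) NOT NULL,         -- 3. password encrypted with the keys.ini key
    mod_date           DATETIME DEFAULT GETDATE(),     -- 4. date of modification
    domain_controller  NVARCHAR(128)                   -- 5. name of the controller which sent the password
)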

 

 

Step 3: Handle new password in Identity Center

To pick up the new passwords entered in the temporary table, you should use an Event Agent which keeps watching the automatic incremental key of the table, as described below:

 

 

 

So when a new line appears, the Event Agent executes a defined job composed of 4 passes that execute the following actions (including scripts):

  • Update MX_PERSON customized attribute like Z_ENCRYPTED_PASSWORD_FROM_AD

 

  • Write log into a log table

 

  • Delete the entry in the temporary table

 

  • Increment a counter on the repository (as variable) to ensure that temporary table’s key = repository variable

 

 

Maintaining a counter at repository level makes it possible to ensure that there is no lag between entries treated in the temporary table and entries treated in the Identity Store. In case of problems (Event Agent down), it makes it easy to identify how many passwords are waiting for treatment.
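
For example, a quick check of the backlog could be done with a query along these lines, reusing the hypothetical Z_PWDSYNC_QUEUE table from the sketch above; comparing last_key_received with the counter variable maintained on the repository immediately shows whether the Event Agent is keeping up.

SELECT COUNT(*)           AS waiting_passwords,
       MIN(mod_date)      AS oldest_change,
       MAX(increment_key) AS last_key_received
FROM Z_PWDSYNC_QUEUE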

 

 

Step 4: New password provisioning to applications

If SAP IDM is designed to provision the Active Directory password, the Password Hook will be triggered automatically every time a password is modified. So the challenge is to synchronize to the other applications only the good password, the one coming from the user himself. Here is a simple and pragmatic method to address this issue.

Because it is not possible to make any check or run any workflow on the Password Hook side, you have to make your checks in your Identity Center before definitively writing the new value of the password.

So triggers (add and modify event tasks) on the attribute Z_ENCRYPTED_PASSWORD_FROM_AD are needed to start a customized workflow based on the following values:

  • Attribute Z_ENCRYPTED_PASSWORD_FROM_AD, corresponding to the new encrypted password received from AD and Password Hook
  • Attribute MX_ENCRYPTED_PASSWORD, corresponding to the current encrypted password in IdStore
  • Global constant Z_DEFAULT_PASSWORD, corresponding to the default value defined for reset password by administrators (example of value : Welcome123)

Illustration of the workflow used to check password in IdStore



Remark: instead of using a customized attribute in the productive IdStore, another option is to add a Staging Area IdStore to execute these checks.

 

The corresponding configuration in the Identity Center is shown below (including the queries for the two conditional tasks):
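
Since the screenshots are not reproduced here, the following is only a hedged sketch of what one of the two conditional-task queries could look like, assuming the attributes are readable through the IDMV_VALLINK_EXT view used elsewhere in this document. It returns a value greater than 0 (TRUE) when the password received from AD differs from the password currently stored in the IdStore for the entry being processed; a similar query comparing Z_ENCRYPTED_PASSWORD_FROM_AD with the encrypted value of the Z_DEFAULT_PASSWORD constant would implement the second check.

SELECT COUNT(mskey)
FROM IDMV_VALLINK_EXT
WHERE mskey = %MSKEY%
  AND mcattrname = 'Z_ENCRYPTED_PASSWORD_FROM_AD'
  AND mcvalue NOT IN (SELECT mcvalue FROM IDMV_VALLINK_EXT
                      WHERE mskey = %MSKEY%
                        AND mcattrname = 'MX_ENCRYPTED_PASSWORD')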

 

 

You are now able to ensure the coherence and synchronization of your users' passwords.

Think about the gain in end-user wellbeing and enjoy the time you will save in the future on user-support workload!


Enterprise Group Management using SAP Netweaver Identity Management 7.2


In this blog I would like to provide an overview of how IdM 7.2 has enabled SAP to manage the massive number of Distribution Lists which are part of day-to-day business operations. While some technical aspects are covered, I will focus mostly on the business value that IdM 7.2 brings to the solution.

 

Background

 

SAP internal Distribution Lists (DLs) were previously maintained using a very old implementation of an R/3 portal (a.k.a. SAPNet). This system introduced a risk to business operations as it was too old and the hardware/software stack was already at the end of its life cycle. As a result, Global IT decided to define a project for migrating the processes and data over to other target platforms and retiring SAPNet completely.

 

As the delivery manager and head of the IdM competency center in one of the IT application teams (i.e. Social Collaboration Platform), I had the responsibility of providing a retirement path for SAPNet, managing the planning and overseeing the execution of all the work streams within the scope of the retirement project. One of the work streams was indeed Group Management, which required us to migrate all legacy DL data as well as the relevant processes to another system. SAP NetWeaver Identity Management 7.2 was confidently chosen as the product for DL management.

 

 

Pre-Migration Status

 

In large companies such as SAP, email communication is key. As such, the timely creation and availability of Distribution Lists is always critical to the success of the business. Aside from the initial creation of DLs, the ongoing changes to DL hierarchies as well as user memberships involve a lot of effort and overhead cost if the process were to depend solely on an IT support function within the organization.

 

At SAP, there are also some special types of DLs which are used for security purposes (rather than pure communication). These DLs are built automatically based on a controlling attribute (i.e. cost center) with some complex hierarchy. The DLs are then used by internal applications to enforce authorization.

In order to overcome the challenge and complexity, SAP had implemented, many years ago, an integrated solution comprised of the following components:

 

  • An application with an end-users interface for self-serve DL management(SAPNet).
  • One or more source systems to build DL memberships automatically in the back-end based on some business-centric criteria and replicate the DLs to the main repository.  An example of the criteria would be “A DL which includes all sales managers”.
  • A process to replicate all creation/updates/deletion of DLs from the main repository into corporate Active Directory.
  • A set of APIs to expose DL data to other internal systems that would need to either read from or write DLs into the repository(i.e. RFC enabled Function Modules in SAPNet).

 

All of these processes had to be re-implemented based on IdM 7.2 and all legacy data migrated to the IdM 7.2 Identity Store before the old processes could be retired on the SAPNet system. The analysis and planning to establish the correct sequence for both process and data migration was enormously complex for various reasons, such as the quality of the legacy data (i.e. hierarchies with loops etc.) and the limited documentation of the old processes (developed more than 10 years ago). In this article we will focus on the solution itself rather than on the migration activities.

 

IdM 7.2 Takes Over

 

Now the fun part begins! That is, to design an architecture which can not only satisfy the current requirements of our project but also lay a solid foundation for future growth. To achieve this, the project team decided to utilize an internal cloud platform that offers Continuous Delivery and DevOps.

This decision posed a huge challenge for the project as we had to work within the boundaries of product features as well as budget & timeline, but the effort was worth it. In the end, we were able to automate the server setup and the deployment of the IdM Identity Center, runtime and database host. What a pleasure!

 

A completely new landscape (e.g. a development environment) could be built from scratch with the most recent code/configuration in just a few hours! Although I have to point out that the server setup and deployment of the NetWeaver UI could not be automated due to time constraints.

 

Let’s review the architecture and examine the role of IdM 7.2 in the enterprise level distribution list management solution:

 

architecture.png

Some business processes rely heavily on DLs that are generated in SAP business systems based on certain criteria (as opposed to being created or updated by end-users). Those DLs need to be replicated to AD before they are usable. To achieve this, and in order to achieve a federated approach for DL provisioning to AD, IdM 7.2 sits between the business systems and AD. Two Virtual Directory Servers have been implemented for this purpose:


  • The first one is a generic one which enables any application within the corporate network to connect to IdM and create or update DLs. This VDS is fully scalable and can handle multiple clients in parallel while enforcing the necessary logic to maintain data ownership per client. Currently, SAP's HR system as well as a reporting system are connected to this VDS and push their DLs to IdM several times a day. DL memberships are determined according to some criteria before the updates are replicated to IdM.

 

  • The second VDS is very specific and implemented exclusively for another instance of IdM (called IT IdM) which is in charge of user/role provisioning to SAP Business systems. The IT IdM pushes pre-defined DLs used for authorization purposes to this VDS multiple times a day. The same VDS is also receiving user data updates from the IT IdM system.

 

  • As I mentioned at the beginning of this blog, a self-serve interface for DL management is required to avoid dependencies on an IT support function for every single change. Our self-serve solution is a Web UI application that is fully integrated with IdM. The integration between IdM and the Web UI is bi-directional:

 

    • An in-bound connection has been implemented using a VDS through which the Web UI pushes all end-user updates to IdM and from there to AD.

 

    • As for the out-bound integration, the Web UI exposes some RESTful APIs. These APIs are called by IdM provisioning to replicate any updates from other source systems to the Web UI. This way all the back-end DLs are also visible to end-users. However, they are all displayed in read-only mode as the owners of such DLs are the upstream systems (and not the end-users).

 

The IdM 7.2 NetWeaver UI is used as an administration interface. Here some custom UI tasks have been implemented to cover the necessary functionality for our 1st and 2nd level support colleagues. The custom tasks make it possible to check the replication status of DLs (both inbound and outbound) and offer enhanced DL maintenance functionality (compared to the Web UI functionality available to end-users).

 

A set of Java-based RESTful APIs has been developed and hosted on the NW host. These APIs enable other applications to query the IdM DB, search for DLs and read DL attributes. A wide range of parameters and options is available to suit the needs of the client applications.

 

The most critical and complex integration is the outbound connection from IdM to the corporate Active Directory. Although IdM 7.2 offers an out-of-the-box connector to AD, due to some specific data replication requirements we had to develop a custom LDAP connector. This connector consists of a set of Java components, IdM jobs, and custom DB tables. A queuing mechanism has been implemented to ensure that all updates are processed in order and can be re-pushed to AD in case connectivity is lost.

 

The IdM DB runs on SQL Server 2008, which was originally hosted on a cloud VM. Both the host and the DB used to be built & deployed leveraging DevOps Chef scripts. However, over time the VM reached its I/O limits due to the high volume of updates, and consequently the DB performance degraded. After reviewing different options, we came to the conclusion that a physical host would be the better long-term option and moved ahead with the change.

 

Each of the integrations has one or more corresponding jobs that run according to their schedule. Some require more frequent executions than others. Additionally, there are some stand-alone jobs that cover other areas of functionality, such as DL generation based on the cost center controlling attribute in IdM.

The summary above is of course a simplified view of the architecture but I hope it can give you an idea of how IdM 7.2 supports business processes related to DL management at SAP.

 

Now let’s look at some statistics.

 

 

The Stats

 

It is always useful and interesting to gather some statistics and better understand the usefulness of a solution. Below are some figures I have gathered recently:

 

 

Stats.png

* Note that there are many DLs with a nested hierarchy whose member count is not included in this table.

 

 

 

The chart below shows the size of the AD connector queue over a 24-hour period (Dec. 3rd, 2013):

 

chart.png

 

 

The Bottom Line

 

The implementation of SAP IdM 7.2 for Distribution List management did not come easy. There were many challenges along the way, from the early stages when the blueprint was produced all the way to the implementation, go-live and the support afterwards. The effort, however, was worth it, as critical business data and processes were migrated successfully to IdM from a very old legacy system. SAP IdM 7.2 has been running since late Q1 this year and will continue to be the backbone of DL management for years to come. The implementation has been a true example of the "SAP Runs SAP" program.

 

I hope you have found this blog useful. Please feel free to post your questions or comments and I will be sure to address them accordingly.

 

Ali Ajdari

Scripting basics: Using Initialization/Termination/Entry Scripts in IdM-passes


During the later part of last year I had to develop a set of quick & dirty reports in a proof-of-concept project where the ETL capabilities of IdM were demonstrated. I used the Initialization/Termination/Entry scripts, and it gave me the idea to write a blog about using them. I am not sure whether the basic IdM training course by SAP addresses these topics, but if it doesn't, and since not all new IdM'ers attend the training, maybe this helps someone get started or gives some ideas.

 

In IdM passes you can define the following types of custom scripts to be triggered upon execution of the pass:

  1. Initialization Script
  2. Termination Script
  3. Entry Script

 

The toGeneric pass also has three additional types of custom scripts:

  1. Open destination
  2. Next data entry
  3. Close destination


Consider the following dummy toGeneric pass as an example. It runs a dummy SQL statement in the Source tab that returns static text as a result set, which gets passed to the Destination tab. The passed entries are "processed" by the scripts in the Destination tab.


Source:

Source.jpg

The Source-tab’s SQL-statement returns two dummy rows (or records, or entries depending how you want to see them) to the Destination-tab.


Destination:

Destination.jpg

The Destination tab calls all 3 types of scripts possible in a toGeneric pass, plus a getCount script in the attribute mapping. Whatever is returned by the getCount script gets inserted into its place in the table cell and is passed to the Next Data Entry script among the other attributes defined in the mapping.

 

Execution Order
Let’s examine the job log and see the execution order plus what the output was. All the scripts in the example output their name plus what was passed as parameter.

ExecutionOrder.jpg

So the execution order is:

  1. Initialization Script
  2. Open Destination Script
  3. Entry Script
  4. Next Data Entry Script
  5. Close Destination Script
  6. Termination Script

 

Initialization Script

InitializationScript.jpg

The Initialization Script was called first, and from the output it's visible that while it received the parameters in the Par object, none of the macros were executed. All the values appear as they were typed into the Destination tab.

 

In the example we have one custom variable called "theCount", and as it is introduced outside the function definition it becomes a global variable and can be used in any other script in the pass, as long as the other script also defines the same global variable. The variable theCount is set to the initial value 0 in the Initialization Script.

 

I’ve used Initialization Script mostly in two ways:

  1. Setting the initial values for attributes in pass (or in whole job if the pass is before where the value is later used)
  2. When using delta-functionality in a way that entries no longer present in the source are automatically deleted in the IdStore. Here the Initialization Script is handy for checking whether the data source has a smaller number of rows than expected. For example, if some data transfer has failed and the source is empty, using delta to mark entries for deletion could be fatal without any checks.


As the name suggests, and as the output displays, the Initialization Script is called once.
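
As a minimal sketch, an Initialization Script along the lines of the example could look like this (the function name is illustrative, and uWarning is only there to make the call visible in the job log):

// Global variable, visible to the other scripts in the pass that also declare it
var theCount = 0;

function z_initPass(Par){
    // Reset the counter before any entries are processed
    theCount = 0;
    uWarning("Initialization Script called, counter set to " + theCount);
}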

 

Open Destination Script
OpenDestination.jpg

The Open Destination Script is called next, and based on the output it does not even get the Par hash table. Open Destination is typically used like the name suggests, for opening a connection, for example a JCo connection in a provisioning task. In a JCo-call scenario the Next Data Entry Script could do the actual call and Close Destination could close the opened connection. Based on the output, Open Destination got called once.

 

Entry Script
EntryScript.jpg

The Entry Script in the Source tab is called next. Based on the output, the hash table "Par" has its elements fully translated to data contents. The example uses the Entry Script to grow the counter variable by one and store the new value in Par to be used in the Destination tab. (BTW, for some reason having an underscore in the element name, for example "THE_COUNT", crashed the pass.)


The Entry Script is called as many times as the Source-tab definition returns rows.
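
A corresponding minimal Entry Script sketch, growing the counter and putting the value into Par for the Destination tab, assuming theCount has been declared as a global as in the Initialization Script sketch above (and avoiding underscores in the element name, as noted):

function z_entryScript(Par){
    theCount = theCount + 1;
    // Make the current count available to the Destination-tab mapping
    Par.put("THECOUNT", "" + theCount);
}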

 

Next data entry Script

NextDataEntry.jpg

The Next Data Entry Script in the Destination tab is called next, and again it is called as many times as the Source definition returns records. It receives the full Par hash table with its correct values, along with the values we just manipulated in the Entry Script.


Close destination Script

CloseDestination.jpg

The Close Destination Script was called second to last.


Termination Script

Termination.jpg

The Termination Script was the last script to be called.

 

HTML-file generation using Initialization/Termination/Entry scripts

The example job has two passes: one that loads data from a CSV file into a temp table, and another pass that reads the contents of the table and writes them to an HTML file.

 

Populating temp table

Source:

HTMLFilePass1Source.jpg

Destination:

HTMLFilePass1Destination.jpg

The destination has just the file-to-table attribute mapping. I always name the columns in files after the IdM attributes, as it simplifies the "interface" and it sort of documents itself.

 

Writing HTML

 

Writing a plain text file or a file with a CSV structure is pretty easy from IdM, as all that needs to be done for the formatting is the column/attribute mapping and defining the CSV delimiter, headings etc.

 

HTML is slightly trickier, as all it's possible to output in the Destination tab are the repeatable elements, meaning just the table cells. The start and end of the HTML document plus the start and end of the table must come from somewhere, and this is where the Termination Script is handy.

 

Source

HTMLFilePass2Source.jpg

The source SQL reads the entries from the temp table. Note that it’s possible to have a JavaScript in the SQL-statement.

MySQLFilter.jpg

The JavaScript is executed first, and whatever is returned by mySQLFilter gets embedded into the SQL. A good example of how to use JavaScript within SQL can be found in the BW interface and how IdM sends data to SAP BW via LDAP push.

 

Initialization Script

SetHtmlFileName.jpg

The Initialization Script is used to generate a somewhat unique filename from a timestamp. The name of the file is stored in a global variable so that it is accessible to the other scripts. The colons are removed from the file name with the uReplaceString function. The Initialization Script also sets the counter that is used for counting the rows to zero.
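
A minimal sketch of such an Initialization Script, with illustrative names and a hard-coded target directory:

// Globals shared with the other scripts in the pass
var htmlFileName = "";
var theCount = 0;

function z_setHtmlFileName(Par){
    // Build a reasonably unique name from the current timestamp
    var stamp = new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new java.util.Date());
    // Colons are not allowed in Windows file names, so strip them with uReplaceString
    htmlFileName = "C:\\Temp\\idm_report_" + uReplaceString(stamp, ":", "") + ".html";
    theCount = 0;
}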

 

Entry Script

Entry Script just grows the counter like in previous example.

 

Termination Script

FormatHtmlFile.jpg

As the previous example showed, the Termination Script is called at the end. So the Termination Script is called after the table cells have been written to the file, and here it reads the table cells into a string and adds the start and end of the HTML page around them.

 

The HTML-page uses a style sheet that is returned from script just to demonstrate that there can be more than one script in the script “file”.
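
As the screenshot is not included here, below is a minimal sketch of what such a Termination Script could look like, using plain Java I/O from Rhino (the same approach as the REST example later in this document) and the global htmlFileName variable from the Initialization Script sketch above; the markup is deliberately simplified and the style-sheet helper is left out.

function z_formatHtmlFile(Par){
    importClass(Packages.java.io.FileReader);
    importClass(Packages.java.io.BufferedReader);
    importClass(Packages.java.io.FileWriter);

    // Read the table rows that the Destination tab has already written to the file
    var br = new BufferedReader(new FileReader(htmlFileName));
    var rows = "";
    var line;
    while ((line = br.readLine()) != null){
        rows = rows + line + "\n";
    }
    br.close();

    // Rewrite the file with the start and end of the HTML page wrapped around the rows
    var fw = new FileWriter(htmlFileName);
    fw.write("<html><head><title>IdM report</title></head><body>\n<table>\n");
    fw.write(rows);
    fw.write("</table>\n</body></html>\n");
    fw.close();
}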

 

Destination

HTMLFilePass2Destination.jpg

The destination has simple attribute mapping that writes the HTML-table rows to the file.

getHtmlFileName.jpg

The output filename is returned by a script getHtmlFileName, which just returns the value from global variable.

 

The result

Result.jpg


Call REST Service by IdM runtime inside JavaScript


This blog post is about calling a remote REST Service, e.g. some 3rd Party Application, which is publishing its data via a REST API.

This could be done with the VDS, executing a HTTP Request against this REST Service.


It is also possible to perform this inside a JavaScript, which will be executed by the IdM runtime directly, without the need to set up a VDS inside your landscape.

Unfortunately, the Rhino JavaScript engine used inside IdM is not able to perform AJAX calls directly, so we have to do this via Java (thanks Kai Ullrich for the hint about "Scripting Java inside JavaScript").

 

Below you find some example code.

 

Cheers, Jannis

 

 

// Main function: doTheAjax

function doTheAjax(Par){

    // import all needed Java Classes
    importClass(Packages.java.net.HttpURLConnection);
    importClass(Packages.java.net.URL);
    importClass(Packages.java.io.DataOutputStream);
    importClass(Packages.java.io.InputStreamReader);
    importClass(Packages.java.io.BufferedReader);
    importClass(Packages.java.lang.StringBuffer);
    importClass(Packages.java.lang.Integer);

    // variables used for the connection, best to import them via the table in a ToGeneric Pass
    var urlString = "http://host:port/rest_api";
    var urlParameters = "attribute=value";
    var httpMethod = "POST"; //or GET
    var username = "administrator";
    var password = "abcd1234";
    var encoding = uToBase64(username + ":" + password);

    // In case of GET, the url parameters have to be added to the URL
    if (httpMethod == "GET"){
        var url = new URL(urlString + "?" + urlParameters);
        var connection = url.openConnection();
        connection.setRequestProperty("Authorization", "Basic " + encoding);
        connection.setRequestMethod(httpMethod);
    }

    // In case of POST, the url parameters have to be transfered inside the body
    if (httpMethod == "POST"){
        // open the connection
        var url = new URL(urlString);
        var connection = url.openConnection();
        connection.setRequestProperty("Authorization", "Basic " + encoding);
        connection.setRequestMethod(httpMethod);
        connection.setDoOutput(true);
        connection.setDoInput(true);
        connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        connection.setRequestProperty("charset", "utf-8");
        connection.setRequestProperty("X-Requested-With", "XMLHttpRequest");
        //connection.setRequestProperty("Content-Length", "" + Integer.toString(urlParameters.getBytes().length));
        connection.setUseCaches(false);

        var os = new DataOutputStream(connection.getOutputStream());
        os.writeBytes(urlParameters);
        os.flush();
        os.close();
    }

    // get the result and print it out
    var responseCode = connection.getResponseCode();

    var is = connection.getInputStream();
    var isr = new InputStreamReader(is);
    var br = new BufferedReader(isr);
    var response = new StringBuffer();
    var line;
    while ((line = br.readLine()) != null) {
        response.append(line);
    }
    br.close();

    uWarning("Sending " + httpMethod + " Request to URL: " + urlString);
    uWarning("Response Code: " + responseCode);
    uWarning("Response: " + response.toString());

    connection.disconnect();
}

Platform Choices for IdM


Background

 

It looks like SAP IdM is getting a bit more interest now, particularly judging by the number of new "faces" on the forum, which makes this a very exciting time for the product and for those of us that have been working on it for a while. With that in mind, I thought I would share some observations on the platform options available for IdM, as there are a few things that are not as obvious at first sight.

 

AS Java

The Platform Availability Matrix (PAM) for IdM, available at http://service.sap.com/pam states that for IdM 7.2, the following are supported platforms and are discussed in more detail in the installation guide:

 

SAP EHP1 FOR SAP NETWEAVER 7.0
SAP enhancement package 1 for SAP NetWeaver 7.0

SAP EHP1 FOR SAP NETWEAVER 7.3
SAP enhancement package 1 for SAP NetWeaver 7.3

SAP EHP1 FOR SAP NW CE 7.1
SAP enhancement package 1 for SAP NetWeaver Composition Environment 7.1

SAP EHP2 FOR SAP NETWEAVER 7.0
SAP enhancement package 2 for SAP NetWeaver 7.0

SAP EHP3 FOR SAP NETWEAVER 7.0
SAP enhancement package 3 for SAP NetWeaver 7.0

SAP NETWEAVER 7.0
SAP NetWeaver 7.0

SAP NETWEAVER 7.3
SAP NetWeaver 7.3

SAP NETWEAVER CE 7.2
SAP NetWeaver Composition Environment 7.2

 

What is not clear is that AS Java 7.0 is only in "maintenance mode" support from SAP. I've yet to find where this is written down, but I've been told it very clearly by the UI teams. What this means is that they will fix anything that breaks, but not put any new features on it. It is therefore effectively locked to the UI features from IdM 7.2 SP4. So for me, if you are starting from scratch, you should start on AS Java 7.3. It looks better and supports all the new features, which are well worth having.

 

Database

Again, from the PAM, IdM 7.2 is currently supported on SQL Server, Oracle and DB2. As with everything SAP these days, I'm sure it is only a matter of time before HANA is included, and, as IdM is already a very database centric product using stored procedures, this is a very natural fit. This will of course transform IdM, delivering sub micro-second responses, in-memory, at the speed of thought, while making the tea...

 

But back in the real world, I'm going to focus on the current offering...

 

My experience has been predominantly on a single large IdM deployment on Oracle and so first off, I'm going to ignore DB2, as I have no experience of it, for IdM or anything else in SAP, and so don't think it is fair to comment.

 

As for IdM on Oracle, it is fair to say it has not been smooth sailing. I think the development of IdM by the product team is done on SQL Server and then converted to Oracle by some means that I'm not clear on. This process is not always smooth, and we have been shipped tools and code containing SQL Server syntax that was not picked up. We also have some outstanding performance problems, and some strange "features" appear occasionally as our database table statistics are updated.

 

Based on these facts, again, if I was starting from scratch I would deploy IdM on SQL Server as the database, even though there is still a fair bit of bias, mainly historical in my opinion, about its robustness as a database platform generally.

 

Runtime and Design Time

We have both Windows and Linux environments for our runtimes and have had no problems with either, with the Windows ones being slightly easier to administer, as you can manage the start and stop directly from an MMC design-time installation on the same servers. So, again, if I was starting from scratch I would go for a Windows design time and runtime, putting both on each of the runtime servers required, assuming one is not sufficient.

 

Conclusion

So, based on the above, if I had to pick a platform to deploy IdM 7.2 on, ignoring any other factors such as existing IT department skills, organisational preference, snobbery about UNIX over windows, it would be

 

  • Design Time and Runtime - Windows Server
  • Database - MS SQL Server
  • UI - AS Java 7.3 - any O/S and database

 

I would of course be delighted to hear what others have experienced and think about platform choices, and if I've made any glaring omissions, please let me know.

On queue processing, or the lack thereof. Part #1


A common issue we see in support is messages about the processing of tasks and workflows stopping. Frequently the message to us is "the dispatcher has stopped". In many cases it's not stopped, but rather has found something else to do than what you expected. So I've decided to try to document a few things about queue processing, the dispatcher and troubleshooting processing halts, and to provide some useful queries.

 

Though this post is focused on the solution as it is from 7.2 SP7, I'll try to point out the differences from earlier versions.

 

Feedback is most welcome, and additions as well as corrections can be expected. So for now while publish is clicked, some errors expected :-)

 

Overview

 

Those already familiar with/uninterested in the internals of IdM or with a halted system looking for solutions can skip this part and go directly to

On queue processing, or the lack thereof. Part #2 (which I'm still working on, sorry)

 

The dispatcher is the key component for processing workflows in IdM. It processes task expansions, conditional and switch task evaluations, and approvals/attestations, executes link evaluation of assignments, and is also responsible for starting the runtimes that execute actions and jobs. Quite a lot to do for a single component, so let's look at how it does this.

 

To process all this, the dispatcher runs multiple threads, each with its own database connection. This also allows us to give you control over which task(s) each dispatcher can process, meaning you can let a dispatcher running on or very near the database host do database-intensive tasks such as task/approval/link processing, while a dispatcher closer to a data source processes jobs and actions dealing with the target system(s). The reason I include some of the procedure names here is that, using some of the queries I've documented previously, you might recognize what is stuck.

 

 

Tables and Views

 

mxp_provision/mxpv_provision

The mxp_provision table is the main table for all workflow processing. Any workflow that is initiated by assignment events, users through a UI, uProvision calls in scripts, etc. ends up making an initial entry in this table. The main columns are mskey, actionid and auditid, which are unique in combination. There are more columns as well that we'll get to later. It's important to know that when processing a workflow (or process), the unique identifier in the system is mskey, auditid, actionid. If the same task is executed several times on a user it will always have a different auditid. This is also why it's not possible to have the same task linked twice within an ordered task in a workflow; if you do, you get constraint violations in the database and messages like "Error expanding task" in the audit. The provision table is dynamic and there is no history.

 

mxpv_grouptasks_<approval/attestation/conditional/ordered/switch>

The dispatcher(s) use these views to get the next bulk of entries to process. By default these will list the first 1000 entries in the queue of each task type, as indicated by their names. Older versions of IdM would list everything in one view, mxpv_grouptasks, and the dispatcher would process all the different task types in a single thread. For a couple of service packs this could be controlled by setting a global constant, MX_DISPATCHER_POLICY, which would switch between using the joint view or the separate views. I can't say for sure in which release this approach was abandoned, but I believe it's to blame for the excessive number of dispatchers we see in use in productive systems. Now the dispatcher creates an independent thread per view, and running many dispatchers on a single host has little effect.

 

mxp_audit/mxpv_audit

Any workflow that is initiated also gets a corresponding entry in mxp_audit, where the overall state of processing of this entry/task/audit combo is kept. The audit information is kept forever.

 

mxp_ext_audit/mxpv_ext_audit

If you enable the somewhat mislabeled Trace, the system will create an entry per task/action in the mxp_ext_audit table. This will contain the result of conditional/switch tasks and other useful information. This is also kept forever.

extAuditEnable.png

 

mc_execution_log/mcv_executionlog_list

This is a new table/view from SP8, and it contains messages that you would usually find in the runtime log files, as well as messages from the dispatcher when processing entries. This is really fantastic and will get its own blog post.

 

A small test scenario

 

Lets look at it in action using a setup I've used for performance and function testing

 

queueWorkflow#1.png

My example workflow called Dispatcher test #1.0.0 has an ordered task with multiple tasktypes below it.

 

It starts with a simple ordered task with an action containing a To Generic pass that sets CTX variable

 

Next is a conditional task that always goes into True

 

Then a conditional task that always goes into False

 

Followed by a new ordered task that expands into

A switch task with cases for the last digit of the mskey (0..9), each contains an action

 

Then an ordered task with "Wait for Events" containing a single action executing another ordered task

 

And finishing of with an ordered task containing an action that logs that its complete

 

 

 

0 - "Dispatcher test #1.0.0" task initiated

 

Lets see what happens when this is executed. In this example I've just executed it using the "test provision" function on my "administrator" user

 

mxpv_provision

queueLevel0.png

This is the Initial state. uProvision/event/assignment/test_provision/something else has initiated the task. At this point the top level task is in the queue, ready to run.

 

mxpv_grouptasks_ordered

queueLevel0_grptskordered.png

This task is also visible to the dispatcher in the mxpv_grouptasks_ordered view, which contains the first 1000 ordered tasks ready for processing from the provisioning queue. One ordered task entry is available for processing. Ordered tasks have one operation and that is expanding the tasks/actions they contain which we see in the next step.

 

mxp_audit

queueLevel0_audit.png

The audit shows that the task has been initiated.

 

1 - "Dispatcher test #1.0.0" task expansion

 

A dispatcher tasks thread will now pick up 1/Dispatcher test #1.0.0 from the view mxpv_grouptasks_ordered and expands the task.

 


mxp_provision

queueLevel1.png

mxpv_grouptasks_ordered:

queueLevel1_grptskordered.png

Now the ordered task 2892/Update ctx emu is ready for processing indicated by state=2.

 

mxpv_audit:

queueLevel1_audit.png

State of the audit is now officially Running

 

2 - "Update CTX emu" task expansion

 

mxpv_provision

queueLevel2.png

Now the ordered task 2892/Update ctx emu is expanded and this adds our first action task to the queue.


mxpv_grouptasks_ordered

<empty>

Actions can only be processed by a runtime, so at this point the mxpv_grouptasks_ordered view is empty, as there are no more ordered tasks to process at the moment.

 

mxpv_audit

queueLevel2_audit.png

The audit shows that the last completed action is now 2892/Update ctx emu.

 

3 - Processing the action task


At this point another thread in the dispatcher, looking for actions, takes over. This runs a procedure called mc_dispatcher_check, whose only task is to let the dispatcher know if there are, and if so how many, jobs or provisioning actions available for it to run. This check (*) requires a lot of joins over lots of tables, and as a result this procedure is sometimes seen to take a few seconds when the queue reaches around 1 million rows in pre-SP7 releases.

 

In this case it will return 0 windows jobs, 0 java jobs, 0 windows provisioning actions, 1 java provisioning action.

 

From SP7 this procedure generates a cache table to avoid rerunning the check too frequently, as it would start slowing down systems when the queue got to about 1 million rows. This table, mc_taskjob_queue, will contain a list of available actions that no runtime has yet picked up. It refreshes as it nears empty.

queueLevel3_tskjbq.png

So with this result the dispatcher will now know there is 1 action ready to run, and a runtime started by dispatcher with id=1 will have an action/job to run.

 

If there were more than 1 returned, it would look at its "Max rt engines to start" value to see if it should start more than one runtime at this moment.

It also checks how many it already has started that have not ended and compares this to the "Max concurrent rt engines" setting.

And then checks against the global "Max concurrent rt engines" to see that its not exceeding this.

 

So, if all is OK, the dispatcher will now start a java runtime process to run a job.

 

4 - The runtime executes the action task


At this point the dispatcher has started the runtime by launching a java.exe with lots of parameters, such as the database connection string and classpath extensions. It's important to note that the dispatcher does not tell the java runtime process which job it wants it to start. It just starts the runtime process and lets it pick something from the queue by itself. The runtime does this using the procedure mc_getx_provision, which in pre-SP7 releases would run a somewhat complex query looking for an action to run, basically the same query the dispatcher had already run (*). If this started to take more than 5 seconds (or whatever you configured your dispatcher check interval to), the dispatcher would see that the jobs were not picked up and start more runtimes, which got stuck in the same procedure. From SP7 we do a quick lookup in the cache table mc_taskjob_queue to avoid this problem.


As the runtime engine initializes, it will log to a file called /usr/sap/IdM/Identity Center/prelog.log. This file can be useful to check as it should contain messages that occur before the runtime can connect to the database, especially if it's not able to connect to the database at all. Once the runtime has run mc_getx_provision it will download the job/action configuration into /usr/sap/IdM/Identity Center/Jobs/<folder with GUID of job/action>, where it will keep its temporary files from now on. This folder contains the last versions of the text-formatted .log and .xml log files. The full path of this folder is listed in each log in the management console as well. The text log is very useful in cases where there are so many messages that they can't all be uploaded to the IdM database.


jobLogLocation.pngjobTempFolders.png


Anyway, in most cases the runtime is able to get the configuration and start processing entries. Each entry is processed by itself, and after each entry the runtime will update the provisioning queue mskey/actionid/auditid combination using either the mxp_set_ok or mxp_set_fail procedure, depending on the success/failure of the operation.

 

5 - Test something true

 

According to the workflow the next step to process should be "Test something true" which is a conditional task and will as such be listed in the mxpv_grouptasks_conditional view.

 

mxp_provision

queueTestSomethingTrue.png

And "Test something true" is now in the queue, ready to run.

 

mxpv_grouptasks_conditional

queueTestSomethingTrueTGC.png

Also notice that the SQL statement for the conditional operation is part of this view.

 

mxpv_audit

queueTestSomethingTrueAudit.png

Our task is still a work in progress.


The dispatcher does a parameter replacement on the %% values in MXP_SQL and runs the statement, then evaluates the result. Depending on the result being 0 (false) or higher (true), it will run the mxp_set_FALSE or mxp_set_TRUE procedure for the mskey, actionid, auditid combination, and the procedures will expand the next step.

 

6 - Test something true action, and so on...

 

As a result of the previous evaluation ending in a True result, the action in the True node has been expanded into the queue. Also notice how the mxpv_provision view includes the result of the conditional statement. This also occurs with switches. This information is stored in the extended audits if enabled, which is really useful for tracing problems.

 

mxpv_provision

queueLevel6Prov.png

At this point the processing should start to be clear and this document is already too long before I've even started on the troubleshooting part :-) Now the action will trigger a runtime to start, then the Test something false process will be expanded and so on through my test scenario.

The dispatcher picks up the entry from the queue, evaluates it and runs the procedure to continue the workflow. Nothing interesting happens until the Exec external and wait for task is started, which has the Wait for event tasks results option checked.

 

queueLevel6WaitForEvent.png

This is used in various places in the provisioning framework such as in the Provision workflow where a user needs to get an account created in the target repository before any assignments are added.

 

7 - Exec external and wait for


In this case I've halted my event task so that I can see what the queue looks like.


mxpv_provision

queueLevel7waitForExternal.png

The MSG column shows that audit 1273983 is waiting for audit 1273984 before it continues. In this case I've stopped the dispatcher capable of running the action of task 45, so it's temporarily stuck here. So, after starting the dispatcher it will continue the external task and eventually finish the workflow.


8 - Suddenly all done, but wait, what happened?


To close this off and get on to the troubleshooting I just wanted to mention the extended audit table. With it I can get a complete picture of the events for my two audits:
extAudit.png

As a new feature from SP8 and on, I can even get all the messages from the runtimes as well by looking at the mcv_executionlog_list view.



Additional notes and curiosities

 

How does the runtime process entries?

 

As mentioned previously, the runtime uses mc_getx_provision to reserve an action to run. With the actionid it also gets a repository ID, then it retrieves the job definition linked to the action and prepares to process all entries in the queue for the task it has been handed, for the given repository. So it will process the queue for the task one repository at a time (by default). This is nice when connecting/disconnecting to repositories takes a long time. Not so nice during bulk loads when you have 20,000 entries queued for your least important ABAP system that somehow got priority over the important one. Anyway, the queue it will process is found using:

SELECT * FROM MXPROV_ENTRIES WHERE MXP_ACTIONID=@P0 AND MXP_REPOSITORY=@P1

(its using a prepared statement so @P0 and @P1 are input parameters)
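
As a hedged sketch, you can see the same per-repository split yourself by counting what a runtime would pick up for a given action, using the same view and columns as the prepared statement above (replace 9871 with the task id of the action you are interested in):

SELECT MXP_REPOSITORY, COUNT(*) AS queued_entries
FROM MXPROV_ENTRIES
WHERE MXP_ACTIONID = 9871
GROUP BY MXP_REPOSITORY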

 

I've once more created a small scenario to test and demonstrate this:

RTprocessingUnremarkableTask.png

For this task I've queued 9 entries targeted to 3 different repositories, and as a result mxp_provision contains 9 entries for repositories GENREP1, GENREP2 and GENREP3. (The query that lists this is in part #2):

RTprocessingUnremarkableTaskQUEUE.png

This is also what ends up in the cache table (mc_taskjob_queue), and what the query in the procedure of older versions resolves:

RTprocessingUnremarkableTaskQUEUECache.png

With a somewhat recent release you should also see that the logs for the job are uploaded per repository, and that the repositories appear to be processed in order:

RTprocessingUnremarkableLOG.png

 

So how do I optimize the runtime processing then?

 

Glad I asked. Some time ago an option called Allow parallel provisioning was added to tasks. This option allows IdM to create clones of the original job that can run in parallel. The clones are empty in the sense that they don't have a configuration of their own, just a reference to the shared master. With this enabled, the timing in the log changes completely (enabled in green, not enabled in red):

RTprocessingRemarkableOptionLOG.png

If I'm quick I can even catch it in the status screen, and the action will also reflect it:

 

RTprocessingRemarkableOptionStatus.png

RTprocessingRemarkableOptionNewData.png

 

Basically what happens is that the dispatcher has started 3 runtimes at the same time to process each repository in parallel. This also requires that the Max rt engines to start setting is bigger than 1 in my demo, since my queue is too small for it to have any effect otherwise. This is done behind the scenes by the procedures that the runtime calls, so no action is required by you when adding new repositories.

 

"This is so awesome! Why didnt you make this the default?!?!" You might ask. This works best when used selectivly. Imagine you have hundreds of ABAP repositories (some actually do). If your system could handle 50-70 runtimes in parallel you run the risk of them all being busy updating your ABAP repositories while nothing else happened.

 

 

 

This editor is getting a bit slow and perhaps unstable, so I'll continue this in part #2.

Custom error handler in Workflow actions


I really just wanted to archive this somewhere else than in my mailbox where it keeps getting lost even though I'm asked for it every 2 years or so :-)

 

Sometimes actions fail, but the reason is that everything is OK. Such as adding a member to a group when the member is already a member of the group. (Always wanted to write that!) Or you just don't care that the action failed; you want the workflow to continue anyway and not end up in the On Fail event just yet.

 

If that's the case, the Call script in case of error option is just what you need. This example is from 2010 but I believe it should still work. I don't have an LDAP server to test it on at the moment, so please let me know if it's broken. It accesses some specific objects to get the actual error, so it's quite nice to have around. You don't need to make it this advanced though. The only things you really need are:

 

- Check the error

- If you want the workflow to go on, execute uSkip(1,1);

- If you want to end the workflow and go to whatever On Error/Chain Error events exist, just exit the script or set the state explicitly using uSkip(1,2);

 

uSkip sets the exit state: the first parameter is 1 for entry, 2 for pass (use the pass level in jobs only, not provisioning actions). The second parameter is the state, where 1 is OK and 2 is FAILED.

 

customErrorHandler'.png

 

// Main function: myLdapErrorHandler
//
// Some LDAP servers reports an ERROR if a a multivalue add or del operation tries to add an existing or delete a non-existing value
// This occurs for uniquemember, memberof and a few other multivalue attributes
// Because this is reported as an error the workflow will stop...
// This script checks if reported LDAP Error is
//    "LDAP: error code 20" (value already exists)
// or
//    "LDAP: error code 16" (value doesnt exist)
// and if so, sets provision status OK so that the workflow can continue
//
// This script must be run as On Error in a To DSA pass
//
// chris@sap 20100204
function myLdapErrorHandler(Par){
   entry = uGetErrorInfo();
   if (entry != null)
   {
      UserFunc.uErrMsg(0,"Got data from errorInfo");
      attr = entry.firstAttr();
      LdapEntry = entry;

      if (entry.containsKey("err_ModException"))
      {
         var exc = entry.get("err_ModException");
         var orig = exc.getOriginalException();
         if (orig != null)
         {
            UserFunc.uErrMsg(0, "Original mod exception" + orig);
            addPos=Instr(1,orig,"LDAP: error code 20",1);
            delPos=Instr(1,orig,"LDAP: error code 16",1);
            if (addPos>0) {
               UserFunc.uErrMsg(0, "SUN error on multivalue add for existing value detected, setting provision OK");
               UserFunc.uSkip(1,1);
            }
            if (delPos > 0) {
               UserFunc.uErrMsg(0, "SUN error on multivalue delete of nonexisting value detected, setting provision OK");
               UserFunc.uSkip(1,1);
            }
         }
      }
   }
}

Br,

Chris

On queue processing, or the lack thereof. Part #2


Feedback is most welcome, and additions as well as corrections can be expected. Since I've got some sprint work that needs focus, this is published as is; I'll get back to it when I can and will try to address any comments as well.

 

Overview

 

Those not familiar with/interested in the internals of IdM queue processing can consider looking at the first part: On queue processing, or the lack thereof. Part #1

 

I may update this with the matching screens from the Admin UI over time.

 

And as always, remove "with (nolock)" from the queries when not running on SQL Server.

I haven't had time to test these on Oracle, but leaving that part in will guarantee a failure, and the case syntax is a bit different in Oracle. This is one of the first things I want to get back to.

 

Getting an overview of the queues

 

One of the most important things to do in case of a productive stand-still or issue is to get an overview of what's in the different queues.

Link evaluations, approvals and workflows have separate queues and processing of them is done by different threads in the dispatcher(s).

Jobs are simply set to state=1 and scheduletime < now in the mc_jobs table.

 

Jobs and actions

 

As mentioned above, jobs do not really have a queue. They are scheduled to run by having scheduletime set and state set to 1. The dispatcher will start runtime(s) to process jobs if the mc_dispatcher_check procedure returns 1 or more standard jobs to run. The java runtime will use the procedure mc_getx_job to reserve a job from the available jobs. Once running the state in mc_jobs changes to 2.

 

Just to clarify, a job sits outside the Identity Store(s) in a job folder, usually works with bulk processing and contains 1 or many passes. Actions are inside the workflow of an Identity Store and can only contain 1 pass and process 1 entry at a time. To slightly confuse the matter, the configuration of an action task is a job in the mc_jobs table, and the logs it creates are stored in the mc_logs table. There's a link between the task in mxp_tasks and mc_jobs on mxp_tasks.jobguid = mc_jobs.jobguid.

 

With this knowledge a query listing jobs and provisioning actions that are running can look like this:

 

select name,case when provision=1 then 'Action' else 'Job' end type, CurrentEntry, Current_Machine from mc_jobs with(nolock) where state = 2

This produces output like this:

part2_verySimpleJobsActionsRunning.png

Note that the CurrentEntry column in mc_jobs is updated every 100 entries, or every 30 seconds, by the runtimes.
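
For Oracle, a hedged sketch of the same query: drop the with(nolock) hint and, to stay on the safe side, give the computed column an alias that is clearly not a keyword (as noted above, the Oracle variants here are untested):

select name,
       case when provision = 1 then 'Action' else 'Job' end as job_type,
       CurrentEntry,
       Current_Machine
from mc_jobs
where state = 2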

 

 

The provisioning queue & semaphores

 

The provisioning queue is based on the table mxp_provision. To process parts of the queue a dispatcher must first set a semaphore that indicates that other dispatchers should keep away from processing the same type of task. This is done by setting a semaphore (basically its own ID as owner, along with a timestamp) in the mc_semaphore table. The timestamp is updated as the thread processes entries, and a semaphore whose timestamp is older than 300 seconds is considered dead. This means that if you have conditional statements taking so long to run that the dispatcher thread is not able to update the timestamp within 300 seconds, the semaphore is released and another dispatcher will start processing conditional statements as well. That means trouble, because the two threads risk running the same conditional mskey, action, audit combination!

 

The provisioning queue is divided into views according to the threads in the dispatcher:

  • mxpv_grouptasks_ordered
  • mxpv_grouptasks_unordered
  • mxpv_grouptasks_conditional
  • mxpv_grouptasks_switch
  • mxpv_grouptasks_approval
  • mxpv_grouptasks_attestation

 

These views will at most contain 1000 entries due to a Top 1000 limiter. As mentioned in part #1, actions that are to be processed by runtime engines are picked up by a procedure and have no view.
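To get a feel for how full each of these per-thread queues is, you can simply count the rows in the views; because of the Top 1000 limiter each count caps out at 1000 (SQL Server syntax):

select 'ordered' as queue, count(*) as numEntries from mxpv_grouptasks_ordered
union all select 'unordered', count(*) from mxpv_grouptasks_unordered
union all select 'conditional', count(*) from mxpv_grouptasks_conditional
union all select 'switch', count(*) from mxpv_grouptasks_switch
union all select 'approval', count(*) from mxpv_grouptasks_approval
union all select 'attestation', count(*) from mxpv_grouptasks_attestation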

 

The link evaluation queue

 

This queue contains assignments that need to be evaluated. Any mxi_link entry with mcCheckLink < now is in this queue. This includes role/privilege assignments and entry references such as manager.

 

The dispatcher processes this from the view mxpv_links. This view will contain 0 entries in normal situations and up to 1000 under load. To get the real number of links that need evaluation you can run:

 

SELECT count(mcUniqueId) FROM mxi_link WHERE (mcCheckLink < getdate()) AND (mcLinkState IN (0,1))

To see if a specific user has privileges that are queued for evaluation, or if a privilege has entries where its state is still to be evaluated:

 

-- Assignments to evaluate for 'User Tony Zarlenga'
SELECT count(mcUniqueId) FROM mxi_link WHERE (mcCheckLink < getdate()) AND (mcLinkState IN (0,1)) and
mcThisMskey in (select mcmskey from idmv_entry_simple where mcMskeyValue = 'User Tony Zarlenga')
-- User assignments to evaluate for privilege 'PRIV.WITH.APPROVAL'
SELECT count(mcUniqueId) FROM mxi_link WHERE (mcCheckLink < getdate()) AND (mcLinkState IN (0,1)) and
mcOtherMskey in (select mcmskey from idmv_entry_simple where mcMskeyValue = 'PRIV.WITH.APPROVAL')

 

 

Listing actions ready to be run by runtime engines

 

Runtime actions are listed in the provisioning queue with actiontype=0. Combined with state=2 (ready to run) and exectime < now, the entry is ready to be processed by a runtime. A very basic query listing the number of entries per action is:

 

select count(P.mskey) numEntries,P.actionid, t.taskname from mxp_provision P with(NOLOCK), mxp_tasks T with(NOLOCK)
where P.ActionType=0 and T.taskid = P.ActionID
group by p.ActionID,t.taskname

 

Unless you have a lot of actions with a delay before start configured, actions will usually have an exectime in the past. This query produces a simple result showing the entries that can be processed by runtimes:

part2_simpleActionQuery.png
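If you do make use of delays before start, a variant that only counts entries that are ready right now can add the readiness criteria from above (a sketch, assuming the exectime column is named exactly as described):

select count(P.mskey) numEntries, P.actionid, t.taskname
from mxp_provision P with(NOLOCK), mxp_tasks T with(NOLOCK)
where P.ActionType=0 and P.state=2 and P.exectime < getdate() and T.taskid = P.ActionID
group by p.ActionID, t.taskname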

Listing actions ready to be run by runtime engines and the state of the job

 

In most cases this is only part of the full picture. You really want to know if a runtime is actually working on those entries as well. Let's add mc_jobs and mc_job_state to the query to get a bit more detail:

 

select count(P.mskey) numEntries,P.actionid, t.taskname,js.name as jobState
from mxp_provision P
inner join mxp_tasks T on T.taskid = P.ActionID
left outer join mc_jobs J with(nolock) on J.JobGuid = T.JobGuid
left outer join mc_job_state JS with(nolock) on j.State = JS.State
where P.ActionType=0 and P.state=2
group by p.ActionID,t.taskname,js.name

The current reason for my system not processing anything is getting clearer:

part2_simpleActionQueryExt.png

No actions are running, so something is blocking or stopping the runtimes from starting, and I know to look at the dispatcher. Since I've manually stopped it, it's no big surprise and troubleshooting is simple.

 

Just a few of the actions/jobs are running

 

If you think that not enough runtimes are being started and see situations like this:

part2_simpleActionQueryExtGoingSlow.png

You should look at item 5 in the checklist below and also have a look at the properties and policy of the dispatcher.

 

Dispatcher properties and policies

 

part2_dispatcherProperties.png

 

Max rt engines to start determines how many runtimes a dispatcher starts when it finds X actions ready to run in the queue. In this case, even if 100 are ready to run it will only start 1 in this check interval (see the picture to the right).

 

 

Max concurrent rt engines controls how many runtimes a dispatcher will have active at the same time. 1 is always reserved for the Windows Runtime though. So my system is now limited to a single active java runtime at any time.

 

Max loops for rt engine is also a very useful setting. Starting a java runtime process and loading all the classes can often take a second or three, and in a low-load scenario this can be the slowest operation in the system. This setting tells the runtime that once it's done with an action/job it should enter a small loop with a 1 second delay to check for additional actions that are available. This also increases performance as it is independent of the check interval (see two pictures below).

part2_dispatcherGlobalMaxRt.png

Also notice the global setting for max concurrent rt engines. If you have 3 dispatchers that can run 50 simultaneous runtimes each, you can still limit the total active runtime count to 100, for instance.

part2_dispatcherPolicyCheckInterval.png

The check interval controls how frequently the dispatcher connects to the database to check for available tasks, actions and jobs.

 

A general recommendation is to increase this in systems with multiple dispatchers so that the average interval across them stays around 5 seconds: when running 2 dispatchers, set the check interval to 10 on both; with 4 dispatchers set it to 20, and so on.

 

By increasing the Max concurrent rt engines setting to 10 and max rt engines to start to 3, the situation quickly changes to the point where it's difficult to create a screenshot:

part2_simpleActionQueryExtGoingFaster.png

 

Troubleshooting actions/jobs not starting

 

A quick checklist for troubleshooting:

 

  1. Check that the dispatcher process (mxservice.exe on windows) is running
  2. Check how many java processes you have in task manager (or using ps or similar on Unix)
    • If there are no or just a few java processes then the runtimes are most likely not started by the dispatcher
      • Check prelog.log for startup issues
    • If you have lots of java processes but no actions running, then the runtime is probably having problems connecting to the db or reserving a job
      • Check prelog.log
  3. Check that the dispatchers are allowed to run the jobs that are queued
  4. A job listed with state ERROR will not run and has to be forced to restart. Check its logs for errors though; jobs end up in the error state for a reason (most of the time). A query for spotting these is sketched after this list.
  5. Check the database activity monitor, reports, or the queries from IDM SQL Basics #2: Locating problem queries to see if:
    • If the procedure mc_dispatcher_check is running for a long time the dispatcher is unable to perform the check on the queue to see how many actions are ready for processing and java runtimes will not be started.
    • If the procedure mxp_getx_provision is running for a long time the result is many java processes in the system but they are unable to allocate jobs
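For item 4, a sketch for listing jobs that are in an error state, assuming the state names in mc_job_state contain the word 'Error' (I haven't double-checked the exact naming):

select J.name, JS.name as jobState
from mc_jobs J with(nolock)
inner join mc_job_state JS with(nolock) on J.State = JS.State
where JS.name like '%Error%'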

 

Listing the number of items and their state in the queue

 

I would usually start off with the following query, which lists the number of entries per task and per state, including the state of the linked job for action tasks:

 

select count(P.mskey) numEntries, t.taskid, t.taskname, A.Name ActionType, s.name StateName, ISNULL(JS.Name, 'not an action task') JobState
from mxp_provision P with(nolock)
inner join mxp_Tasks T with(nolock) on T.taskid = P.actionid
inner join mxp_state S with(nolock) on S.StatID = P.state
inner join MXP_ActionType A with(nolock) on A.ActType = P.ActionType
left outer join mc_jobs J with(nolock) on J.JobGuid = T.JobGuid
left outer join mc_job_state JS with(nolock) on J.State = JS.State
group by t.taskid, T.taskname, A.name, S.name, JS.Name
order by A.name, S.name

I've started my dispatcher test task described in Part #1 for 1000 entries. The query above gives me a result like this during the processing:

 

troubleShoot_queueState1.png

(click to enlarge)

A quick explanation of some of the type/state combinations and what would process them

 

Action Task/Ready To Run: Action that is ready to be processed by a runtime

+ JobStatus: The state of the job linked to the Action Task. If it's Idle it means a runtime has not picked this up yet.

 

Conditional, Switch and (un)Ordered Tasks are processed by dispatchers that have a policy that allows Handle Tasks.

Ready to run for a conditional or switch task means it's ready for evaluation.

Ready to run for an Ordered/Unordered task means the workflow can be expanded into the queue.

Expanded OK means the workflow at this level is expanded.

Waiting generally means that it's waiting for a sub-process or child event to finish.
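When chasing a single stuck user through these states it can help to filter the queue down to one entry. A sketch built from the same tables as above and the idmv_entry_simple view used earlier:

-- Queue entries for 'User Tony Zarlenga' with task, action type and state
select P.mskey, T.taskname, A.Name as ActionType, S.name as StateName
from mxp_provision P with(nolock)
inner join mxp_Tasks T with(nolock) on T.taskid = P.actionid
inner join mxp_state S with(nolock) on S.StatID = P.state
inner join MXP_ActionType A with(nolock) on A.ActType = P.ActionType
where P.mskey in (select mcmskey from idmv_entry_simple where mcMskeyValue = 'User Tony Zarlenga')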

 

The final view of the provisioning queue, with current entry count for actions

 

Since the mc_jobs table contains a column named CurrentEntry we can also see how many entries running actions have processed using:

 

select count(P.mskey) numEntries, t.taskid, t.taskname, A.Name ActionType, s.name StateName,
  case when js.name = 'Running' then 'Running, processed:' + cast(ISNULL(J.CurrentEntry, 0) as varchar) else js.name end state
from mxp_provision P with(nolock)
inner join mxp_Tasks T with(nolock) on T.taskid = P.actionid
inner join mxp_state S with(nolock) on S.StatID = P.state
inner join MXP_ActionType A with(nolock) on A.ActType = P.ActionType
left outer join mc_jobs J with(nolock) on J.JobGuid = T.JobGuid
left outer join mc_job_state JS with(nolock) on J.State = JS.State
group by t.taskid, T.taskname, A.name, S.name,
  case when js.name = 'Running' then 'Running, processed:' + cast(ISNULL(J.CurrentEntry, 0) as varchar) else js.name end
order by A.name, S.name

The result is quite useful as it's now possible to see how many entries the actions that are running have processed so far (click to see):

part2_notSimpleActionQueryAlmostUltimate.png

 

This will have to do for revision one. If I get time to add more this week I will, but there are patches and SPs to work on as well.
