Channel: SAP Identity Management

SAP NetWeaver IdM REST API UI - calling POST method/example


 

 

            I had a problem executing a POST method after a new security requirement was added (the Virus Scan Interface had been enabled) within the IdM REST Interface Version 2 to prevent XSRF attacks. So I had to execute a non-modifying request (GET, HEAD, OPTIONS) first, with the X-CSRF-Token header field set to the value Fetch. Once I had the token value from that first call's X-CSRF-Token response header, I was able to execute a modifying request (POST, ...). Here is an example of how I do that:

 

var xsrfTokenValue = "";

var myData = new Object();

// Non-modifying request: send "Fetch" in the X-CSRF-Token header and
// read the token back from the response header.
$.ajax({
                type: "GET",
                url: "http://host:port/idmrest/v72alpha/entries/{MSKEY}/tasks/{TASKID}",
                dataType: "json",
                async: false,
                headers: {
                               "X-CSRF-Token": "Fetch",
                               "X-Requested-With": "XMLHttpRequest",
                               "Content-type": "application/x-www-form-urlencoded"
                },
                success: function(res, status, xhr){
                     xsrfTokenValue = xhr.getResponseHeader("X-CSRF-Token");
                }
});

// Modifying request: send the token obtained above.
$.ajax({
                type: "POST",
                url: "http://host:port/idmrest/v72alpha/entries/{MSKEY}/tasks/{TASKID}",
                dataType: "json",
                headers: {
                               "X-CSRF-Token": xsrfTokenValue,
                               "X-Requested-With": "XMLHttpRequest",
                               "Content-type": "application/x-www-form-urlencoded"
                },
                data: myData,
                async: false,
                success: function(data){
                }
});

 

 

Note:

  • The value for the X-CSRF-Token header (obtained from the GET request) is stored in the xsrfTokenValue variable.
  • The headers object contains all headers required by IdM.
  • myData (in the POST request) is the object you can generate dynamically with the data to send back to IdM.
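The two calls can also be wrapped in a single reusable function. The following is a sketch with the transport injected so the flow can be exercised without a browser or server; postWithCsrf and stubAjax are illustrative names, not part of jQuery or the IdM REST API.

```javascript
// Sketch of the fetch-then-post CSRF flow as one reusable function.
function postWithCsrf(ajax, url, payload) {
    var token = null;

    // Step 1: non-modifying GET with X-CSRF-Token set to "Fetch";
    // synchronous here, mirroring async: false in the calls above.
    ajax({
        type: "GET",
        url: url,
        async: false,
        headers: { "X-CSRF-Token": "Fetch" },
        success: function (res, status, xhr) {
            token = xhr.getResponseHeader("X-CSRF-Token");
        }
    });

    // Step 2: modifying POST carrying the token obtained in step 1.
    return ajax({
        type: "POST",
        url: url,
        async: false,
        headers: { "X-CSRF-Token": token },
        data: payload
    });
}

// Minimal stand-in for $.ajax, for illustration only: the GET hands back
// a fixed token, the POST returns the token it would have sent.
function stubAjax(opts) {
    if (opts.type === "GET") {
        opts.success(null, "success", {
            getResponseHeader: function (name) { return "abc123"; }
        });
        return null;
    }
    return opts.headers["X-CSRF-Token"];
}

console.log(postWithCsrf(stubAjax, "/idmrest/v72alpha/entries/1/tasks/2", {}));
```

In production you would pass $.ajax as the first argument; the stub only shows that the token read in step 1 is the one sent in step 2.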

Understanding SAP Provisioning Framework (group assignment)


Background

A consultant asked me about the inner workings of the group assignment functionality in the SAP Provisioning Framework 7.2, and why we use the MX_GROUP + MX_PRIVILEGE coupling, and so on.

 

So I decided it might be worth sharing with a bigger audience and putting it in a blog; after all, it is non-trivial stuff and perhaps one of the more complex parts of the SAP Provisioning Framework in 7.2.

 

Anyway, it has been quite some time since I took on the task of starting the SAP Provisioning Framework 7.2, and it has progressed quite a bit since then, but the main concepts still remain pretty much the same.

 

Many of the concept changes between the 7.1 framework and 7.2 actually stem from the early Notes framework.

 

Both driving events by privilege assignments and the handling of modify events and groups were first tested out in the Notes framework. Other concepts, such as validation tasks, originate from the GRC framework, where the need for checking assignments ahead of provisioning was introduced.

 

In the 7.1 SAP Provisioning Framework, as you might know, the add member task was used for this and for approvals, while provisioning was done by provision tasks. This meant dealing with rollback in case of an erroneous assignment, and there would also be a time slot where you had the privilege before it was assigned in the backend.

 

There were already new features in the product for dealing with these issues, so the 7.1 framework was basically growing old and not closely enough integrated with development to adopt new upcoming features early. So a new 7.2 SAP Provisioning Framework was called for.

 

We considered the following benefits the motivating factors for the 7.2 SAP Provisioning Framework:

 

 

Main benefits

  • Proper privilege handling (you don't get privileges before you are entitled to them)

                      7.1 : rollback on privilege events

                 vs 7.2 : pending privileges

  • Universal framework that handles the complicated wait-for logic for you
  • Hook-ins, which give framework implementers a foundation to build on.
  • Better visualization of what goes on.

                7.1 :  Used entry type events to trigger firing, and static scripts that hard-set attributes

                7.2 :  Uses a laid-out task structure, avoiding the hidden events in scripts.

  • Maintainability: fewer dependencies between frameworks, and underlying 3rd-party frameworks controlled by plugin points
  • Less hardwired. Provisioning triggering relates to privileges, not entry types and attributes.

        This makes it friendly to co-exist with other independent provisioning solutions that do not require entry type or attribute triggering.

 

Of course, there have been many significant features down the road since then, such as group-of-privileges, which was particularly important for avoiding the assignment provisioning bottleneck of 7.1. Today's context concept, attestation and advanced approvals, just to mention a few.


Group assignments

But today I wanted to focus on group assignments.

In AS Java, and also in the Notes connector (now part of the SAP Provisioning Framework), we use the MX_GROUP object, which is coupled with a group privilege.

 

Basically, the nature of MX_GROUP is that it does not have modify events; it is considered a container object.

 

If you are somewhat familiar with the SAP Provisioning Framework 7.2, you know that there are account privileges (PRIV:<REP>:ONLY) for each repository, and perhaps you are also aware that there is an internal privilege PRIV:SYSTEM:<REP> that is assigned as a result of account assignment.

The account privilege controls provisioning and deprovisioning to the repository, while the system privilege controls modify events.

The event tasks are by default inherited from the repository, so all privileges except system privileges disable the modify event by explicitly setting MX_MODIFY_TASK to -1.

Likewise, PRIV:SYSTEM:<REP> disables all event tasks except the modify task (which is inherited, hence not specified on the privilege).

 

The same applies to MX_GROUPs: you assign an account privilege to a group, it gets created in the repository, and it obtains the system privilege.

In addition, however, it is assigned a group privilege (PRIV:<REP>:GROUP:<IDENTIFIER>) that controls the assignment of membership.

 

So given that you have an MX_GROUP with the account and system privileges assigned, there are two ways of assigning members to the group:

1) You assign the user as a member of the group, or 2) you assign the group privilege to the user.

 

 

So let’s take it step by step.

 

1. You assign a member to a group

This causes the MXREF_MX_GROUP attribute to be updated on the user (with the mskey of the group).

Since the user has the PRIV:SYSTEM:<REP> privilege, and this privilege has the modify attribute MXREF_MX_GROUP enabled, this triggers a modify event in the provisioning framework.

 

modify_attribute.png

 

2. The modify task fires.

On newer versions of SAP Provisioning Framework 7.2, the "User context variables" option is checked on the task.

 

  context_variable_on_tasks.png

This enables the generation of MX_EVENT* context variables for the session, a prerequisite in the newer framework for detecting modification events.

(Basically, you can halt the execution by deselecting a job on a task there, pick up the audit id from the provisioning queue, and see these audit variables.)

While we previously used SAP_CHANGENUMBER to track these changes, this is a much safer and faster approach.

 

3. Update of the group privilege.

 

The task "process modify type" sets context variables for the different operations, and the conditionals evaluate them.

 

In this case, since MXREF_MX_GROUP was changed, the operation flag will be set and the "Update group privilege" task will run.

 

The "Update group privilege" task will, for each member of the group, assign the group privilege.

 

If you have a bunch of users in the group, they will already have this privilege, so nothing will happen; for the newly added group members, however, the group privilege assignment will fire off a provision event.

 

modfy_update_group_privilege.png

 

4. Provisioning by group privilege.

If it had been a nested group assignment (AS Java), you would now go into the MX_GROUP switch, but since this is a person and not an account privilege assignment, you follow on to the plugin execution of "4. Exec plugin – Assignment User Membership", firing the plugin task of the repository.

Basically, how that works is that the task "inherits" the repository from the (group) privilege, and the execute plugin task obtains the plugin task through the repository hook variable as a runtime resolution and executes a basic provision on the hook.

The execute plugin task and the hook task use the wait-for/after concept to halt further execution in the task tree until the plugin task completes.

 

execute_assignment.png

After the respective backend operation has been executed, the assignment type is checked.

Since we are dealing with a group, it has to ensure MX_GROUP and MX_PRIVILEGE are in sync on the user.

So it will update the MX_GROUP with the members, meaning MXREF_MX_GROUP is updated on the user.

Since this is a multi-value reference attribute, this re-enters the modify task from step 3.

However, the user will already have the group privilege now, so it will not re-trigger.

 

Alternatively, if you assign a group privilege to a user, you basically start the process at step 4 directly.

 

In SAP Provisioning Framework 7.2 v2, the process is the same, but since v2 is embedded in Java (with the advantage of improved speed), the detection of MXREF_MX_GROUP in modify happens inside the Java code.

 

version2_assignment.png

 

The assignment evaluation happens in the Split conditional, and you are redirected to MX_PERSON_ASSIGNMENT.

The conditional "Person after Assignment provisioning" deals with the assignment type evaluation, etc.

 

version2_assignment2.png

SAP Identity Management HTML Reporting: How to create HTML Reports.


HTML Reporting

This is an article focused on the reporting functionality of Identity Management with the main focus on HTML Reporting.  The article is applicable to both Identity Management 7.2 and 7.1 versions.

Identity Management needs to satisfy certain reporting requirements, such as :
  • What are the attributes of a given user?
  • What are the business roles assigned to a given user?
  • What systems does a given user have access to?
  • Which business roles are available in the system?
  • How many users/business roles, etc. are available in
    the system?
In this article I will show you how to use Identity Management's functionality in order to create pretty, complete and useful HTML reports. With Identity Management you can create reports using the information which is available in the Identity Center. The core of HTML reporting is actually very simple – SQL queries, which get values for a given entry type from the database. Wrapping those queries with some HTML and CSS code makes for a fully customizable appearance of the generated reports, which gives flexibility and a better user experience to the end users. Let us start this magical tour into the wonderful world of HTML reporting with the entry type itself:

MX_REPORT

The MX_REPORT entry type is the entry type for report requests and exists in Identity Management as of 7.1 SP2. The report is executed as an action task on the MX_REPORT entry type. As the report is a task, the task status indicates the progress of the report, i.e. pending, ok or error. By default, the MX_REPORT is not listed as a searchable entry type.  Note that the report task will always create an entry of type MX_REPORT, regardless of the provided entry type in the task definition.

Let’s take a look at the most important attributes of the MX_REPORT entry type:
  • MSKEYVALUE
  • DISPLAYNAME – display name of the report.
  • MX_OWNER
  • MX_REPORT_ENTRY
  • MX_REPORT_DATE – date on which the report was requested.
  • MX_REPORT_FORMAT – format of the report (this could be PDF, HTML, DOC, etc.)
  • MX_REPORT_RESULT – this attribute holds the full report result. It is saved as a binary in the database.
  • MX_REPORT_RESULT_REF – this attribute holds a reference to the report result, in case it is stored on a separate file server.
Reports are shown in the View Reports tab of the Identity Management User Interface.  In this tab, we have a table with 5 columns:
  • Entry – This corresponds to the value of the MX_REPORT_ENTRY attribute.
  • Status – Status of the report task (pending, OK or Error). This status is calculated based on the status of the task execution, taken from the MCMV_AUDIT view.
  • Report Date – This corresponds to the value of the MX_REPORT_DATE attribute.
  • Report Name – The name of the report. If DISPLAYNAME has a value, this value is stored, otherwise the value of MSKEYVALUE is stored here.
  • Report Result – This corresponds either to the MX_REPORT_RESULT or to the MX_REPORT_RESULT_REF attribute.
   
In order for a report to be shown in the View Reports tab, it must have a value for either MX_REPORT_RESULT  or MX_REPORT_RESULT_REF.  Otherwise, the report will be stored in the database, but it won’t be visible in the User Interface, as it has no result file attached to it.
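The visibility rule can be written down as a small predicate. This is purely illustrative; isReportVisible is not an IdM API, and the object fields simply stand in for the entry's attribute values:

```javascript
// A report shows up in "View Reports" only if it carries a result,
// either inline (MX_REPORT_RESULT) or as a reference (MX_REPORT_RESULT_REF).
function isReportVisible(report) {
    return Boolean(report.MX_REPORT_RESULT || report.MX_REPORT_RESULT_REF);
}

console.log(isReportVisible({ MSKEYVALUE: "Report1", MX_REPORT_RESULT: "{HEX}..." })); // true
console.log(isReportVisible({ MSKEYVALUE: "Report2" })); // false: stored, but not shown
```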
We've seen the entry type for reporting; now let's get into some real action. Let's create a report task which returns all the assigned privileges and roles for a given user. To make things more beautiful, we will wrap it with some CSS and HTML to create a good-looking report. Let's begin…

HTML Reporting Task
We start by creating the task – let's name it "Create Report". We make it a UI task, and on the Attributes tab we mark "Report task". This will automatically display all the attributes for the MX_REPORT entry type and create a new MX_REPORT entry, regardless of what we have selected as the entry type of the task itself. We select the task entry type to be MX_PERSON (which means that it will be executed on a person, but will create a report) and select some attributes to be displayed (check the screenshot below). The last step is to add access control to the task – let's say we want only the administrator user to be able to create reports for entries. So we give access type "logged-in user", with value "Administrator" and the "on behalf of everybody" relation. Here is how the task definition should look:

1.jpg2.jpg3.jpg
So far we have only a UI task that can be executed by an Administrator and creates a new entry of type MX_REPORT. As there is no report result yet, we don't have an actual report. In order to create the report, we attach a new empty job with one To Identity Store pass. In the pass destination tab, we add MX_REPORT as an entry type, because the pass will make modifications to the newly created MX_REPORT entry. The idea here is that after we create the report in the UI, we receive the value of the MX_REPORT_ENTRY attribute, which is actually the mskey of the user we are running the Create Report task on, and use this value as the MSKEY in our queries, which return the assigned privileges and roles. We will need MSKEYVALUE, to point to the created report; MX_REPORT_FORMAT, which will have the value "HTML", because if the report is in another format it will not open in the browser as needed; and MX_REPORT_RESULT, which will get its value from a custom script. In the script, we will use the magic of HTML5 + SQL queries to create the report itself. Let's create the script.

HTML Reporting Script

We create a new local script for the action task. Let's name it HTMLReport. The script will take as input parameter the value of MX_REPORT_ENTRY (the mskey of the user we are reporting on) and will return a binary representation of the report, which will be stored in the Identity Center. I will explain the process of creating the script:
    1. We create a header and a footer, which are just static CSS and HTML code at the beginning and the end of the HTML file. We do this to delimit the static parts of the HTML code, so that they won't interfere with our main logic. The header contains the opening HTML tags and the CSS used, and the footer just contains the closing HTML tags. For the sake of simplicity, I won't add the real CSS code I used in this example; if you want to use the same CSS, you can find a reference to the file at the end of the article. Nevertheless, your header should look like this:
                <html>
                <head>
                <style type="text/css">
                <….........CSS here……………..>
                </style>
                </head>
                <body>
              And your footer should be like this:
                </table>
                </div>
                </body>
                </html>

    2. Now, let's create the script itself. So far we have the HTML opening tags and the CSS, and the HTML closing tags along with the table closing tag. Now let's populate the table itself. We will create 2 rows and 3 columns. The first row will contain the table headings: User Name, Assigned Privileges and Assigned Roles. The second row will contain the username, privileges and roles, all extracted from the Identity Center.
       Privileges are extracted with the following query:

      select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_PRIVILEGE' and mcthismskey='"+Par+"'

       For roles, the query is:

      select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_ROLE' and mcthismskey='"+Par+"'

      Since Par (which is the value of MX_REPORT_ENTRY) contains the mskey of the entry, we have to extract the mskeyvalue of that user to be shown under User Name. This is done via the following query:

      select mcmskeyvalue from idmv_entry_simple where mcmskey="+Par+"
      Getting the results from those queries as values is done via the uSelect() function.
      We store the results of those queries in variables, add them to the table elements, and store this in a variable (oHTML), which represents the table body.
      The last thing left to do is to return the binary representation (hex code). This is done via the uToHex() function. We also need to add "{HEX}" as a prefix.
      This is how our example script looks in the end:
      // Main function: HTMLReport
      function HTMLReport(Par){
          var oHeader = uFromFile("C:\\Reporting\\Template\\header.html","-1","false");
          var oFooter = uFromFile("C:\\Reporting\\Template\\footer.html","-1","false");
          // Assigned privileges, delimited with "!!"
          var AssignedPrivileges = uSelect("select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_PRIVILEGE' and mcthismskey='"+Par+"'");
          var oList = "";
          var oArray = AssignedPrivileges.split("!!");
          for(var i = 0; i < oArray.length; i++){
              oList = oList + oArray[i] + '<br>';
          }
          // Assigned roles, delimited with "!!"
          var AssignedRoles = uSelect("select mcothermskeyvalue from idmv_link_ext2 where mcAttrName='MXREF_MX_ROLE' and mcthismskey='"+Par+"'");
          var oList2 = "";
          var oArray2 = AssignedRoles.split("!!");
          for(var i = 0; i < oArray2.length; i++){
              oList2 = oList2 + oArray2[i] + '<br>';
          }
          var userName = uSelect("select mcmskeyvalue from idmv_entry_simple where mcmskey="+Par);
          var oHTML = '<div class="HTMLReport"><table><tr><td>UserName:</td><td>Assigned Privileges</td><td>Assigned Roles</td></tr>';
          oHTML += '<tr><td>'+userName+'</td><td>'+oList+'</td><td>'+oList2+'</td></tr>';
          var oHex = "{HEX}" + uToHex(oHeader + oHTML + oFooter);
          return oHex;
      }

      We save and go back to the To Identity Store pass definition.
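Outside the IdM runtime, the script's string handling can be tried in plain JavaScript: uSelect returns multi-value results delimited by "!!", and uToHex produces the hex string that the "{HEX}" prefix marks as binary. The functions formatList and toHexApprox below are illustrative stand-ins, and the exact encoding used by uToHex may differ:

```javascript
// Approximation of the script's multi-value handling: uSelect returns
// values joined with "!!", which the script splits and renders as <br> lines.
function formatList(selectResult) {
    var parts = selectResult.split("!!");
    var out = "";
    for (var i = 0; i < parts.length; i++) {
        out += parts[i] + "<br>";
    }
    return out;
}

// Rough stand-in for uToHex plus the "{HEX}" marker; assumes a simple
// byte-per-character encoding, which may not match the IdM runtime exactly.
function toHexApprox(s) {
    var hex = "";
    for (var i = 0; i < s.length; i++) {
        hex += s.charCodeAt(i).toString(16).padStart(2, "0");
    }
    return "{HEX}" + hex;
}

console.log(formatList("PRIV:REP:ONLY!!PRIV:SYSTEM:REP"));
console.log(toHexApprox("AB")); // "AB" encodes to the hex bytes 41 42
```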

    3. As a final step, we add the value of MX_REPORT_RESULT, to be calculated via the HTMLReport script with MX_REPORT_ENTRY as input parameter. The pass definition should look like this:


         

4.jpg


      4. Save the task and log on to the User Interface with the “Administrator” user.


 

Starting the task via the User Interface

 

Let's execute the task on an entry. We go to the Manage tab and search for a person. We select the person, go to "Choose task" and select the "Create Report" task.

5.jpg

The “Create Report” task is opened and we fill the attributes needed, then click “Save”.
6.jpg

This will save a new entry of type MX_REPORT, with MSKEYVALUE and display name "Superman Report", MX_REPORT_DATE 25.04.2014, and values for DESCRIPTION and MX_REPORT_ERROR. Saving this will execute the action task and the To Identity Store pass, which will set HTML as the value of the MX_REPORT_FORMAT attribute and calculate the value of MX_REPORT_RESULT using the script we created. To see our result, we go to the View Reports tab:
Untitled.jpg

 

We can see our newly generated report: it is for the entry Clark Kent, the report task was executed successfully (Status "OK"), and we see the report name and the report result. If we click on the result, we will see our report in the browser:

8.jpg

 

Pretty, isn’t it?  It contains the UserName of the user and all the assigned privileges and business roles.

 

 

You can find the header and footer files, which I used with the above-mentioned script to create this example report, attached to the article. If you like them, they are available for free usage.

 

 

 

Yours truly,

Emanuil

AS Java Initial Load with HTTPs


In order to encrypt the communication between IDM and AS Java during the Initial Load (or any other job), you may want to use HTTPS instead of HTTP for the Java server. However, if you choose the HTTPS protocol, you may get an error in the Initial Load job. The error message looks like this:

 

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target


screen1.png

This is because the Java server's certificate is not trusted by your IDM Java program. All you need to do is add the server's (or root) certificate to the JRE's default trust store.

 

The JRE's trust store is located under jre/lib/security. The file name is cacerts without extension.

 

Try the command (the default password for the JRE's cacerts keystore is changeit):

keytool -importcert -trustcacerts -file RootCA.crt -keystore cacerts

 

screen2.png

Re-run the job. You will find the error is gone.

 

There are other ways to solve the problem, but I guess this is the easiest. The solution is also suitable for communicating with any other HTTPS server or LDAPS server.

SAP IDM: Provisioning is stuck (yet again) - My checklist


So what's this blog all about?

A (really) important part of IDM is the provisioning to other systems. And it's a part that loves to keep us on our toes, as the number of threads and blogs about this topic shows. In my time working with IDM, the provisioning has stopped working quite a few times due to different reasons. And every step of the way I learned a new reason and a new solution to get it going again. To keep up with all of that, and to help me solve it faster when it decides to get stuck yet again, I started to write down a checklist.

 

Some time ago I posted the better part of my little checklist in a thread, and through some encouragement by Matt I decided to create a blog post out of it to share the whole thing with a wider audience and explain the steps a bit more. This is my first technical blog here on SCN, so I'm a little nervous and excited at the same time. ^^


Just to be clear: Not all of the points of the checklist might work for you, since we're on a Windows server with the IDM management console and use an Oracle database.

 

 

 

 

My tools

  • Access to the Management Console (MMC) of the IDM
  • Access to the Monitoring-tab via http://<portalurl:port>/idm/admin
  • SQL developer with database connection to IDM database
  • Permission to access the Windows services of the IDM-server and to start/stop the services
  • Permission to reboot the IDM-server
  • Really good connections to the database administrators

 

 

 

How do I know it's that time again?

There are three signs I check to see if the provisioning is really stuck again:

  1. I look at the "Job log" in the MMC to see if the last entry for the provisioning dispatchers is from more than 15 - 20 minutes ago (even though it was triggered in the last few minutes).
  2. The provisioning queue on http://<portalurl:port>/idm/admin is only growing.
  3. The dispatchers that are assigned to do the provisioning are shown as running in the MMC under "Dispatchers > Status", and the timestamp for "Last check" is updated when I click on refresh.

 

If all those steps come back with a "yes", I'll get...

 


The Checklist

  1. Check the "Status"-overview in the MMC to see if a job is showing the state "Error".
  2. Restart the provisioning dispatchers in the "Services"-menu of the server.
  3. Check for waiting tasks via the SQL developer.
  4. Check the "Windows RT conn. string" on the Database-tab of the Identity Center configuration in the MMC.
  5. Reboot the server the MMC and dispatchers are installed on.
  6. Restart the IDM database.


That's the checklist in short, if you just need a little reminder or an idea for what to look at next. I'll explain the points a bit more in detail now.

 

1. Check the "Status"-overview in the MMC to see if a job is showing the state "Error"

 

In the MMC you'll find the "Status"-overview as the first entry under "Management".

ScreenShot089.jpg


It shows all the jobs for that Identity Center connection (in this case it's named IMP). To check for jobs that have the current state "Error", just click on the column header "State" and the list will be sorted by it. If you have checked the box "Show only enabled jobs" at the bottom of the page, the jobs with error state should be shown in red font at the top or bottom of that list.

If you find a job that is associated with provisioning and it's on "Error", right-click on it and start it with "Run now".

 

 

2. Restart the provisioning dispatchers in the "Services"-menu of the server.

 

Go to "Start > Administrative Tools > Services" on the IDM server and look for the dispatchers that are assigned to provisioning in that list. They should be shown as started. Right-click on them and choose "Restart".

This is what gets our provisioning going most of the time.

 

 

3. Check for waiting tasks via the SQL developer.

 

Open the SQL developer and work your way through the following SQL statements:

select * from mxp_provision where msg like 'Wait for%'

This checks for tasks that are waiting for the successful state of other tasks. The MSG column gives you an audit id for the next SQL statement: it's the ID of the task that is blocking the execution of the first task.

 

select * from mxp_audit where auditid=<auditid>


The MSG column now shows information about the state (e.g. failed) of the blocking task, the reason, and which user and which assignment it concerns. With this information you can decide to leave it alone to handle itself (because it's got nothing to do with the blockage), or you can use the next SQL statement.

 

update mxp_audit set provstatus=1100 where auditid = <auditid>


This last statement sets the blocking task to the state "OK" (= 1100), and therefore the waiting task (the one you found with the first statement) can finally be executed.

 

 

4. Check the "Windows RT conn. string" on the Database-tab of the Identity Center configuration in the MMC.


When you click on the IC configuration (the "IMP" in the first screenshot), the "Database"-tab will be displayed. At the end of it you'll find the string under "Runtime engine".

ScreenShot090.jpg

Open it and test the connection on the "Connection"-tab. If it comes back as failed, correct the name of the data source and/or the logon information. Then test it again to see if it's successful now.

 

 

5. Reboot the server the MMC and dispatchers are installed on.

Well, that's pretty self-explanatory. ^^

If you don't have permission to do this yourself, have the contact data of the administrator(s) at hand, who can reboot the server for you (just like with the restart of the services in #2).

 

 

6. Restart the IDM database.

This was only necessary once (until now), but to complete my checklist I'll include it here, too. Since I don't have direct access to our Oracle database, I let our database administrators handle this one for me.

 

 

 

What's more to say?

Well, that's it! I hope the list can help you when your provisioning decides to take a break again. This is - of course - nowhere near a complete list, but a result of my experience with the issue.

If you have some tips of your own to add, I absolutely welcome them! As you know, "sharing is caring". So leave a comment if you have your own little checklist for this issue or if you want to give some feedback on my blog (which I'm really looking forward to, because I am a big fan of constructive criticism and this is my first technical blog).

 

Thank you for your time and attention. I hope it was not wasted! *g*

 

Regards,

Steffi.

Beyond Import


My Swiss army knife for dealing with configurations and the development of frameworks through the years has been Beyond Import.

Beyond Import is an innovation project that I started 4 years ago, and it allows comparing 7.1/7.2 configurations.

The tool was considered a bit too powerful, as it allowed modifying configurations directly.

The current version has however been changed with this in mind: it only allows a safe operation mode, and any modification/editing of configurations has been stripped away.

 

Still, it is a powerful tool and a proven aid for finding differences between configurations, etc.

 

That said, the project remains inactive today and there is no support for it as such.

No support is planned for future versions...

 

The project is available at: https://forge.sap.corp/projects/bidm/

 

Setup

When launching Beyond Import you are asked to configure your database connection(s) and/or MMC file(s).

 

Launch.png

 

The MMC option is usually a bit slower and has less functionality. It should be possible, for instance, to compare an MMC file against a database identity store, but I would recommend actually importing the MMC file.

Also, MMC files must be complete identity store exports.

If you do not have two databases at hand, you are also fine entering the same connection for both (and, for instance, altering the configuration in IdM and just refreshing one side of the tree).

 

The Load button allows you to load a previously stored configuration setup (see also the next screen).

Basically, after filling in the connection parameters and running a test, the connection parameters will be preserved if the test is successful.

If you have problems, it might be due to an old database driver; also, the tool has not been used much on Oracle.

 

IDS.png

 

After having completed the first form, you will be asked which identity store to use. Here you can also save the connection parameters to a file for the next time you want to run this.

Now just click Analyze.

 

DeskTop.png

 

There are three main windows: the tree compare view, the log view (diff), and the inspection view. There are two options for helping with window layout: cascading and grid view.

The selected window is set as the primary one. The tool does not deal specially with dual-monitor setups, so you might have to drag windows over to the primary monitor for the layout options to work properly.

 

Tree compare view

BTN_COMPARE.png

runs the diff and adds highlighting to the configuration:

 

  • Red implies the node has a mismatch
  • Yellow implies a child node has a mismatch
  • Strikethrough implies the folder does not exist/match in the opposite tree
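The highlighting rules above can be sketched as a simple propagation over two paired trees. This is a hypothetical illustration of the logic, not Beyond Import's actual code; the dict-based tree shape is made up for the example:

```python
# Hypothetical sketch of the tree-compare highlighting rules; not the tool's code.
RED, YELLOW, STRIKE, NONE = "red", "yellow", "strikethrough", "none"

def highlight(node, other):
    """Compute {node name: color} for one tree compared against the opposite tree.

    red: the node itself mismatches; yellow: a descendant mismatches;
    strikethrough: no counterpart exists in the opposite tree.
    """
    colors = {}

    def walk(n, o):
        if o is None:  # node/folder missing on the opposite side
            colors[n["name"]] = STRIKE
            for c in n.get("children", []):
                walk(c, None)
            return True  # counts as a mismatch for the parent
        opposite = {c["name"]: c for c in o.get("children", [])}
        child_mismatch = False
        for c in n.get("children", []):
            if walk(c, opposite.get(c["name"])):
                child_mismatch = True
        if n.get("value") != o.get("value"):
            colors[n["name"]] = RED       # the node itself differs
        elif child_mismatch:
            colors[n["name"]] = YELLOW    # only something below it differs
        else:
            colors[n["name"]] = NONE
        return colors[n["name"]] != NONE

    walk(node, other)
    return colors

left = {"name": "root", "value": "cfg", "children": [
    {"name": "taskA", "value": 1},
    {"name": "taskB", "value": 2},
]}
right = {"name": "root", "value": "cfg", "children": [
    {"name": "taskA", "value": 99},
]}
colors = highlight(left, right)
# taskA differs -> red; taskB has no counterpart -> strikethrough; root -> yellow
```

In words: a node turns red when its own value differs, an ancestor turns yellow when anything below it differs, and strikethrough marks nodes with no counterpart on the other side.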

 

BTN_APPRX.png

relaxes the way tasks/jobs are compared: instead of necessarily matching on GUID, it also allows matching on names etc.

 

BTN_SYNC.PNG

when enabled, the sync button attempts to scroll to the corresponding node in the opposite tree. Sometimes you want to disable this, such as when clicking on linked objects (global scripts), as the primary object would be found instead.

 

BTN_NEXT.png Finds the next mismatch, starting from the selected node.

 

BTN_REFRESH.png The refresh buttons reload the configuration from the database.

BTN_CLEAR.PNG removes highlighting and clears the log view.

 

Log view

The compare operation logs to this view in a diff-like format. Manual compare operations are logged here as well.

This view is particularly useful when you just want a quick list of the objects that have changed.

 

Inspection view

Allows you to see the table/XML attributes that have changed:

  • Red: mismatch
  • Bold: compared
  • Grayed out: in the ignore list

 

You may set up your own ignore list from the toolbar, but there is no reload of it, so an application restart is required.

 

ScriptCompare.png

Encoded attributes (such as script definitions or job definitions) are decoded and compared if you click on the value.

On scripts you may also see [<digit>] appended; this refers to the number of usages/referrals.

 

 

Context menu

Navigating to and right-clicking a node gives you a context menu with the available options for that object type.

When comparing individual nodes you have to right-click on both sides and choose Compare node.

On scripts you can use “find usage” to show the scripts referring to the script in question.

 

FindUsageGlobaScripts.png

A fast search for objects is also integrated. Nodes always keep bidirectional references between parent and child, hence no database lookups are needed.

 

Schema

Table names are resolved from table metadata, so new columns should show up automatically; the main dependencies are on indexes and future table extensions.

Primitive data types are ignored: basically everything is compared as a string, which is essential for the XML comparison.

XML objects are extracted as XPath objects and compared directly against table objects.
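That string-based comparison strategy can be sketched roughly like this. The element names (`job`, `maxErrors`) and the helper function are hypothetical; the point is that both sides are normalized to strings before comparing, so an INT column value can still match a text node in the XML:

```python
import xml.etree.ElementTree as ET

def compare_as_strings(xml_blob, xpath, table_value):
    """Extract a value from the XML configuration via an XPath-style path and
    compare it against the corresponding table value, string to string
    (type-agnostic, as described above)."""
    node = ET.fromstring(xml_blob).find(xpath)
    xml_value = "" if node is None else (node.text or "")
    return xml_value == str(table_value)

job_xml = "<job><maxErrors>10</maxErrors></job>"
result = compare_as_strings(job_xml, "maxErrors", 10)  # INT 10 matches string "10"
```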

 

 

 

 

 

Command console

When working on large configurations you may have to increase the memory available to the JVM by launching the UI from a command console:

java -Xms1024m -Xmx1024m -jar BeyondImport.jar

or similar. (The JVM memory flags must come before -jar, or they will be passed to the application as arguments.)

 

Additionally, you can send the diff output to the command console:

java -jar BeyondImport.jar file.ext 1> log 2> errorlog

 

Here the standard output is redirected to the file log and the error output to errorlog.

The file file.ext is set up to contain the JDBC URLs and identity store IDs.

 

               

Compliant Identity Management - Don't Leave Your Virtual Doors and Windows Open


Network Security: Don’t Leave Your Virtual Doors and Windows Open

 

Imagine designing a new home. It’s likely you’d focus on the
overall layout first and then move on to the layout of each room. From there,
you’d incorporate important features, like your heating and air conditioning
systems, plumbing, and maybe a surround sound system. Maybe you’d start
selecting appliances. And of course, you’d want input on the design and décor
of your floors, walls, and ceilings.

 

 

But what if your contractor forgot to include locks on your
doors? Or used easily shattered glass for your windows? What about installing a
security system or screens to keep out pests? No matter how functional or
beautiful your home is, your investment isn’t worth much if it’s vulnerable to outside
threats.

 

 

But that’s often the case for many organizations that build
out their network organization. They design an efficient, state-of-the-art
solution with an attractive interface, but they forget a key component: network
security. In effect, they’re leaving their doors and windows open to the
internet equivalents of home burglars and pests – the hackers, cyber
terrorists, worms, and moles.

 

 

Network Security Shouldn’t Be an Afterthought

Often, security is added retroactively, when the damage is
already done. Many companies don’t recognize that they have a problem until
after their digital walls have been breached. And what’s even more dangerous is
that some may not even realize that an attack has occurred at all. Often, the
attacks are designed to be surreptitious. The longer an attack goes undetected,
the more information can be stolen.

 

A single cyber-attack can tear down what a company has spent
years building, resulting in:


  • The loss of intellectual property and
    proprietary data
  • Disruption to services for days, weeks, or
    months
  • Permanent damage to your brand loyalty and
    reputation
  • Legal costs associated with compensating
    customers for loss or identity theft
  • Compensation related to delays in meeting
    contractual obligations
  • Loss of customers to competitors
  • An increase in insurance premiums

 


So just how common is cybercrime? Both small businesses and
corporations are at risk. In my next post, I’ll talk numbers.

Sending Emails from Message Templates in Custom Tasks


Taking the blog Assignment Notification task, customization and custom messages one step further, here is an explanation of how to send emails based on the message templates editor in Web Dynpro from all different kinds of tasks.

 

The basis for this is the chapter Sending custom messages.

 

There is no real need to create your own message class for your custom emails, so first, create an email message template in Web Dynpro:

 

2014-07-07_16-05-31.png

 

Create a custom task as described in Sending custom messages. As a recommendation, put the NOTIFICATIONTASKID into the NOTIFICATION repository as a task reference, and refer to it in the job.

 

2014-07-07_16-25-43.png

 

2014-07-07_16-20-34.png

 

That's all.


Did you know?


So I've found out a few things recently...

 

As I mentioned in my SCN Status, I'm happy to say that I'll be speaking this year at d-code in Las Vegas! If you're planning to go, please do your best to attend ITM118 - SAP Identity Management – Experience from a Customer Implementation Project. I don't know the exact schedule yet, but I'll be sure to let you all know. You can find more information here.

 

While the title speaks about implementation, I'll have some good information for folks in all phases of an IDM project, with best practices based on my 10 years of experience with IDM going back to the MaXware Identity Center. As they say, the more things change, the more they stay the same, and some things about implementing IDM haven't changed much, if at all.

 

Also, did you know the name of our favorite SAP module has changed? Based on a post from Harald Nehring, we are now referred to as SAP Identity Management (SAP IDM). I like it. Much more compact and efficient.

 

So let's see, over the years it's been:

 

MaXware MetaCenter (~2003)

MaXware Identity Center (~2004)

SAP NetWeaver Identity Management (2007)

SAP Identity Management (2014)

 

Nice to see the product is still changing and maturing. There are some other things changing with regards to SAP IDM, and I'll be speaking more about that soon.

The Future of SAP IDM


I was recently made aware that SAP is planning some changes in how they are going forward with SAP IDM. SAP is moving IDM development from Norway to Bulgaria during the third quarter of this year. While it is not unusual for companies to make these changes, there are some questions and concerns that have been passed my way as someone closely identified with SAP IDM.

 

So what does this mean for IDM? Overall, SAP is planning to embed its IDM initiatives further into the cloud and analytics, which we will see in future releases of SAP IDM. One would assume that this would mean tighter integration with HANA, which should be well received in SAP-centric organizations.

What is most concerning to me is that it has proven difficult to obtain any official statements regarding the road-map or the changes in the SAP IDM team. However, Gerlinde Zibulski, Director Product Management Security at SAP HQ in Walldorf, has been most helpful in providing information about how this will affect the product. I had a chance to have a discussion with Gerlinde, who was able to answer some of the questions that were on my mind and that I've heard from the SAP IDM community.

 

  • What are the plans for development of the Attestation module? – It’s out and available in IDM 7.2 via Service Pack. This module will remain accessible via REST only as an option for those customers who do not plan to integrate with GRC.
  • What are the plans for the future of the GRC integration? – SAP remains committed to the integration of IDM and GRC. The roadmap indicates that there will be tighter integration with HANA and analytics for GRC.
  • Will SAP continue to support the non-SAP connectors and the generic connector architecture that allows for connectivity to non-SAP landscape systems? - Yes, this is a core part of SAP IDM and there are no plans to change this.
  • How do these changes affect the overall road-map? – Not at all. If you’re interested in learning more about the road-map, you can view it here. (SMP Login required)

 

Gerlinde pointed me towards the official SAP IDM road-map, which confirmed that HANA is indeed a large part of IDM's future, as is the inclusion of analytics to understand all incoming security information coming from GRC. One of the other interesting things about SAP IDM's future is its integration with the cloud. As the IT and ERP world is all about the cloud right now, it should be interesting to see what comes about.

She also confirmed to me that SAP IDM 8 is on track to be released just before d-code 2014. There are also plans in place to hire more developers in the expanded Sofia office to work on SAP IDM.

 

To be fair, SAP has done an excellent job with its recent CEI program for the purpose of obtaining feedback from customers and consultants so that they may better understand how SAP IDM is used in the field. I look forward to attending and participating in more of these sessions in the future. (I’d also be very interested in seeing a CEI for GRC, but more on that another time)

 

Finally, I would like to send my best wishes to the Trondheim Labs (formerly MaXware) team. For almost 20 years, they have been working on IDM and VDS, while assisting with the definition and execution of what Identity Management means to the SAP Landscape and the overall Business Enterprise. I have been proud to be associated with them as a co-worker and as a partner. I know whatever you work on will be vastly enhanced by your participation. I wish you all the best in your future endeavors and hope that we get to work again in the future.

 

If anyone has questions regarding SAP IDM, they may contact Gerlinde via email. Additionally, ideas for the future of SAP IDM can always be logged at the SAP IDM Idea Place.

IDM and provision queue counters & statistics gathering


IDM performance metrics and counters

 

I've made an experimental procedure that collects a number of what I think are useful counters about the provisioning queue and the state of the IdM system. It can be used to detect bottlenecks, generate performance statistics and load predictions, and it is a good utility for troubleshooting whether a system is in a halt situation or the queue is moving along. It uses STUFF, PIVOT and some other query constructs I've been testing for my own sake, and as such it may not be optimized.

 

It is also currently SQL Server only, and as I'm leaving SAP the chances that I will be porting this to Oracle, DB2 or HANA are minimal. You are most welcome to do so yourself, as well as to modify and improve it. The queries this procedure uses are mostly already documented in my other blogs, and I've also added some that read CPU usage, which I found online at SQLPerformance.wordpress.com.

 

The idea is that this procedure is run every X seconds, minutes or hours, and the result can be used to build graphs displaying what is going on (if anything) during a performance test, deployment or when troubleshooting system performance. I'll show how the results look first, then how the procedure is used. Interpreting the results is up to you and the situation you're in, but for metrics like "ready to run" and "running" it is better to have many "running" and few "ready to run".

 

This will be one of my very last blog posts here on SDN and I hope you liked it :-)

 

A test scenario and example data

 

I have a test job that runs 1000 entries through an obstacle course of conditional, switch and ordered tasks, with some child tasks and wait-for functions thrown in for good measure. It takes about 4 minutes to complete, so during the test I gathered statistics every 5 seconds using this little statement, which I ran in SQL Server Management Studio:

 

DECLARE @STOPTIME DATETIME
DECLARE @INTERVAL VARCHAR(8)
SET @STOPTIME = '2014-08-06 14:50:00'
SET @INTERVAL = '00:00:05'
WHILE GETDATE() < @STOPTIME
BEGIN
  BEGIN TRANSACTION
  exec mc_ProvQStats 0,0,0 -- Parameters for gather stats, no listing
  COMMIT
  WAITFOR DELAY @INTERVAL
END

You could also execute the procedure from a job scheduled to run every X seconds, but then you depend on the system actually running and having the capacity to run the job at the specific intervals during load, which might not work during a performance test or a halt situation.

 

The result produces "interesting" stuff like this when put into Excel:

statsTestQueueProcessing.png

 

Here is a sample run where I killed the dispatcher in the middle, showing the stats flat-lining as nothing was processed for a minute or two, and the active dispatcher count changing from 1 to 0 in the graph for the active dispatcher count metric...

statsTestQueueProcessingDeadDispatcher.png
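If you prefer scripting over Excel, the collected rows can be pivoted into one time series per metric before charting. A minimal sketch, assuming the (mId, mDT, mItem, mCount) rows have already been fetched from mc_provqueuestats with your database driver of choice (the sample rows below are made up):

```python
from collections import defaultdict

def pivot_stats(rows):
    """Pivot (mId, mDT, mItem, mCount) rows from mc_provqueuestats into one
    time series per metric, ready for plotting."""
    series = defaultdict(list)
    for mid, mdt, mitem, mcount in sorted(rows, key=lambda r: r[0]):
        if mcount is not None:  # skip metrics that only carry a value/datetime
            series[mitem].append((mdt, mcount))
    return dict(series)

# Hypothetical sample: three collections, 5 seconds apart
rows = [
    (1, "14:39:55", "Provision queue size", 120),
    (1, "14:39:55", "Dispatchers active", 1),
    (2, "14:40:00", "Provision queue size", 80),
    (2, "14:40:00", "Dispatchers active", 1),
    (3, "14:40:05", "Provision queue size", 80),
    (3, "14:40:05", "Dispatchers active", 0),  # dispatcher killed: graph flatlines
]
series = pivot_stats(rows)
```

Each series can then be fed straight to whatever plotting library you prefer.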

 

Using the procedure

 

mc_ProvQStats <OP>, <LISTLEVEL>, <RETURN ROWS>

 

OP = 0 - Gather statistics only, return no data

OP = 1 - Gather statistics, return the values

OP = 2 - Don't gather statistics, return data as specified by parameters LISTLEVEL and RETURN ROWS

 

LISTLEVEL = 0 - return nothing (mc_provqueuestats_desc.mShow = 0)

LISTLEVEL = 1 - return entries where mc_provqueuestats_desc.mShow <= 1

LISTLEVEL = 2 - return entries where mc_provqueuestats_desc.mShow <= 2

 

RETURN ROWS = Number of rows to return, newest collection first.

 

Examples:

Just collect stats: exec mc_ProvQStats 0,0,0

Collect stats and return them: exec mc_ProvQStats 1,0,0

List the last 10 collections: exec mc_ProvQStats 2,1,10

 

statsUsageExample.png

 

 

 

The counters and Metrics collected

 

This is the list of metrics and counters it collects and can display. It is stored in the table mc_provqueuestats_desc.

 

Starting analysis: The datetime when this analysis-run was started and the parameters used. Not useful at the moment, but I had plans :-)
Host CPU usage: Total CPU usage on the host computer. -1 indicates a missing grant (GRANT VIEW SERVER STATE TO mxmc_rt)
SQL Server CPU usage: CPU usage by the SQL process. -1 indicates a missing grant (GRANT VIEW SERVER STATE TO mxmc_rt)
Provision queue size: The total number of all entries in the queue
Provision queue unique audits: The number of unique audits in the queue
Task execution delta: Number of tasks executed/added to the ext audit since the previous measurement
Ordered tasks in queue: The total number of ordered tasks in the queue
Ordered tasks ready to run in queue: The total number of ordered tasks ready to run in the queue
Unordered tasks in queue: The total number of unordered tasks in the queue
Unordered tasks ready to run in queue: The total number of unordered tasks ready to run in the queue
Conditional tasks in queue: The total number of conditional tasks in the queue
Conditional tasks ready to run in queue: The total number of conditional tasks ready to run in the queue
Switch tasks in queue: The total number of switch tasks in the queue
Switch tasks ready to run in queue: The total number of switch tasks ready to run in the queue
Approval tasks in queue: The total number of approval tasks in the queue
Approval tasks ready to run in queue: The total number of approval tasks ready to run in the queue
Attestation tasks in queue: The total number of attestation tasks in the queue
Attestation tasks ready to run in queue: The total number of attestation tasks ready to run in the queue
Action tasks in queue: Total number of tasks in the queue
Action tasks ready to run in queue: Total number of tasks ready to run in the queue
Jobs ready to run: The number of jobs that are ready to run
Jobs running: The number of currently running jobs
Provision jobs running: The number of currently running provision jobs
Provision jobs ready to run: The number of provision jobs ready to run in the queue
Runtimes allowed by system: The maximum number of runtimes allowed. 0 means no total system limit, but it can still be limited per dispatcher
Runtimes active: The number of jobs marked as running
Threshold violation delta: The number of new entries in the execution threshold log since the previous statistic collection
Jobs in error state: The number of jobs that are in error state and cannot be run
Dispatchers active: The number of dispatchers reported active within the last <reload frequency> seconds
Dispatchers inactive: The number of dispatchers not reported active within the last <reload frequency> seconds
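The two delta metrics ("Task execution delta" and "Threshold violation delta") are computed by remembering the maximum timestamp from the previous collection and counting rows that are newer. A hypothetical Python sketch of that logic, mirroring what the procedure does in SQL:

```python
def delta_since(previous_max, timestamps):
    """Count entries newer than the max timestamp recorded at the previous
    collection; on the first run (no previous max) the delta is 0."""
    if previous_max is None:
        return 0
    return sum(1 for ts in timestamps if ts > previous_max)

# Hypothetical audit timestamps (any sortable representation works)
audit_times = ["14:40:01", "14:40:03", "14:40:07"]
first = delta_since(None, audit_times)         # first collection
second = delta_since("14:40:02", audit_times)  # entries that arrived since
```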

 

The mc_provqueuestats_desc table

 

mItem: The item name
mType: The item data type (C=Counter/INT, DT=DateTime, V=Varchar)
mDesc: Description of the item
mShow: NULL = don't show when listing results; numbers indicate the report level to include it on
mOrder: (identity) column order in the listing

 

Sample data:

mItem | mType | mDesc | mShow | mOrder
Starting analysis | DT | The datetime when this analysis-run was started | 1 | 1
Host CPU usage | C | Total CPU usage on the host computer. -1 indicates missing grant | 1 | 2
SQL Server CPU usage | C | CPU usage by the SQL process. -1 indicates missing grant | 1 | 3
Max auditid value | C | The maximum auditid value at the time of measurement | NULL | 4
Provision queue size | C | The total number of all entries in the queue | 1 | 6

 

The mc_provqueuestats table

 

The metric data is collected in this table and it is also quite simple:

 

mId: ID for the metric collection run
mDT: DateTime for the collection of the value
mItem: Item name for the collected metric - MUST MATCH mc_provqueuestats_desc.mItem
mValue: Optional VARCHAR value collected
mCount: Optional INT value collected
mDateTime: Optional DATETIME value collected

 

Sample data:

mId | mDT | mItem | mValue | mCount | mDateTime
1 | 2014-08-06 14:39:55.010 | Starting analysis | 0:0 | NULL | 2014-08-06 14:39:55.010
1 | 2014-08-06 14:39:55.063 | Host CPU usage | NULL | 10 | NULL
1 | 2014-08-06 14:39:55.063 | Provision queue size | NULL | 0 | NULL
2 | 2014-08-06 14:40:00.070 | Starting analysis | 0:0 | NULL | 2014-08-06 14:40:00.070
2 | 2014-08-06 14:40:00.110 | Host CPU usage | NULL | 10 | NULL
2 | 2014-08-06 14:40:00.110 | Provision queue size | NULL | 0 | NULL
3 | 2014-08-06 14:40:05.110 | Starting analysis | 0:0 | NULL | 2014-08-06 14:40:05.110
3 | 2014-08-06 14:40:05.150 | Host CPU usage | NULL | 10 | NULL
3 | 2014-08-06 14:40:05.150 | Provision queue size | NULL | 0 | NULL

(Note that each collection (mId) will really have about 33 rows of data)

 

 

The procedures, tables and optional indexes and grants

 

Optional indexes

 

Some of the metrics require additional indexes to work. If you enable these, make sure that you change line 12 in the procedure mc_ProvQStats from

SET @P_GOTINDEXES = 0

To

SET @P_GOTINDEXES = 1

 

Don't do this change without the following indexes as that will cause some of the metrics collection queries to do tablescans, which are bad.

 

------------------------------------
-- ADDITIONAL INDEXES             --
-- Update procedure mc_ProvQStats --
-- SET @P_GOTINDEXES = 1 to use   --
------------------------------------
CREATE NONCLUSTERED INDEX [IX_MXI_EXT_AUDIT_DATE] ON [dbo].[MXP_Ext_Audit]
(  [Aud_datetime] ASC
)
GO
CREATE NONCLUSTERED INDEX [IX_MC_EXEC_STAT_DT] ON [dbo].[mc_exec_stat]
(  [mcDateTime] ASC
)
GO

 

CPU Usage statistics, additional grant

 

To get the SQL Server host CPU statistics, the procedure must run as a user that has been granted VIEW SERVER STATE. So run the following for oper, RT or whichever logon you're using:

 

GRANT VIEW SERVER STATE TO mxmc_rt

 

Creating the tables

 

if exists (select * from sysobjects where id = object_id(N'[dbo].[mc_provqueuestats]') and OBJECTPROPERTY(id, N'IsTable') = 1)
    drop table [dbo].[mc_provqueuestats]
GO
CREATE TABLE [dbo].[mc_provqueuestats](
  [mId] [int] NOT NULL,
  [mDT] [datetime] NULL,
  [mItem] [varchar](255) NULL,
  [mValue] [varchar](512) NULL,
  [mCount] [int] NULL,
  [mDateTime] [datetime] NULL
) ON [PRIMARY]
GO
GRANT SELECT, INSERT, DELETE ON dbo.mc_provqueuestats TO mxmc_rt_role
GO
CREATE NONCLUSTERED INDEX [IX_MCPROVQSTAT_IDITDT] ON [dbo].[mc_provqueuestats]
(  [mId] ASC,  [mItem] ASC,  [mDT] ASC
)
GO
if exists (select * from sysobjects where id = object_id(N'[dbo].[mc_provqueuestats_desc]') and OBJECTPROPERTY(id, N'IsTable') = 1)
    drop table [dbo].[mc_provqueuestats_desc]
GO
CREATE TABLE [dbo].[mc_provqueuestats_desc](
  [mItem] [varchar](255) NULL,
  [mType] [varchar](2) NULL,
  [mDesc] [varchar](512) NULL,
  [mShow] [int] NULL,
  [mOrder] [int] IDENTITY(1,1) NOT NULL
) ON [PRIMARY]
GO
GRANT SELECT ON dbo.mc_provqueuestats_desc TO mxmc_rt_role
GO
CREATE NONCLUSTERED INDEX [IX_MCPROVQSTATDESC_ITDE] ON [dbo].[mc_provqueuestats_desc]
(  [mItem] ASC,  [mDesc] ASC,  [mShow] ASC
)
GO
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Starting analysis','DT','The datetime when this analysis-run was started','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Host CPU usage','C','Total CPU usage on the host computer. -1 indicates missing grant, GRANT VIEW SERVER STATE TO mxmc_rt','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('SQL Server CPU usage','C','CPU usage by the SQL process. -1 indicates missing grant, GRANT VIEW SERVER STATE TO mxmc_rt','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Max auditid value','C','The maximum auditid value at the time of measurement',NULL)
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc)   VALUES ('Max extaudit datetime value','DT','The maximum datetime in the extended audit at the time of measurement')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Provision queue size','C','The total number of all entries in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Provision queue unique audits','C','The number of unique audits in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Task execution delta','C','Number of tasks executed/added to ext audit previous measurement','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Ordered tasks in queue','C','The total number of ordered tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Ordered tasks ready to run in queue','C','The total number of ordered tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Unordered tasks in queue','C','The total number of unordered tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Unordered tasks ready to run in queue','C','The total number of unordered tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Conditional tasks in queue','C','The total number of conditional tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Conditional tasks ready to run in queue','C','The total number of conditional tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Switch tasks in queue','C','The total number of switch tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Switch tasks ready to run in queue','C','The total number of switch tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Approval tasks in queue','C','The total number of approval tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Approval tasks ready to run in queue','C','The total number of approval tasks ready to run in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Attestation tasks in queue','C','The total number of attestation tasks in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Attestation tasks ready to run in queue','C','The total number of attestation tasks ready to run in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Action tasks in queue','C','Total number of tasks in the queue','2')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Action tasks ready to run in queue','C','Total number of tasks ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Jobs ready to run','C','The number of jobs that are ready to run','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Jobs running','C','The number of currently running jobs','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Provision jobs ready to run','C','The number of provision jobs ready to run in the queue','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Provision jobs running','C','The number of currently running provision jobs','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Runtimes allowed by system','C','The maximum number of runtimes allowed. 0 means no total system limit but it can still be limited per dispatcher','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Runtimes active','C','The number of jobs marked as running','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc)   VALUES ('Threshold violation max timestamp','DT','The highest datatime of execution threshold logged')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Threshold violation delta','C','The number of new entries in the execution threshold log since previous statistic collection','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Jobs in error state','C','The number of jobs that are in error state and can not be run','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Dispatchers active','C','The number of dispatchers reported active the last <reload frequency> seconds','1')
INSERT INTO [mc_provqueuestats_desc] (mItem,mType,mDesc,mShow) VALUES ('Dispatchers inactive','C','The number of dispatchers not reported active the last <reload frequency> seconds','2')
GO

Creating the procedure

 

CREATE PROCEDURE [dbo].[mc_ProvQStats]
  @P_OP int,        -- 0=Just add stats to table, 1=Add stats and list collected data, 2=List previously collected data
  @P_LISTLEVEL INT, -- 0=Don't list, 1=basic, 2=all
  @P_LISTROWS  INT  -- Number of rows to list
AS  DECLARE   @L_C INT,@L_I INT, @L_V VARCHAR(255), @L_DT DATETIME, @L_MID INT, @L_QUEUESIZE INT, @L_QS_ATTST INT, @L_QS_APPR INT, @L_QS_SW INT, @L_QS_CDTL INT, @L_QS_UNORD INT, @L_QS_ORDRD INT, @L_QS_ACT INT,  @L_AUDITID INT, @L_CHILDAUDIT INT, @L_CHILDAUDITINQUEUE INT, @L_CHILDAUDITMSG VARCHAR(255),@L_LEVEL VARCHAR(255),  @L_QAUDITREF INT,@L_QACTIONID INT,@L_QACTIONTYPE INT,@L_QMSKEY INT,@L_QWAITCOUNT INT,@L_QEXECTIME DATETIME,@L_QUPDTIME DATETIME,@L_QREPOSITORYID INT,@L_QMSG VARCHAR(255),  @L_COLUMNS NVARCHAR(MAX), @L_QUERY NVARCHAR(MAX), @L_LIMITER VARCHAR(255), @ts_now bigint, @L_HOSTCPU INT, @L_SQLCPU INT, @P_GOTINDEXES INT  SET @P_GOTINDEXES = 0    SET NOCOUNT ON  SET @L_MID = (SELECT ISNULL(MAX(MID),0)+1 FROM mc_provqueuestats WITH(NOLOCK))  IF @P_OP NOT IN (0,1,2) AND @P_LISTLEVEL NOT IN (1,2)  BEGIN  print 'Illegal operation ' + convert(varchar,@P_OP) + ' - 0=List only)' --, 1 (retry without provisioning), or 2 (retry with provisioning))'  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mValue) VALUES (@L_MID,getdate(),'Invalid parameter(s)',cast(@P_OP as varchar)+':'+cast(@P_LISTROWS as varchar))  RETURN  END  IF @P_OP IN (0,1)  BEGIN  -----------------------  -- GATHER STATISTICS --  -----------------------  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mValue,mDateTime) VALUES (@L_MID,getdate(),'Starting analysis',cast(@P_OP as varchar)+':'+cast(@P_LISTROWS as varchar),getdate())  --------------------------------  -- SETTINGS AND MISC COUNTERS --  --------------------------------  -- Get max allowed runtimes  SET @L_C = (SELECT CAST(VARVALUE as INT) FROM MC_GLOBAL_VARIABLES WITH(NOLOCK) WHERE VARNAME = 'MX_MAX_CONCURRENT_RUNTIME_ENGINES')  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Runtimes allowed by system',@L_C)  -- Get total running runtimes  SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WITH(NOLOCK) WHERE STATE=2)  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Runtimes active',@L_C)  -- 
THESE REQUIRE ADDITIONAL INDEXES...

IF @P_GOTINDEXES=1
BEGIN
  --Execution threshold violation delta
  SET @L_DT = (SELECT mDateTime FROM mc_provqueuestats WITH(NOLOCK) WHERE mid = @L_MID -1 AND mItem = 'Threshold violation max timestamp')
  IF @L_DT IS NOT NULL SET @L_C = (SELECT COUNT(mcDateTime) FROM mc_exec_stat WITH(NOLOCK) WHERE mcDateTime > @L_DT)
  ELSE SET @L_C = 0
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Threshold violation delta',@L_C)
  --Execution threshold violation max timestamp
  SET @L_DT = (SELECT MAX(mcDateTime) FROM mc_exec_stat)
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mDateTime) VALUES (@L_MID,getdate(),'Threshold violation max timestamp',@L_DT)
  -- Get current max ext-audit time
  SET @L_DT = (SELECT MAX(AUD_DATETIME) FROM MXP_EXT_AUDIT WITH(NOLOCK))
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mDateTime) VALUES (@L_MID,getdate(),'Max extaudit datetime value',@L_DT)
  SET @L_DT = (SELECT mDateTime FROM mc_provqueuestats WITH(NOLOCK) WHERE mid = @L_MID -1 AND mItem = 'Max extaudit datetime value')
  IF @L_DT IS NOT NULL SET @L_C = (SELECT COUNT(AUD_DATETIME) FROM MXP_EXT_AUDIT WITH(NOLOCK) WHERE AUD_DATETIME > @L_DT)
  ELSE SET @L_C = 0
  INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Task execution delta',@L_C)
END

-- JOBS IN ERROR STATE
SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WITH(NOLOCK) WHERE STATE = -1)
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Jobs in error state',@L_C)
-- JOBS READY TO RUN
SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WITH(NOLOCK) WHERE STATE = 1 AND PROVISION = 0 AND ScheduledTime < getdate())
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Jobs ready to run',@L_C)
-- RUNNING JOBS
SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WITH(NOLOCK) WHERE STATE = 2 AND PROVISION = 0)
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Jobs running',@L_C)
-- PROVISION JOBS READY TO RUN
SET @L_C = (SELECT COUNT(*) FROM MC_JOBS J WITH(NOLOCK) WHERE STATE=1 AND PROVISION = 1 AND JOBGUID IN
  (SELECT JOBGUID FROM MXP_TASKS T WITH(NOLOCK) WHERE T.TASKID IN
    (SELECT DISTINCT(P.ActionID) FROM MXP_PROVISION P WITH(NOLOCK) WHERE P.ActionType=0 AND P.State=2)))
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Provision jobs ready to run',@L_C)
-- RUNNING PROVISION JOBS
SET @L_C = (SELECT COUNT(*) FROM MC_JOBS WHERE STATE = 2 AND PROVISION = 1)
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Provision jobs running',@L_C)
-- DISPATCHERS APPEARING TO BE INACTIVE. Does not refresh exactly at reload time, so double it
SET @L_C = (SELECT count(*) from MC_Dispatcher WITH(NOLOCK) WHERE last_visited < dateadd(ss,(RELOAD_FREQUENCY*-2),getdate()))
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Dispatchers inactive',@L_C)
-- DISPATCHERS APPEARING TO BE ACTIVE.
SET @L_C = (SELECT count(*) from MC_Dispatcher WITH(NOLOCK) WHERE last_visited > dateadd(ss,(RELOAD_FREQUENCY*-2),getdate()))
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Dispatchers active',@L_C)

-- CPU Usage
BEGIN TRY
  SELECT @ts_now = cpu_ticks / (cpu_ticks/ms_ticks) FROM sys.dm_os_sys_info
  SELECT top 1
    @L_HOSTCPU = CASE WHEN system_cpu_utilization_post_sp2 IS NOT NULL THEN system_cpu_utilization_post_sp2 ELSE system_cpu_utilization_pre_sp2 END, -- AS system_cpu_utilization,
    @L_SQLCPU  = CASE WHEN sql_cpu_utilization_post_sp2 IS NOT NULL THEN sql_cpu_utilization_post_sp2 ELSE sql_cpu_utilization_pre_sp2 END -- AS sql_cpu_utilization
  FROM
  (
    SELECT record.value('(Record/@id)[1]', 'int') AS record_id, DATEADD (ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime,
    100-record.value('(Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_cpu_utilization_post_sp2,
    record.value('(Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_utilization_post_sp2,
    -- NB: the 'Schedluer...' spelling below is intentional (node name used by pre-SP2 builds)
    100-record.value('(Record/SchedluerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_cpu_utilization_pre_sp2,
    record.value('(Record/SchedluerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_utilization_pre_sp2
    FROM (SELECT timestamp, CONVERT (xml, record) AS record FROM sys.dm_os_ring_buffers
          WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR' AND record LIKE '%<SystemHealth>%') AS t
  ) AS t ORDER BY record_id desc
END TRY
BEGIN CATCH
  SET @L_HOSTCPU = -1
  SET @L_SQLCPU = -1
END CATCH
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Host CPU usage',@L_HOSTCPU)
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'SQL Server CPU usage',@L_SQLCPU)

-----------------
-- PROVQ STUFF --
-----------------
-- Get current max auditid
SET @L_C = ( SELECT MAX(AUDITID) FROM MXP_AUDIT WITH(NOLOCK) )
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Max auditid value',@L_C)
SET @L_QUEUESIZE = ( SELECT COUNT(*) FROM MXP_PROVISION WITH(NOLOCK) )
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Provision queue size',@L_QUEUESIZE)
SET @L_C = ( SELECT COUNT(DISTINCT(AUDITREF)) FROM MXP_PROVISION WITH(NOLOCK) )
INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Provision queue unique audits',@L_C)

DECLARE C_PQ_ALL CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
  SELECT COUNT(P.MSKEY) C,T.ActType ActionType FROM MXP_ActionType T WITH (NOLOCK)
  LEFT OUTER JOIN MXP_PROVISION P WITH (NOLOCK) ON T.ActType=P.ACTIONTYPE AND P.MSKEY IS NOT NULL GROUP BY T.ActType
OPEN C_PQ_ALL
FETCH NEXT FROM C_PQ_ALL INTO @L_QUEUESIZE, @L_I
WHILE (@@Fetch_status = 0)
BEGIN
  IF @L_I =  0 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Action tasks in queue',@L_QUEUESIZE)
  IF @L_I = -1 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Ordered tasks in queue',@L_QUEUESIZE)
  IF @L_I = -2 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Unordered tasks in queue',@L_QUEUESIZE)
  IF @L_I = -3 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Conditional tasks in queue',@L_QUEUESIZE)
  IF @L_I = -4 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Switch tasks in queue',@L_QUEUESIZE)
  IF @L_I = -5 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Approval tasks in queue',@L_QUEUESIZE)
  IF @L_I = -6 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Attestation tasks in queue',@L_QUEUESIZE)
  FETCH NEXT FROM C_PQ_ALL INTO @L_QUEUESIZE, @L_I
END
CLOSE C_PQ_ALL
DEALLOCATE C_PQ_ALL

DECLARE C_PQ_ALL CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
  SELECT COUNT(P.MSKEY) C,T.ActType ActionType FROM MXP_ActionType T WITH (NOLOCK)
  LEFT OUTER JOIN MXP_PROVISION P WITH (NOLOCK) ON T.ActType=P.ACTIONTYPE AND P.STATE=2 AND P.MSKEY IS NOT NULL GROUP BY T.ActType
  --SELECT COUNT(*) c ,ActionType FROM MXP_PROVISION WITH(NOLOCK) WHERE STATE=2 GROUP BY ActionType
OPEN C_PQ_ALL
FETCH NEXT FROM C_PQ_ALL INTO @L_QUEUESIZE, @L_I
WHILE (@@Fetch_status = 0)
BEGIN
  IF @L_I =  0 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Action tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -1 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Ordered tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -2 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Unordered tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -3 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Conditional tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -4 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Switch tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -5 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Approval tasks ready to run in queue',@L_QUEUESIZE)
  IF @L_I = -6 INSERT INTO mc_provqueuestats (mid,mdt,mitem,mCount) VALUES (@L_MID,getdate(),'Attestation tasks ready to run in queue',@L_QUEUESIZE)
  FETCH NEXT FROM C_PQ_ALL INTO @L_QUEUESIZE, @L_I
END
CLOSE C_PQ_ALL
DEALLOCATE C_PQ_ALL
END

---------------------
-- LIST STATISTICS --
---------------------
IF @P_OP IN (1,2) AND @P_LISTLEVEL > 0 -- List current or all
BEGIN
  SET @L_QUERY = 'SELECT @L_COLUMNS = STUFF((SELECT '','' + QUOTENAME(mItem) FROM [mc_provqueuestats_desc] where mShow <= '+CAST(@P_LISTLEVEL as varchar)+' order by mOrder FOR XML PATH(''''), TYPE).value(''.'', ''NVARCHAR(MAX)''),1,1,'''') '
  --print @L_QUERY
  exec sp_executeSQL @L_QUERY,N'@L_COLUMNS NVARCHAR(MAX) OUT',@L_COLUMNS OUT
  --PRINT @L_COLUMNS
  IF @P_OP = 1 SET @L_LIMITER = 'WHERE mId = '+cast(@L_MID as varchar)+' AND'
  ELSE SET @L_LIMITER = ' WHERE '
  set @L_QUERY = 'SELECT TOP '+CAST(ISNULL(@P_LISTROWS,1) AS VARCHAR)+' mId, ' + @L_COLUMNS + ' FROM
          (select S.mId,S.mItem,
           case
             when D.mType = ''V''  then cast(mValue as varchar)
             when D.mType = ''C''  then cast(mCount as varchar)
             when D.mType = ''DT'' then convert(varchar,mDateTime,120)
             else ''0''
           end as mValue
           from mc_provqueuestats S left outer join mc_provqueuestats_desc D on S.mItem = D.mItem '+ISNULL(@L_LIMITER,' ')+' D.mShow <= '+CAST(ISNULL(@P_LISTLEVEL,0) as varchar)+'
          ) main
          pivot
          ( max(mValue)
            for mItem in (' + @L_COLUMNS + ')
          ) piv order by mid desc';
  print @L_QUERY
  execute(@L_QUERY);
END
GO

Queries and useful(?) stuff, infodump




See The Future of SAP IDM by Matt Pollicove for more information, but in short: IdM is moving from my office in Trondheim to Sofia. This entry is simply an infodump of some of the IdM-related and non-IdM-related queries I've thought about using in future blogs, have used in support cases, or have found elsewhere online and thought would be useful at a later time. It might be a bit heavy on execution-plan stuff and not that useful for "plain" IdM solution implementation, but I've seen some of you getting pretty advanced, so...


Please excuse the lack of (or in some cases misleading) descriptions, formatting, testing, and organization. I've tried to keep this limited to harmless queries, but don't use them if you don't understand what they do, or at least try them in a non-critical system while figuring them out.


Thank you all for the feedback and the great learning experience this community has offered!


Best regards, thanks and goodbye,

Per "Chris" Christian Krabsetsve

:-)




The blogs that are already published but still in progress will not be finished by me. Anyone who wants to take the texts, in part or in whole, to finalize or improve them is welcome to do so.

IDM SQL Basics #1: Queries against the Identity Store

IdM SQL Basics #2: Locating problem queries

IDM SQL Basics #3: Testing and improving queries


On queue processing, or the lack thereof. Part #1

On queue processing, or the lack thereof. Part #2

On queue processing, what (just) happened? Part #3


IDM and provision queue counters & statistics gathering


Misc queries; they might work on Oracle, SQL Server, or both.

 

Provisioning/queue

Actions ready to run, grouped by actiontype

select A.Name as actiontype,COUNT(mskey) as availableEntries from MXP_Provision P, mxp_actiontype A
where P.State=2 and A.ActType=P.ActionType group by A.Name

Provisioning actions(jobs) that could be running but are not

select T.TaskName,COUNT(mskey) as availableEntries from MXP_Tasks T,MXP_Provision P where P.ActionID=T.TaskID and
P.ActionType=0 and P.state=2 and T.JobGuid not in (select JobGuid from mc_jobs where State=1)
group by taskname

Queue entries grouped by taskname

select t.taskname,count(p.mskey) from MXP_Provision P with(nolock) , mxp_tasks T with(nolock)
where T.taskid = P.actionid group by T.taskname

Actions ready to run grouped by action type name,taskid

select T.TaskName ,A.Name as actiontype,COUNT(mskey) as available
from MXP_Provision P, mxp_actiontype A, MXP_Tasks T
where P.State=2 and A.ActType=P.ActionType and T.taskid=P.actionid group by T.taskname,A.Name

Running provision jobs and regular jobs

select case when Provision=1 then 'provision' when provision=0 then 'job' else 'huh' end "type", count(jobid) running
from mc_jobs where state=2 group by provision

Provision queue entries ready to process/run grouped by actiontype and name

select T.Name,COUNT(mskey) as available from MXP_Provision P, mxp_actiontype T 
where P.State=2 and T.ActType=P.ActionType group by T.Name

Provision queue entries grouped by taskname and state name

select count(P.mskey),t.taskname,s.name from mxp_provision P, mxp_Tasks T, mxp_state S
where S.StatID=P.State and T.taskid = P.ActionID  group by T.taskname, S.name

Task usage, list number of times a task has been executed since xx.yy.zz

select T.taskname,COUNT(A.aud_ref) from MXP_Ext_Audit A, MXP_Tasks T
where T.TaskID = A.aud_task and A.Aud_datetime > '2012-06-20'
group by T.TaskName
order by COUNT(A.aud_ref) desc

Display number of jobs and provision jobs available for a dispatcher

SET NOCOUNT ON
declare @sj int, @jj int, @sp int, @jp int, @dn varchar(20), @dv varchar(20)
SET @dn = 'dispatcher name' -- dispatcher name
SET @dv = '8.0' -- version, not important
exec mc_dispatcher_check @dn,@dv,1,1,1,1,@sj output,@jj output,@sp output,@jp output
print 'Jobs and provisioning jobs available for '+@dn
print 'WinRT jobs:'+cast(@sj as varchar)
print 'javaRT jobs:'+cast(@jj as varchar)
print 'WinRT prov jobs:'+cast(@sp as varchar)
print 'JavaRT prov jobs:'+cast(@jp as varchar)

 

 

Job processing


Use the logs to summarize the number of entries a job has processed, average processing time, and more.

select J.name,sum(L.Num_Adds) adds,SUM(L.NUM_MODS) mods, SUM(timeused) tUsed, (sum(L.Num_Adds)+SUM(L.NUM_MODS))/SUM(timeused) entryPerSec
from mc_logs L with (nolock), mc_jobs J with (nolock) where J.JobId=L.JobId and TIMEUSED > 0 and L.LOGDATE > '2012-06-20'
group by J.name
order by (sum(L.Num_Adds)+SUM(L.NUM_MODS))/SUM(timeused) desc

 

select LA.jobname,LA.JobId,case when J.provision = 1 then 'action' else 'job' end as "jobtype",
sum(LA.TotalEntries) totalEntries,sum(LA.TimeUsed) totalTime
,round(cast(sum(LA.TotalEntries) as float)/cast(sum(LA.TimeUsed) as float),2) entriesPerSecond
from mcv_logall LA, mc_jobs J
where LA.JobId = J.JobId
group by LA.jobname,LA.jobid, J.Provision

 

select jobname,JobId,sum(TotalEntries) totalEntries,sum(TimeUsed) totalTime
,Round(sum(TotalEntries) /sum(TimeUsed),2) entriesPerSecond
from mcv_logall
group by jobname,jobid
order by entriesPerSecond

 

select jobname,JobId,sum(TotalEntries) totalEntries,sum(TimeUsed) totalTime,
round(cast(sum(TotalEntries) as float)/cast(sum(TimeUsed) as float),2) entriesPerSecond
from mcv_logall group by jobname,jobid
order by round(cast(sum(TotalEntries) as float)/cast(sum(TimeUsed) as float),2)  asc

 

 

 

Provisioning audit

 

Some audit stuff (Oracle). Execution time for task X executed after date Y

select taskid,to_char(posteddate,'YYYY-MM-DD') "date",count(auditid) "numExecs",AVG(round(statusdate-posteddate,2)*24*60*60) "avgTimeToComplete"
from mxp_audit
where taskid = 20 and provstatus > 1000 and postedDate > to_date('2014-01-01','YYYY-MM-DD')
group by taskid,to_char(posteddate,'YYYY-MM-DD')
order by taskid,to_char(posteddate,'YYYY-MM-DD')

Some audit stuff (SQL Server). Execution time for task X executed after date Y

select avg(datediff(SS,posteddate,statusdate)) Seconds,cast(posteddate as date) Date
from MXP_AUDIT
where taskid = 1406 and posteddate > '2013-06-01'
group by cast(posteddate as date)
order by cast(posteddate as date)

Some audit stuff (Oracle). Executed tasks for user X

select A.auditid, A.AuditRoot, A.RefAudit auditParent, A.userid, A.StartedBy, A.taskid, T.taskname, A.mskey, A.PostedDate, A.StatusDate, A.provstatus, A.LastAction, A.msg
from MXP_AUDIT A, MXP_Tasks T where A.TaskId = T.TaskID
and A.msKey = (select mcmskey from idmv_entry_simple where mcMskeyValue = 'ADMINISTRATOR')
order by auditroot,RefAudit

Some audit stuff (SQL Server), mainly listing seconds from a task being posted to its completion

select taskid,convert(varchar(10),posteddate,20) Date,count(auditid) as numExecs,avg(datediff(ss,A.posteddate,A.statusdate)) AvgTimeToComplete
from mxp_audit A
where taskid = 1 and posteddate > '2014-02-01' and ProvStatus > 1000
group by taskid,convert(varchar(10),posteddate,20)
order by taskid,convert(varchar(10),posteddate,20)

Some audit stuff (SQL Server), mainly listing seconds from a task being posted to its completion, incl. count of subtasks

select auditid,postedDate,taskid,datediff(ss,A.posteddate,A.statusdate) seconds,   (select count(aud_ref) from MXP_Ext_Audit AE with(nolock) where a.AuditID=ae.Aud_ref)
from mxp_audit A with(nolock) order by posteddate

Average time between posting and completion of task between two dates

select avg(datediff(ss,A.posteddate,A.statusdate)) seconds
from mxp_audit A where postedDate >='2013-12-05 16:49:35' and postedDate <'2013-12-05 16:49:40'

Number of executions per taskname after a specified date

select count(a.aud_ref), t.taskname, convert(varchar,A.aud_datetime,20)
from mxp_ext_audit A with(nolock), mxp_tasks T with(nolock)
where a.Aud_task = T.TaskID and (aud_datetime > '2013-11-18 12:00')
group by t.taskname,convert(varchar,A.aud_datetime,20)

List the number of root tasks started in minute intervals over a period of time

-- Create helper table for intervals
declare @intervaltbl table(st datetime,et datetime)
declare @it datetime
declare @interval int
set @interval = 5 -- minutes
-- Interval Start time
set @it = '2014-05-27 10:00:00'
-- Interval End time
while @it <= '2014-05-27 12:30:00'
begin
   insert into @intervaltbl values (@it,dateadd(mi, @interval, @it))
   set @it = dateadd(mi, @interval, @it)  -- 5 minute interval, change as needed
end
-- count per interval
select it.st startinterval, it.et endinterval, count(*) troottaskstarted
from @intervaltbl it, MXP_AUDIT A with(nolock)
where A.PostedDate between it.st and it.et
group by it.st,it.et

List tasks and the number of execs per task in 10 minute intervals for a timeframe

select count(aud_ref) taskexecutions, t.taskname, left(CONVERT(varchar,aud_datetime,21),15) 
from mxp_ext_audit AE with(nolock), MXP_Tasks T with(nolock)
where T.taskid = AE.aud_task 
and   Aud_datetime between '2013-04-19 00:01' and '2013-04-19 03:30'
group by T.taskname, left(CONVERT(varchar,aud_datetime,21),15)


List how many times each individual task was run in a time-interval

-- Create helper table for intervals
declare @intervaltbl table(st datetime,et datetime)
declare @it datetime
declare @interval int
set @interval = 5 -- minutes
-- Interval Start time
set @it = '2014-05-27 10:00:00'
-- Interval End time
while @it <= '2014-05-27 12:30:00'
begin
   insert into @intervaltbl values (@it,dateadd(mi, @interval, @it))
   set @it = dateadd(mi, @interval, @it)  -- 5 minute interval, change as needed
end
-- Do a count per interval
select it.st startinterval, it.et endinterval, T.taskname, count(*) TimesStarted
from @intervaltbl it, MXP_EXT_AUDIT A with(nolock), MXP_Tasks T with(nolock)
where A.Aud_datetime between it.st and it.et and A.Aud_task = T.taskid
group by it.st,it.et, T.taskname
order by it.st, T.taskname

 

An example of a query extracting details of the tasks executed for a user

SELECT dbo.MXP_Ext_Audit.Aud_ref AS Auditref, dbo.MXP_Ext_Audit.Aud_OnEntry AS EntryId, dbo.MXP_Ext_Audit.Aud_datetime AS Time, MXP_Tasks_1.TaskName AS Task,
       dbo.MXP_ProvStatus.Name AS Status, dbo.MXP_Tasks.TaskName AS [End task], dbo.MXP_Ext_Audit.Aud_Approver AS Approver,
       dbo.MXP_Ext_Audit.Aud_Info AS Info, dbo.MXP_Ext_Audit.Aud_StartedBy AS [Started by], dbo.MXP_AUDIT.MSG AS Message,
       dbo.MC_REPOSITORY.rep_name AS Repository
FROM dbo.MXP_Ext_Audit WITH (nolock) INNER JOIN
 dbo.MXP_AUDIT ON dbo.MXP_Ext_Audit.Aud_ref = dbo.MXP_AUDIT.AuditID INNER JOIN
 dbo.MXP_ProvStatus ON dbo.MXP_AUDIT.ProvStatus = dbo.MXP_ProvStatus.ProvStatus INNER JOIN
 dbo.MXP_Tasks ON dbo.MXP_AUDIT.LastAction = dbo.MXP_Tasks.TaskID INNER JOIN
 dbo.MXP_Tasks AS MXP_Tasks_1 ON dbo.MXP_Ext_Audit.Aud_task = MXP_Tasks_1.TaskID LEFT OUTER JOIN
 dbo.MC_REPOSITORY ON dbo.MXP_AUDIT.repository = dbo.MC_REPOSITORY.rep_id
WHERE (dbo.MXP_Ext_Audit.Aud_ref IN
 (SELECT AuditID
 FROM dbo.MXP_AUDIT AS MXP_AUDIT_1 with (nolock)
 WHERE        (MSKey =
 (SELECT mcMSKEY
 FROM dbo.idmv_entry_simple with (nolock)
 WHERE        (mcMSKEYVALUE = 'TEDA')))))
ORDER BY Auditref, Time

Initial Load data stuff

 

Using the passes from Initial Load (or running Initial Load with the To IdStore passes disabled), read the repository data into temporary tables (users, roles, assignments) and stop the job from doing anything else.

 

Look at the mapping of users from the repository to the IdStore ACCOUNT%REP.NAME% attribute. You can use this to:

Find users from the repository temporary table that are not in the idstore

select REP.* from 
sap%$rep.$NAME%User REP 
left outer join idmv_value_basic IDSVAL on REP.accountname = IDSVAL.SearchValue and IDSVAL.ATTRNAME='ACCOUNT%$rep.$NAME%'
where IDSVAL.SearchValue IS NULL 

Find users in the IdStore that are not in the repository

select E.mcmskeyvalue
from idmv_value_basic IDS
inner join idmv_entry_simple E on E.mcmskey = ids.mskey and E.mcEntryType='MX_PERSON'
left outer join mxmc_rt_u.sap%$REP.$NAME%user REP on IDS.SearchValue=REP.accountname
where   ids.attrname = 'sap%$REP.$NAME%user' 
and REP.accountname IS NULL

 

Identity Store Flattening


select mskey,
  max (case when AttrName = 'MSKEYVALUE' then aValue end ) mskeyvalue,
  max (case when AttrName = 'DISPLAYNAME' then aValue end ) DisplayName,
  max (case when AttrName = 'MX_ENTRYTYPE' then aValue end ) EntryType
from idmv_value_basic
where attrname in ('MSKEYVALUE','DISPLAYNAME','MX_ENTRYTYPE')
group by mskey

Using pivot:


select * from
(
  select mskey,avalue,attrname from idmv_value_basic where attrname in ('MSKEYVALUE','DISPLAYNAME','MX_ENTRYTYPE')
) s
pivot
(  max(avalue) for attrname in (MSKEYVALUE,DISPLAYNAME,MX_ENTRYTYPE)
) p


Also, using STUFF + FOR XML PATH to flatten things is cool: TSQL – Concatenate Rows using FOR XML PATH() | Sql And Me


IdM 7.1 example, easy to adapt:

select 'My roles:'+stuff((select ','+searchvalue
       from mxiv_sentries where attrname = 'MX_AUTOROLE' and mskey = 128108
       for xml path('')),1,1,'')
as MyRoles



SQL Server

 

Show running queries

SELECT database_id,st.text, r.session_id, r.status, r.command, r.cpu_time,r.total_elapsed_time
FROM sys.dm_exec_requests r CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS st order by database_id

 

Finding execution plan handle (prepared or adhoc, or filter for just one of them as needed):

SELECT objtype, p.size_in_bytes, t.[text], usecounts, p.plan_handle
FROM sys.dm_exec_cached_plans p
OUTER APPLY sys.dm_exec_sql_text (p.plan_handle) t
WHERE objtype IN ('Prepared', 'Adhoc')
and T.[text] like '%mxi_values%'

 

SELECT plan_handle,UseCounts, Cacheobjtype, Objtype, TEXT, query_plan
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle)
WHERE TEXT like '%mcuniqueid%mcothermskey%mcotherentrytype%mcdisplayname%mcreason%mcvalidfrom%mcvalidto%mcExecState%mcExecStateHierarchy%'

 

SELECT qplan.query_plan AS [Query Plan],qtext.text
FROM sys.dm_exec_query_stats AS qstats
CROSS APPLY sys.dm_exec_query_plan(qstats.plan_handle) AS qplan
cross apply sys.dm_exec_sql_text(qstats.plan_handle) as qtext
where text like '%mcuniqueid%mcothermskey%mcotherentrytype%mcdisplayname%mcreason%mcvalidfrom%mcvalidto%mcExecState%mcExecStateHierarchy%';

Retrieve an execution plan using plan handle (see above)

If forced to use the RT account, you might need someone to run this for you first:

GRANT SHOWPLAN TO mxmc_user_u
SELECT * FROM sys.dm_exec_query_plan (0x06000600ACC0D612F05E7C5F0200000001000000000000000000000000000000000000000000000000000000);

Clear plan cache and more

DBCC FREEPROCCACHE
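If flushing the whole cache is too drastic (it recompiles everything on a busy system), DBCC FREEPROCCACHE also accepts a single plan handle, so you can evict just one cached plan. A minimal sketch; the handle below is only an illustration (it is the same example handle as in the query-plan snippet above) and must be replaced with one returned by sys.dm_exec_cached_plans on your system:

```sql
-- Evict one cached plan instead of clearing the entire plan cache.
-- Replace the handle with a real one from sys.dm_exec_cached_plans.
DBCC FREEPROCCACHE (0x06000600ACC0D612F05E7C5F0200000001000000000000000000000000000000000000000000000000000000);
```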

 

Check memory states

DBCC MEMORYSTATUS

 

Show statistics for table/index

DBCC SHOW_STATISTICS (mxi_values,ix_mxi_values_perf)
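When you only want part of the output, the same command takes a WITH option; for example, just the histogram is handy when checking whether a skewed searchvalue is represented in the statistics:

```sql
-- Histogram only; STAT_HEADER and DENSITY_VECTOR are the other options
DBCC SHOW_STATISTICS (mxi_values, ix_mxi_values_perf) WITH HISTOGRAM
```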

 

 

List indexes and index usage stats

SELECT DB_NAME(DATABASE_ID) AS DATABASENAME, OBJECT_NAME(B.OBJECT_ID) AS TABLENAME,
       INDEX_NAME = (SELECT NAME FROM SYS.INDEXES A WHERE A.OBJECT_ID = B.OBJECT_ID AND A.INDEX_ID = B.INDEX_ID),
       USER_SEEKS, USER_SCANS, USER_LOOKUPS, USER_UPDATES
FROM   SYS.DM_DB_INDEX_USAGE_STATS B INNER JOIN SYS.OBJECTS C ON B.OBJECT_ID = C.OBJECT_ID
WHERE  DATABASE_ID = DB_ID(DB_NAME()) AND C.TYPE <> 'S'

 

 

SELECT PVT.TABLENAME, PVT.INDEXNAME, [1] AS COL1, [2] AS COL2, [3] AS COL3,
       [4] AS COL4, [5] AS COL5, [6] AS COL6, [7] AS COL7,
       B.USER_SEEKS, B.USER_SCANS, B.USER_LOOKUPS
FROM   (SELECT A.NAME AS TABLENAME, A.OBJECT_ID, B.NAME AS INDEXNAME, B.INDEX_ID, D.NAME AS COLUMNNAME, C.KEY_ORDINAL
        FROM   SYS.OBJECTS A INNER JOIN SYS.INDEXES B ON A.OBJECT_ID = B.OBJECT_ID
               INNER JOIN SYS.INDEX_COLUMNS C ON B.OBJECT_ID = C.OBJECT_ID AND B.INDEX_ID = C.INDEX_ID
               INNER JOIN SYS.COLUMNS D ON C.OBJECT_ID = D.OBJECT_ID AND C.COLUMN_ID = D.COLUMN_ID
        WHERE  A.TYPE <> 'S') P
       PIVOT
       (MIN(COLUMNNAME)
        FOR KEY_ORDINAL IN ( [1],[2],[3],[4],[5],[6],[7] ) ) AS PVT
       INNER JOIN SYS.DM_DB_INDEX_USAGE_STATS B
         ON PVT.OBJECT_ID = B.OBJECT_ID AND PVT.INDEX_ID = B.INDEX_ID AND B.DATABASE_ID = DB_ID()
ORDER BY TABLENAME, INDEXNAME;

 

Update statistics for a table

update statistics mxi_values
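By default this samples the table; for a slower but more accurate refresh (the same thing the statistics-refresh script further down does per statistic) you can force a full scan:

```sql
-- Full scan instead of sampling; noticeably slower on large tables
UPDATE STATISTICS mxi_values WITH FULLSCAN
```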

 

List system processes/sessions with state/login/db and more

SELECT
spid,status,sid,hostname,program_name,cmd,cpu,physical_io,blocked,dbid,
convert(sysname, rtrim(loginame)) as loginname,spid as 'spid_sort',
substring( convert(varchar,last_batch,111) ,6 ,5 ) + ' ' + substring( convert(varchar,last_batch,113) ,13 ,8 ) as 'last_batch_char',last_batch, DB_NAME(dbid)
from master.dbo.sysprocesses (nolock)
order by loginname

 

Missing Indexes hints

SELECT * FROM sys.dm_db_missing_index_details
SELECT * FROM sys.dm_db_missing_index_group_stats
SELECT * FROM sys.dm_db_missing_index_groups
SELECT * FROM sys.dm_db_missing_index_columns(1)
SELECT mig.*, statement AS table_name,
column_id, column_name, column_usage
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id;

 

Statistics, Timing, IO, CPU, XML, PLAN…

 

set statistics io on

set statistics time on

set statistics profile on

SET SHOWPLAN_TEXT ON

SET SHOWPLAN_ALL ON

SET STATISTICS XML ON
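These are session-level settings, so a typical pattern is to switch them on, run the query you are testing, and switch them off again so they don't pollute everything else you run in that session. A minimal sketch; the SELECT in the middle is just an example, any of the queries in this post work there:

```sql
set statistics io on
set statistics time on
-- the query under test goes here
select count(*) from mxi_values with(nolock)
set statistics time off
set statistics io off
```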

 

 

Backup, transaction log shrinking and other stuff

BACKUP LOG [mxmc_db]
TO  DISK = N'C:\temp\mxmcdbfullcopy'
WITH NOFORMAT, INIT,  NAME = N'mxmc_db-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

 

select name,recovery_model_desc from sys.databases
Alter database mxmc_db SET Recovery simple
select name,size from sys.database_files
DBCC SHRINKFILE (N'mastlog' , 1)
Alter database mxmc_db SET Recovery full

Clearing all transactionlogs on dev and test-systems

 

SET NOCOUNT ON
CREATE TABLE #TransactionLogFiles (DBName VARCHAR(150), LogFileName VARCHAR(150) )
DECLARE DBList CURSOR FOR
SELECT name FROM master..sysdatabases WHERE NAME NOT IN ('master','tempdb','model','msdb','distribution')
DECLARE @DB VARCHAR(100)
DECLARE @SQL VARCHAR(8000)
OPEN DBList
FETCH NEXT FROM DBList INTO @DB
WHILE @@FETCH_STATUS <> -1
BEGIN
     SET @SQL = 'USE ' + @DB + ' INSERT INTO #TransactionLogFiles(DBName, LogFileName) SELECT ''' + @DB + ''', Name FROM sysfiles WHERE FileID=2'
     EXEC(@SQL)
     FETCH NEXT FROM DBList INTO @DB
END
DEALLOCATE DBList
DECLARE TranLogList CURSOR FOR
SELECT DBName, LogFileName FROM #TransactionLogFiles
DECLARE @LogFile VARCHAR(100)
OPEN TranLogList
FETCH NEXT FROM TranLogList INTO @DB, @LogFile
WHILE @@FETCH_STATUS <> -1
BEGIN
     PRINT @DB
     --SELECT @SQL = 'EXEC sp_dboption ' + @DB + ', ''trunc. log on chkpt.'', ''True'''
     SELECT @SQL = ' ALTER DATABASE ' + @DB + ' SET AUTO_SHRINK ON '
     -- ALTER DATABASE [dbname] SET RECOVERY SIMPLE
     EXEC (@SQL)
     SELECT @SQL = 'USE ' + @DB + ' DBCC SHRINKFILE(''' + @LogFile + ''',''truncateonly'') WITH NO_INFOMSGS'
     EXEC (@SQL)
     --SELECT @SQL = 'EXEC sp_dboption ' + @DB + ', ''trunc. log on chkpt.'', ''False'''
     SELECT @SQL = ' ALTER DATABASE ' + @DB + ' SET AUTO_SHRINK OFF '
     EXEC(@SQL)
     FETCH NEXT FROM TranLogList INTO @DB, @LogFile
END
DEALLOCATE TranLogList
DROP TABLE #TransactionLogFiles

 

 

 

List a table's constraints, optionally only the default constraints

SELECT SCHEMA_NAME(schema_id) AS SchemaName,
       OBJECT_NAME(parent_object_id) AS TableName,
       OBJECT_NAME(OBJECT_ID) AS NameofConstraint
FROM sys.objects WHERE OBJECT_NAME(parent_object_id) = 'MXP_TASKS'
-- and type_desc = 'DEFAULT_CONSTRAINT'

 

Drop all constraints for a table

BEGIN TRANSACTION
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'
ALTER TABLE ' + SCHEMA_NAME(schema_id) + '.' + OBJECT_NAME(parent_object_id) + ' DROP CONSTRAINT ' + OBJECT_NAME(OBJECT_ID) + ';'
FROM sys.objects where OBJECT_NAME(OBJECT_ID) = 'MXP_TASKS'
PRINT 'Dropping constraints: ' + @sql;
EXEC sp_executesql @sql;                -- COMMIT

 

Find tables containing specific column type

select TABLE_NAME,COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
from INFORMATION_SCHEMA.COLUMNS IC
where COLUMN_NAME = 'aLong'

 

Refresh statistics for ALL tables in system, and basic exception handling

DECLARE @statName varchar(255)
DECLARE @lastUpd datetime
DECLARE @tableName varchar(255)
DECLARE @updName varchar(512)
DECLARE @OPTION int
DECLARE cStatsList CURSOR LOCAL FAST_FORWARD FOR
      SELECT t.name tablename, s.name statname, STATS_DATE(s.[object_id], s.stats_id) AS lastupd
      FROM sys.stats s JOIN sys.tables t ON s.[object_id] = t.[object_id]
      WHERE t.is_ms_shipped = 0 ORDER BY t.name, s.name
OPEN cStatsList
FETCH NEXT FROM cStatsList INTO @tableName, @statName, @lastUpd
WHILE (@@FETCH_STATUS = 0)
BEGIN
      PRINT 'Refreshing statistics '+@statName+' for table '+@tableName+' which was last updated '+ISNULL(convert(varchar,@lastUpd),'never (probably an empty table)')
      SET @updName = @tableName+'('+@statName+')'
      BEGIN TRY
            exec ('UPDATE STATISTICS '+@updName+' WITH FULLSCAN')
      END TRY
      BEGIN CATCH
            PRINT 'Unable to refresh statistics '+@statName+' for table '+@tableName
      END CATCH
      FETCH NEXT FROM cStatsList INTO @tableName, @statName, @lastUpd
END
CLOSE cStatsList
DEALLOCATE cStatsList

List tables and their indexes

SELECT o.name AS tablename ,i.name AS indexname ,i.type_desc AS indextype ,STATS_DATE(i.[object_id], i.index_id) AS statisticsdate
FROM sys.indexes i INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
WHERE o.TYPE = 'U' -- Only get indexes for User Created Tables
AND i.name IS NOT NULL
ORDER BY o.name ,i.TYPE

 

List tables' statistics with last update time

SELECT t.name tablename,s.name statname,STATS_DATE(s.[object_id], s.stats_id) AS lastupd
FROM sys.stats s JOIN sys.tables t ON s.[object_id] = t.[object_id]
WHERE t.is_ms_shipped = 0 ORDER BY t.name,s.name

 

List tables' statistics with more detail

SELECT
t.name AS TableName,
c.name AS ColumnName,
s.name AS StatName,
STATS_DATE(s.[object_id], s.stats_id) AS LastUpdated,
s.has_filter,
s.filter_definition,
s.auto_created,
s.user_created,
s.no_recompute
FROM sys.stats s
JOIN sys.stats_columns sc ON sc.[object_id] = s.[object_id] AND sc.stats_id = s.stats_id
JOIN sys.columns c ON c.[object_id] = sc.[object_id] AND c.column_id = sc.column_id
JOIN sys.tables t ON s.[object_id] = t.[object_id]
WHERE t.is_ms_shipped = 0
ORDER BY t.name,s.name,c.name;

 

Testing a prepared statement issued by webui

declare @p1 int
set @p1=0
exec sp_prepexec @p1 output,N'@P0 int,@P1 int,@P2 int,@P3 int,@P4 int',N'
select distinct  top 200 mskey from idmv_jmx_entries VALS WITH (NOLOCK) where Attr_id IN (@P0,@P1,@P2,@P3) AND disabled=0
AND EXISTS (SELECT mcMskey FROM mxiv_entry E WHERE mcEntryTypeId=@P4 and mcMskey=VALS.mskey  and ((mcACEntry=0) or (mcACentry=1 AND
((23 IN (SELECT MemberMskey FROM idmv_members WHERE EntryMskey=E.mcMSKEY)) OR (23 IN (SELECT OwnerExpandedMskey FROM idmv_owners
WHERE EntryMSKEY=E.mcMSKEY)))) or ( mcACEntry=2 AND 23 IN (SELECT OwnerExpandedMskey FROM idmv_owners WHERE EntryMSKEY=E.mcMSKEY))))
',2,4,59,60,4
select @p1

 

Used and available space per database file

SELECT name ,size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;
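For a per-table view rather than per-file, sp_spaceused reports reserved, data, and index sizes; the table name is just an example:

```sql
EXEC sp_spaceused 'mxi_values'   -- one table
EXEC sp_spaceused                -- summary for the whole database
```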

 

Display role/priv hierarchies on versions of IdM 7.2 that have the full structure table

select U.mcDisplayName, A.mcDisplayName assignment, L.mcAssignedDirect isDirect, SR.mcParentMskey assignmentParentMskey,
       AP.mcDisplayName assignmentParentName, SR.mcRootMskey assignmentRootMskey, AR.mcDisplayName assignmentRootName
from idmv_entry_simple U
inner join mxi_link L on L.mcThisMSkey = U.mcMSkey and L.mcOtherEntryType in (select ocId from mxi_objectclasses where ocName='MX_PRIVILEGE')
left outer join idmv_entry_simple A ON A.mcMskey = L.mcOtherMSKEY
left outer join mxi_structure_root SR ON SR.mcChildMskey = A.mcMSKEY
left outer join idmv_entry_simple AP ON AP.mcMSKEY = SR.mcParentMskey
left outer join idmv_entry_simple AR ON AR.mcMSKEY = SR.mcRootMskey
where U.mcEntryType='MX_PERSON' -- and U.mcMskeyValue = 'some.user'
order by U.mcMskey,A.mcMskey,ap.mcMSKEY,ar.mcMSKEY desc

 

Display role/priv hierarchies on versions of IdM 7.2 that don't have the full structure table, by building a temporary table

DECLARE @MSKEY int, @MSKEYV VARCHAR(255),@ROOTMSKEY int,@level int,@MAXLEVEL int,@Mylevel int, @Lnum int
SET @MAXLEVEL = 20
-- Not setting below values returns all users.
SET @MSKEY = NULL -- or add user mskey, such as 58
SET @MSKEYV = NULL -- or add user mskeyvalue, such as 'User.Test.A'
-- Create temporary table
CREATE TABLE #STRUCT (mcChildMskey INT, mcRootMskey INT, mcParentMskey INT, mcEntryType INT, mcLevel INT)
-- Insert first level links
insert into #STRUCT (mcChildMskey, mcRootMskey, mcParentMskey, mcEntryType, mcLevel)
select A.mcThisMskey, A.mcOtherMskey, A.mcOtherMskey, A.mcthisentrytype, 1 from mxi_link A with (nolock)
where not exists (select 1 from mxi_link B with (nolock) where A.mcOtherMskey = B.mcThisMskey and B.mcLinkType = 1 AND B.mcLinkState = 0)
and   A.mcLinkType = 1 AND A.mcLinkState = 0
and   A.mcAttrId in (select attr_id from mxi_attributes with (nolock) where AttrName = 'MXREF_MX_ROLE')
and   A.mcotherentrytype in (select ocid from mxi_objectclasses with (nolock) where ocName = 'MX_ROLE')
and   A.mcthisentrytype in (select ocid from mxi_objectclasses with (nolock) where ocName in ('MX_ROLE', 'MX_PRIVILEGE'))
-- Other level links
set @Mylevel = 2
while @Mylevel < @MAXLEVEL
begin
  insert into #STRUCT (mcChildMskey, mcRootMskey, mcParentMskey, mcEntryType, mcLevel)
  select A.mcThisMskey, A.mcOtherMskey, A.mcOtherMskey, A.mcthisentrytype, @Mylevel from mxi_link A with (nolock)
  where A.mcLinkType = 1 and A.mcLinkState = 0
  and   A.mcattrid in (select attr_id from mxi_attributes with (nolock) where AttrName = 'MXREF_MX_ROLE')
  and   A.mcotherentrytype in (select ocid from mxi_objectclasses with (nolock) where ocName = 'MX_ROLE')
  and   A.mcthisentrytype in (select ocid from mxi_objectclasses with (nolock) where ocName in ('MX_ROLE', 'MX_PRIVILEGE'))
  and   A.mcOtherMskey in (select B.mcChildMskey from #STRUCT B with (nolock) where B.mcLevel = @Mylevel - 1)
  and   not exists (select 1 from #STRUCT C with (nolock) where A.mcThisMskey = C.mcChildMskey and A.mcOtherMskey = C.mcRootMskey and A.mcOtherMskey = C.mcParentMskey)
  set @Lnum = @@Rowcount
  -- Build inherit link structure
  insert into #STRUCT (mcChildMskey, mcRootMskey, mcParentMskey, mcEntryType, mcLevel)
  select distinct A.mcChildMskey, B.mcRootMskey, A.mcParentMskey, A.mcEntryType, 0 from #STRUCT A with (nolock)
  inner join #STRUCT B with (nolock) on A.mcParentMskey = B.mcChildMskey
  where A.mcLevel > 1
  and   not exists (select 1 from #STRUCT C with (nolock) where A.mcChildMskey = C.mcChildMskey and B.mcRootMskey = C.mcRootMskey and A.mcParentMskey = C.mcParentMskey)
  if @Lnum = 0
  begin
    break
  end
  set @Mylevel = @Mylevel + 1
end
-- Update sub tree nodes to level 0.
update #STRUCT set mcLevel = 0 where mcLevel > 1
select U.mcDisplayName, A.mcDisplayName assignment, L.mcAssignedDirect isDirect, SR.mcParentMskey assignmentParentMskey, AP.mcDisplayName assignmentParentName, SR.mcRootMskey assignmentRootMskey, AR.mcDisplayName assignmentRootName
from idmv_entry_simple U
inner join mxi_link L on L.mcThisMSkey = U.mcMSkey and L.mcOtherEntryType in (select ocId from mxi_objectclasses where ocName='MX_PRIVILEGE')
left outer join idmv_entry_simple A ON A.mcMskey = L.mcOtherMSKEY
left outer join #STRUCT SR ON SR.mcChildMskey = A.mcMSKEY
left outer join idmv_entry_simple AP ON AP.mcMSKEY = SR.mcParentMskey
left outer join idmv_entry_simple AR ON AR.mcMSKEY = SR.mcRootMskey
where U.mcEntryType='MX_PERSON'
AND ((@MSKEYV IS NOT NULL AND U.mcMskeyValue = @MSKEYV) OR (@MSKEY IS NOT NULL AND U.mcMskey = @MSKEY) OR (@MSKEY IS NULL AND @MSKEYV IS NULL))
order by U.mcMskey,A.mcMskey,ap.mcMSKEY,ar.mcMSKEY desc
-- Clean up
DROP TABLE #STRUCT

 

 

ORACLE

 

Measuring execution time of query with milliseconds:

set time on timing on;
select 1 from DUAL;

Use F5 (Run script) to get timing; result example:

      1

Elapsed: 00:00:00.002

 

List number of sessions per username/host

select s.username as username,
(case when grouping(s.machine) = 1 then '**** All Machines ****' else s.machine end) AS machine,
count(*) as session_count
from v$session s, v$process p
where s.paddr   = p.addr and s.username is not null
group by rollup (s.username, s.machine) order by s.username, s.machine ;

List username, state, host etc for logged in sessions

 

SELECT username, seconds_in_wait, machine, port, terminal, program, module, service_name FROM v$session WHERE type = 'USER';

Add states and some more stuff:

 

SELECT sess.machine,sess.process, sess.status,sess.state, sess.username, sess.schemaname,sess.wait_time,to_char(sess.sql_exec_start,'YYYY-MM-DD HH24:MI:SS'), sess.blocking_session,sql.sql_text
FROM v$session sess, v$sql sql
WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER' and schemaname LIKE 'MXMC%' order by status

 

 


Listing sessions and blocking sessions

List running queries with basic information:
SELECT sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER'
SELECT sess.username,sess.sid,sess.machine,sess.process,sess.blocking_session,sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER' and schemaname LIKE 'MXMC%' order by blocking_session,sid

List statements that are blocking others

SELECT sess.username,sess.sid,sess.machine,sess.process,sess.blocking_session,sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id and sess.sid in (SELECT sess.blocking_session
FROM v$session sess
WHERE sess.blocking_session is not null and sess.type = 'USER' and schemaname LIKE 'MXMC%')
SELECT t.status, t.start_time,sess.username,sess.sid,sess.machine,sess.process,sess.blocking_session,sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sql.sql_text
FROM v$session sess, v$sql sql , V$TRANSACTION t WHERE sql.sql_id(+) = sess.sql_id and t.ses_addr = sess.saddr and sess.sid in
(SELECT sess.blocking_session FROM v$session sess WHERE sess.blocking_session is not null and sess.type = 'USER' and schemaname LIKE 'MXMC%')
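Once a blocker's sid and serial# are known (both are columns of v$session), the session can be terminated. A sketch with placeholder values, not values from the queries above:

```sql
-- Terminate a blocking session; replace 123 and 45678 with sid and serial# from v$session
ALTER SYSTEM KILL SESSION '123,45678';
```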

 

 

List running queries with additional information

select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address and t.hash_value = s.sql_hash_value and s.status = 'ACTIVE' and s.username <> 'SYSTEM' order by s.sid,t.piece

 

SELECT sess.process, sess.status, sess.username, sess.schemaname,sess.wait_time,sess.sql_exec_start,sess.blocking_session,sql.sql_text
FROM v$session sess, v$sql sql
WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER' and schemaname LIKE 'MXMC%' order by status

 

select sess.USERNAME, sess.sid, sqlt.sql_id, sqlt.sql_text from v$sqltext_with_newlines sqlt, V$SESSION sess
where sqlt.address = sess.sql_address and sqlt.hash_value = sess.sql_hash_value and sess.status = 'ACTIVE'
and sess.username like 'MVIEWS%' order by sess.sid,sqlt.piece

 

set lines 400
column "Time left" format A15
column "Time spent" format A15
column message format a100
column username format a20
select inst_id, sid, serial#, username, context,
trunc(time_remaining / 60,2)||' Minutes' "Time left",
trunc(elapsed_seconds / 60,2)||' Minutes' "Time spent",
round(sofar/totalwork*100,2) "% Completed",
message
from gv$session_longops
where sofar != totalwork
and totalwork != 0
order by 1

 

Percent complete for long operations still running:

SELECT * FROM (select username,opname,sid,serial#,context,sofar,totalwork, round(sofar/(totalwork+1)*100,2) "% Complete" from v$session_longops) WHERE "% Complete" != 100;

 

List queries averaging more than X seconds of elapsed time

set lines 400
SELECT cast(ROUND(A.ELAPSED_TIME / A.EXECUTIONS / 1000000) as number(10)) SQL_AVG,
       cast(A.SQL_ID as varchar(20)) SQL_ID,
       cast(DBMS_LOB.SUBSTR(A.SQL_FULLTEXT,300) as varchar(300)) SQL_TEXT,
       cast(A.EXECUTIONS as number(10)) EXECCOUNT,
       cast(NVL(S.PROGRAM,A.MODULE) as varchar(20)) PNAME,
       cast(NVL(S.USERNAME,A.PARSING_SCHEMA_NAME) as varchar(20)) USERID
FROM  V$SQLAREA A, V$SESSION S
WHERE A.SQL_ID = S.SQL_ID(+)
  AND A.EXECUTIONS > 0
  AND ROUND(A.ELAPSED_TIME / A.EXECUTIONS / 1000000) > 10

 

 

Profiling procedures

 

SQL Plus part:

sqlplus system/abcd1234 AS SYSDBA
@D:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\tracetab.sql
@D:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\dbmshptab.sql
grant execute on DBMS_HPROF to mxmc_oper

 

Rest can be done in SQL Dev

CREATE OR REPLACE directory PLSHPROF_DIR as 'C:\temp';
grant READ, WRITE on directory PLSHPROF_DIR to MXMC_OPER;
grant create table to MXMC_OPER;
grant create sequence to MXMC_OPER;
grant unlimited tablespace to MXMC_OPER;

Then profile procedures from SQL Developer or run file:

DECLARE
v_runid NUMBER;
BEGIN
v_runid := DBMS_HPROF.ANALYZE (LOCATION => 'PROFILE_DATA',
FILENAME => 'myteststuff.txt');
DBMS_OUTPUT.PUT_LINE('Run ID: ' || v_runid);
END;
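The analysis above presumes a trace file already exists. Collecting one can be sketched as follows (the directory and the procedure to profile are examples, not from the original post):

```sql
-- Record hierarchical profiler data while running the code under test
BEGIN
  DBMS_HPROF.START_PROFILING(location => 'PLSHPROF_DIR', filename => 'myteststuff.txt');
  -- call the PL/SQL code to be profiled here, e.g. some procedure of the schema
  DBMS_HPROF.STOP_PROFILING;
END;
/
```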

More info: http://dbaora.com/plsql-hierarchical-profiler-oracle-database-11g-release-2-11-2/

 

 

Undo tablespace and transactions  stuff

SELECT SUM(a.bytes) "UNDO_SIZE"  FROM v$datafile a, v$tablespace b, dba_tablespaces c
WHERE c.contents = 'UNDO' AND c.status = 'ONLINE' AND b.name = c.tablespace_name AND a.ts# = b.ts#;

 

select sum(used_ublk) from v$transaction;

 

select * from dba_data_files where tablespace_name='UNDOTBS1';

 

select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention
from v$undostat order by end_time;

 

SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME,
       TO_CHAR(END_TIME, 'MM/DD/YYYY HH24:MI:SS') END_TIME,
       UNDOTSN, UNDOBLKS, TXNCOUNT, MAXCONCURRENCY AS "MAXCON"
FROM v$UNDOSTAT WHERE rownum <= 144;
select cast(file_name as varchar2(60)) filename,cast(tablespace_name as varchar2(10)) tblspc,cast(bytes as varchar2(10)) tblsize,
cast(autoextensible as varchar2(5)) isAutoExt, cast(maxbytes as varchar2(12)) maxSize,cast(increment_by as varchar2(6)) autoIncr
FROM DBA_DATA_FILES;

List index states

select index_name,status,num_rows,sample_size,last_analyzed,visibility from all_indexes where owner = 'MXMC_OPER' ORDER BY last_analyzed

 

Gather statistics for a table

exec dbms_stats.gather_table_stats('MXMC_OPER','mxi_values',no_invalidate=>false);
begin  dbms_stats.gather_schema_stats (ownname    => 'mViews_oper', estimate_percent=>dbms_stats.auto_sample_size, cascade=>TRUE);
end;

 

Index state, size, samplesize …

select index_name,status,num_rows,sample_size,last_analyzed,visibility
from all_indexes where owner = 'MXMC_OPER' --and status='UNUSABLE';

 

Execution Plan Stuff

 

List plans for given schema

SELECT * FROM
(SELECT  sql_fulltext,sql_id,child_number,disk_reads,  executions,first_load_time,last_load_time
FROM    v$sql where parsing_schema_name = 'MXMC_OPER'
ORDER BY elapsed_time DESC)
WHERE ROWNUM < 40

 

Plan for adhoc statement:

EXPLAIN PLAN for
select distinct mskey from (
SELECT mskey from mxi_values VALS where  SEARCHVALUE='MX_PRIVILEGE' AND
ATTR_ID=1  AND disabled=0 AND mskey in
(select mskey from mxi_values VALS where searchvalue  like
'%uncritical%EEP%' AND Attr_id IN (2, 4, 18))
) where rownum < 200;
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

 

Plan for adhoc statement with output formatting

EXPLAIN PLAN FOR
select distinct(mskey) from mxmc_oper.mxiv_values where searchvalue = 'POSITION ID' and attr_id in (select attr_id from mxi_attributes where attrname = 'Z_MX_ROLE_TYPE');
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'BASIC'));

Explain plan setting statement id and getting plan:

EXPLAIN PLAN
SET STATEMENT_ID = 'SAPTEST_1' FOR
select * from (
  SELECT mskey from idmv_jmx_entries VALS where mskey in
    (select distinct mskey from mxi_values where SearchValue ='LAMBERT' AND Attr_ID=60)
  AND mskey in
    (select distinct mskey from mxi_values where SearchValue ='BOSKAMP' AND Attr_ID=59)
  AND disabled=0
);
SET PAGESIZE= 1000;
SET LINESIZE= 500;
SET LONG=100000;
COL xplan format a100;
SELECT dbms_xplan.build_plan_xml(statement_id => 'SAPTEST_1') AS XPLAN FROM dual;

 

Find the plan for a statement:

SELECT  --sql_fulltext,sql_id,cpu_time,elapsed_time,user_io_wait_time,disk_reads,executions  *
FROM    v$sql where sql_fulltext like '%select distinct mskey from (SELECT mskey from mxiv_values VALS where%'

Get plan by planid:

select DBMS_SQLTUNE.REPORT_SQL_MONITOR(type=>'HTML',report_level=>'ALL',sql_id=>'1zf0h9sw688bx') as report FROM dual;

History of executions of plan

select * from DBA_HIST_SQLSTAT where sql_id = 'a2zqghuq57f2y'

Executions for a timeframe

select *
from   dba_hist_sqlstat t, dba_hist_snapshot s
where  t.snap_id = s.snap_id
and    t.dbid = s.dbid
and    t.instance_number = s.instance_number
and    s.begin_interval_time between trunc(sysdate)-1 and trunc(sysdate)

 

select cast(st.sql_text as varchar(400)), sum(t.elapsed_time_delta/1000000)/sum(t.executions_delta)
from   dba_hist_sqlstat t, dba_hist_snapshot s, dba_hist_sqltext st
where  t.snap_id = s.snap_id and st.sql_id = t.sql_id
and    t.dbid = s.dbid
and    t.instance_number = s.instance_number
and    s.begin_interval_time between trunc(sysdate)-1 and trunc(sysdate) -- yesterday's stats
group  by cast(st.sql_text as varchar(400))

Empty/clear buffer, pools, execution plans

ALTER SYSTEM FLUSH BUFFER_CACHE;
alter system flush shared_pool;
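A single statement can also be evicted instead of flushing the whole shared pool. A sketch, assuming the DBMS_SHARED_POOL package is installed and using a sql_id picked as an example:

```sql
-- Purge one cursor from the shared pool ('C' = cursor);
-- address and hash_value come from v$sqlarea for the statement in question
BEGIN
  FOR r IN (SELECT address, hash_value FROM v$sqlarea WHERE sql_id = '1zf0h9sw688bx') LOOP
    DBMS_SHARED_POOL.PURGE(r.address || ',' || r.hash_value, 'C');
  END LOOP;
END;
/
```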

 

Search using locally defined var

DEFINE vSearchString = "'%QUALITY%'";
select distinct mskey from (SELECT mskey from mxi_values VALS where SEARCHVALUE='MX_PERSON' AND ATTR_ID=1  AND disabled=0
AND mskey in (select mskey from mxi_values VALS where searchvalue like &&vSearchString AND Attr_id IN (2,4,5))) where rownum< 200;

 

var vSearchString varchar2(30);
exec :vSearchString := '%QUALITY%STA%';
--print vSearchString;
select distinct mskey from (SELECT mskey from mxi_values VALS where SEARCHVALUE='MX_PERSON' AND ATTR_ID=1  AND disabled=0
AND mskey in (select mskey from mxi_values VALS where searchvalue like :vSearchString AND Attr_id IN (2,4,5))) where rownum< 200;

Ramp-Up Start of SAP Identity Management 8.0


With the new release of SAP Identity Management 8.0 , we are including the following enhancements:

  • Innovative Design-time Developer Studio as Eclipse plug-in
  • Extended integration capabilities with SAP products
  • Improved security concept

For more information about becoming a ramp-up customer for SAP Identity Management 8.0, visit the SAP Service Marketplace.

SAP TechEd Wrapup


I was lucky to attend TechEd && d-code last week and it was an amazing experience. From a SAP IDM point of view things are looking pretty good.

This was a different type of event for me since I was there this year as a speaker. I had the chance to present ITM 118: SAP Identity Management – Experience from a Customer Implementation Project. I spoke about some best practices for planning and organizing the start of your SAP IDM project, everything that should happen before anything gets involved.  I hope those that were able to attend got something from it.

 

I was also interviewed by ASUG regarding SAP IDM (shameless plug: you can view it here). This was a new and interesting experience for me as well and, I might add, a lot of fun! I really enjoy being able to share my passion for the product and the nature of Identity Management.

 

I also had the chance to meet (and re-meet) some great people in the SAP and ASUG community! Tammy Powlas, Marilyn Pratt, and Jason Cao among others. I also got to see some old friends from past companies and past projects; always good to catch up with folks!

 

Ok enough about me, now let me tell you about what was happening in IDM Land during the event.

 

I was able to attend a hands-on session on using REST with SAP IDM 8.0, quite ably led by Ralitsa Chipeva. While we were still using a beta product (the images for the event were made months ago, which is quite normal), I still got a good feel for the product. It was not terribly slow and showed some new features while keeping some old ones as well. It had its little quirks, but I think those should be resolved by the ramp-up release in November 2014, and certainly cleaned up by the GA release in late Q1 2015.

 

Most of my learning this year came from the show floor. Alexander Zubev gave a great presentation on how to optimize SQL statements in IDM. (Tip to the event organizers: rather than VPN to Germany, bring the demo systems local to the event, or at least use a local image.) We also received the latest information regarding the SAP IDM (and SSO) roadmap from Gerlinde Zibulski. Looks like there's some good stuff coming along for IDM, including HANA as a backend database and integration with cloud services coming in 2015.

 

I also had a fantastic one-on-one session with Gerlinde, which was more of a chance for me to tell her about what I’ve been seeing out in the field. As always, Gerlinde was receptive and interested, asking a lot of questions and pulling out details and concepts that I had not considered. She also reminded me that anyone who has comments and suggestions regarding SAP IDM should log into the Idea Place page for IDM. This is the absolute best place to get your ideas noticed by SAP IDM Product Management. It is another fantastic resource from SAP where ideas can be discussed, evaluated and processed. Please take some time to review the suggestions already posted there and maybe post one of your own.

 

Finally, as a person who is further into his 40s than he would sometimes like to admit, this year’s entertainment, Huey Lewis and the News, was a lot of fun! Nothing like having a chance to rock out to some of my favorite tunes!

 

All of this makes me look forward more than ever to SAP TechEd && d-code 2015!

VDS throwing Access is Denied


Hi All,

 

For our new project, VDS was freshly installed. When I created a new configuration and saved anything, it threw the error shown in the screenshot below.

VDS.jpg

 

To eliminate the error, I checked the installation files and found a file named .vdssettings. I renamed it to vdssettings and saved.

 

After that, the error no longer appeared.


SAP Identity Management 8.0 Release Highlights


We have just released SAP Identity Management 8.0 with the following main improvements:

  • Innovative design-time IDM Developer Studio as Eclipse plug-in
  • Extended integration capabilities with SAP products
  • Improved security concept

SAP Identity Management Developer Studio is an Eclipse plug-in that provides the environment for developing the configuration for the identity management solution. It is a replacement for the Identity Center Management Console.

The Identity Management Developer Studio offers an improved security model and usability, as well as a multiuser environment, so that users can work on different configuration packages at the same time without overwriting each other's changes.

Configuration packages are delivered as part of the Identity Management core components. They contain frameworks and connectors that can be imported into the Identity Management database to provide basic functionality such as provisioning and integration with SAP Access Control.

A package is the smallest part of the configuration that is maintained as a unit. Changes to packages are version controlled, logged and reversible. Scripts and constants are included in a package and distributed with it during transport. Global scripts and global constants are replaced by packages.

Transport of configuration is done on package level. Each package is transported separately as a single XML file, so there is no need for a complex transport mechanism.

Developer Studio now supports visual workflow design, which allows you to conveniently visualize processes and edit them via drag and drop in a workflow diagram. Of course, the tree view is still available.

IDM_8_0_Eclipse_DevStudio_Screenshot.png

 

The Provisioning framework for SAP Identity Management 8.0 provides a set of templates for connecting SAP systems to SAP Identity Management and for setting up the jobs and processes that provision the corresponding users and assignments.

The Provisioning framework for SAP Identity Management 8.0 is also distributed in packages. Each package has a specific purpose, as its name implies. There are, for example, an engine package, a package for a specific connector, a package for notifications, a package for user interface forms, and a custom package.

The SuccessFactors connector is used for communication between SAP Identity Management and SuccessFactors systems. The connector allows SAP Identity Management to integrate SuccessFactors systems into centralized user management scenarios. It is delivered as a separate package in the Provisioning Framework for SAP Identity Management 8.0.

When determining your system landscape, you can use SuccessFactors as a leading Human Capital Management (HCM) system and as a target system for provisioning. You can read more about the SuccessFactors connector in Ralitsa's blog article.


Repository management: repository types are introduced so that common parameters can be defined and shared, and repositories can now be managed in the Identity Management Administration User Interface so that development and management can be separated. Repositories can also be enabled and disabled manually or automatically, e.g. when a repository is not available.

 

There is a new utility for dispatcher management and there are new states for the dispatchers so that you can suspend and resume dispatchers remotely.

A new filter mechanism is introduced: dynamic groups, which can be used for automatic assignment of privileges.


Minor changes:

Jobs are stored and run within an identity store. The identity store "-Self-" can be selected.

Roles, privileges and dynamic groups are managed by regular forms in the Identity Management User Interface (instead of the MMC).


The following have been removed:

  • The Windows Runtime Engine
  • The Identity Center Management Console is replaced by SAP Identity Management Developer Studio which is an Eclipse plug-in that provides the environment for developing the configuration for the identity management solution.
  • Configuration Copy Tool is obsolete as the configurations are managed and transported with new packaging concept.

 

There are some changes in the terminology:

There are forms which define the screen layout and access control, instead of the former "UI task".

Processes are introduced and replace the former "task". Only processes can be used as event tasks and can be called from other processes.

A process type is introduced for easier definition of the usage of a process, e.g. Validate-Add-Process, Entry-Modify-Process.

A repository job is defined as a job which requires a repository to run and has a defined repository type.

 

Check also SAP Identity Management Overview presentation by Regine Schimmer.

SAP Identity Management 8.0 Developer Studio Eclipse Plug-in


As outlined in my previous blog, one of the major improvements in SAP Identity Management 8.0 is the new design-time Developer Studio, an Eclipse plug-in that replaces the Identity Center Management Console. Although this change might seem big, the feedback we got from early users was that those with a background in earlier versions found it easy to get used to the new tool, and they were pleased by some of the features Eclipse brings with it, e.g. being able to work with more than one entity at the same time, JavaScript code coloring, and code completion.

 

In the SAP Identity Management 8.0 Release Highlights blog I mentioned the improved security model, and I will now make a short detour to elaborate a bit on that.

 

Developer Studio connects to the IdM Developer Studio Service running on SAP NetWeaver AS Java. The service uses the AS Java UME for authentication, which is why the respective users must exist in the UME; in addition, each user must also exist in the Identity Management database with the same username as in the UME. The service then connects to the identity store(s) and verifies all incoming requests, so the developer no longer needs to know the database credentials. All authorized IdM developers are stored in the IdM database in the configuration tables (e.g. MC_USERS) but not in the identity store, meaning that a developer does not have an MX_PERSON entry. These tables are only writable by the mxmc_admin user, which is created during install or upgrade, and developers are thus prevented from modifying them. This is done for security reasons: developers have access to the identity store and might easily break the security and authorization model. Therefore, the mxmc_rt user, which is used by the runtime components, also does not have the permissions to modify them.

IDMDevStudioUsercreateInEclipse.png

Having created a Developer admin user you are ready to go.

Once you open Eclipse you would need to go to Window/Open Perspective/Other and switch to SAP Identity Management Developer Studio perspective to be able to take full advantage of the IdM Development Studio plug-in features.

Then you would create an identity store. The Developer Administrator user can also add developer users, given that they are created in the UME first. A Developer Administrator can add or remove users, create identity stores, create packages, modify the identity store schema and manage package access rights. As Developer Administrator you can also suspend, resume or stop a dispatcher remotely from the Developer Studio Eclipse plug-in; another feature appreciated by our early customers, as you do not need operating-system-level access to the runtime machine. Of course, you can also use the new dispatcher utility.

Developers have access to one or more packages. If a developer is the owner of a package, she can grant access to other users. A Developer can modify anything, but a Layout Developer can only modify form layouts. Import authorization allows overwriting by importing; View authorization allows read-only access. Access to each package can be modified by the administrator or the owner, but only while the package is checked in.

Then you would import the SAP Provisioning Framework packages, and it is worth mentioning a few here:

  • Engine - contains the core provisioning flow which is responsible for triggering the necessary processes (Provision, De-provision and Modify) as well as common scripts used by all packages so most other packages depend on it.
  • At least one connector package - The package for each connector contains the specific processes for provisioning to a specific system e.g. com.sap.idm.connector.abap.idmpck which includes ABAP specific repository type, processes, jobs and scripts.
  • Custom package (com.sap.idm.connector.custom.idmpck) – a package with default settings and configurations, default constants and scripts. Customers can make their customizations in this package, so that they are less likely to modify SAP-delivered packages, preserving easy upgradability in the future. Connectors have extension points which can be implemented in these custom packages. Currently the extension points are a given set, and we need feedback to see what else should be added as extension points.
  • One of the new connectors we deliver with this release is SuccessFactors connector and you will find a bit more in this blog article of Ralitsa Chipeva.
  • Forms package - The forms package contains the definition of all User Interface tasks for CRUD operations (create, read, update, delete) on different entry types
  • Notification package - The notification package contains the notification task and the notification templates that are used to send notifications from the SAP Provisioning framework, approval and attestation tasks

PackagesInEclipse.png

Finally, the workflow diagram editor allows you to view existing processes in a more convenient way and to model new ones with the mouse. "Drag-and-drop" implies dragging, but you actually do not need to drag; rather, you click on the palette element and then place it where you wish, usually over an arrow. I found this convenient once I got used to it. It also has a nice auto-layout function.

SAP Identity Management 8.0: SuccessFactors Connector


SAP Identity Management 8.0 - the new major version of the product was just released and now is available for download on the SAP Service Marketplace.

As can be expected of a new major release - the product contains many enhancements along with some key new features.

 

Starting with the IdM design time: IdM developers can now benefit from a completely new development environment based on Eclipse, the new IdM Developer Studio. Going further, in the new version IdM continues to extend its integration capabilities with SAP products, introducing a SuccessFactors connector (SFSF connector). I should also mention improvements like the new security concept and the new process editor, as well as some renovations in repository and dispatcher management.

 

Of all the new features above, in this blog I would like to focus on an overview of the SuccessFactors connector. We will take a look at the two business scenarios that are supported, as well as go into some more detail about the new packaging concept and how the SFSF connector is delivered as part of the SAP Provisioning Framework.

 

SuccessFactors, an SAP Company, is a global provider of cloud-based human capital management (HCM) software with its cloud offering Employee Central (EC).

On one side, like any HCM system, Employee Central is the leading system of record for employee data. On the other side, SAP Identity Management is the leading system of record for identities, their data and privileges. So far, in version 7.2, SAP IdM offered identity management processes around the on-premise SAP HCM solution. In IdM 8.0 we provide similar processes for Employee Central. With the SuccessFactors connector we cover two main scenarios:

  • Identity propagation initiated from SuccessFactors
  • Identity propagation initiated from SAP Identity Management

In the first scenario, SFSF EC is the central employee system and identity propagation is initiated from SuccessFactors (Figure 1). For this we have implemented a pull mechanism in delta mode via a scheduled Initial/Delta Load job.

 

 

Scenario1.jpg

Figure 1

 

The other scenario targeted is the one where IdM is the central system that initiates identity propagation. Here SAP IdM uses the SuccessFactors role-based permission concept and is used to assign users to roles and groups in SuccessFactors, in a similar way as for any other system in the landscape (Figure 2). This way, customers of SAP IdM can extend their SAP IdM deployment with SuccessFactors. This scenario can be used regardless of whether SFSF Employee Central is the leading HCM system or not.

Scenario2.jpg

Figure 2

 

 

The SuccessFactors connector is shipped as a separate package in SAP Identity Management 8.0's Provisioning Framework. With the new packaging concept in IdM 8.0 (for more details you can check what Fedya posted here), you can now consume it by importing the package called com.sap.idm.connector.sfsf (com.sap.idm.connector.sfsf.idmpck).

 

As you can see in Figure 3, the connector consists of the following components:

  • An SFSF repository type.
  • Jobs and processes that control provisioning from and to SuccessFactors.

To configure the SuccessFactors connector, you have to create a repository and run an Initial Load job.

For more information and details about the configuration you can take a look at the new extended Configuration Guide that we have prepared for IdM 8.0.

Package.JPG

Figure 3

 

Further interesting information about the IdM 8.0 release can be found in the SAP Identity Management 8.0 Release Highlights and in SAP Identity Management 8.0 Developer Studio Eclipse Plug-in.

SAP Identity Management 8.0 Now in Ramp-Up


The latest version of the SAP Identity Management solution is now available for ramp-up customers.
Release 8.0 offers a new Eclipse-based developer studio and a connector for SuccessFactors, among other new features and enhancements.

 

For more information, check the following resources:

 

 

Join our upcoming Live Expert Session on December 15 to hear more about the new release. Please note that an s-user ID is required for registration.

For a hands-on experience, we’re offering two-day delta workshops. Visit our training portal for current schedules. The next classes will be offered in Walldorf on December 11-12 (in German) and on March 30-31 in Berlin (in English).

 

If you would like to participate in the ramp-up phase, please register on the SAP Service Marketplace.

IDM-HCM Integration: "How to add a new attribute"


Ever wanted to add a new attribute in your HCM-IDM integration?

Well, here is how it can be done. Just follow the steps below; they apply to any attribute you need in your integration.

 

1) You need to log in to your HCM system and execute transaction SQ01.

 

sq01.PNG

From this view, you should pick the query that is used when you extract data to IdM. Then select Change and Basic List.

The basic list contains all attributes that can be used for a user. The ones in use are marked in green. You can also see each attribute's technical name.

2) Select the attribute you wish to be used by clicking the checkbox "List Fields" and save your changes.

SQ01_2.PNG

3) What you need to do next is to maintain the field assignments for data export, using transaction HRLDAP_MAP. Select the Global Work Area checkbox and enter the user group and query name you are using. If you followed the official IDM-HCM integration documentation in your setup, that would be /SAPQUERY/L1 for the group and LDAP_QUERY for the query. Here you maintain Query Fld / Description / Attribute Grp / Attrib.Name accordingly.
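As an illustration, a mapping row for a first-name field might look like the fragment below. The field and attribute names are examples only (P0002-VORNA is the first-name field of infotype 0002); check the technical names your own query actually exposes.

```
Query Fld      Description    Attribute Grp    Attrib.Name
P0002-VORNA    First name     user             givenname
```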


4) Then you need to maintain the LDAP server mappings. Execute transaction LDAP, select your LDAP server, and then choose Mapping.

mapping.PNG


Maintain the names of the field "Fld" and the Attribute names.

You are now done from the HCM part. Now follows the IDM part.


1) You need to navigate to the HCM Staging Area and expand the attributes in the Identity Store Schema.

Make sure your new attribute is correctly represented here.

HCM.PNG

What is important to know here: the attribute you create should have the entry type MX_HCM_Employee selected (Properties of the Attribute > Entry Type tab). Compare already created attributes with your setup if you can't find this option.


2) Last but not least, you should add a line to the To Identity Store pass, usually named Write HCM Employee To SAP Master, which writes your record into IdM itself (Destination tab).

IDM LAST.PNG

Make sure that the attribute you are using exists in IdM; otherwise you will get an error during the extract. If it is a completely new attribute (MX_something), you need to create it on the IdM side too.
