
OAAM OTP code generation configuration

OAAM refers to the following properties for the One Time Password code generation
    • bharosa.uio.default.otp.generate.code.length = 5
    • bharosa.uio.default.otp.generate.code.characters = 1234567890

    The property bharosa.uio.default.otp.generate.code.length defines the length of the OTP code to generate. The property bharosa.uio.default.otp.generate.code.characters contains a string of characters that the OTP code can contain. The OAAM API randomly chooses characters from the string defined in bharosa.uio.default.otp.generate.code.characters to generate the OTP code. For example, if you need an OTP code of 5 characters, the OAAM API will randomly pick one character at a time from the string of declared characters and append it to the OTP string.

    For example, bharosa.uio.default.otp.generate.code.characters can contain 1234567890 or abcdefgh or 1234567890abcdefghijklmnopqrstuvwxyz or 1234567890ABCDWXYZ!@#$%^&*.

    If bharosa.uio.default.otp.generate.code.characters contains 1234567890ABCDWXYZ!@#$%^&*, the generated OTP code may look like &1A@$ or 12345 or XAW12.

    You may require OAAM to generate the same OTP code every time for your testing. In such a case, you can set bharosa.uio.default.otp.generate.code.characters to a two-character string consisting of a single repeated character. For example, if you want OAAM to generate the OTP code 11111 every time, you can set bharosa.uio.default.otp.generate.code.characters to 11. Make sure that bharosa.uio.default.otp.generate.code.characters always contains at least 2 characters.
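
    To make the selection logic concrete, here is a minimal shell sketch of the same idea. This is only an illustration of the algorithm described above, not the actual OAAM implementation:

    #!/bin/bash
    # Illustrative sketch only: build an OTP by repeatedly picking a random
    # character from the configured character set, as described above.
    chars="1234567890"   # bharosa.uio.default.otp.generate.code.characters
    length=5             # bharosa.uio.default.otp.generate.code.length
    otp=""
    for ((i = 0; i < length; i++)); do
        idx=$(( RANDOM % ${#chars} ))   # random index into the character string
        otp+="${chars:idx:1}"           # append one randomly chosen character
    done
    echo "$otp"   # e.g. 83016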


    How to (correctly) make manual edits to oam-config.xml


    Introduction

    Occasionally, it is necessary to make changes to OAM 11g configuration by directly updating the oam-config.xml file, rather than using the OAM console. In this post, we describe the correct way to make changes to this file. This post is part of a larger series on Oracle Access Manager 11g called Oracle Access Manager Academy. An index to the entire series with links to each of the separate posts is available.

    Editing oam-config.xml

    Correctly making durable edits to oam-config.xml involves the following steps, which must always be followed exactly.
    1. Shut down the entire domain, including the WebLogic Admin Server and all OAM Managed Servers.
    2. Locate the correct "oam-config.xml" file. This will be located on the Admin Server node in the following location: <DOMAIN_HOME>/config/fmwconfig/oam-config.xml
    3. Make a backup of the existing file prior to editing.
    4. When editing the file, be sure to increment the version number by 1 to ensure that the changes are not overwritten by the console. See below for details on how to do this.
    5. Once the change has been saved, restart the WebLogic Admin Server, followed by the OAM Managed Servers.
    6. As a verification step, check <DOMAIN_HOME>/config/fmwconfig/oam-config.xml on each of the OAM Managed Server nodes to ensure that the updated version has propagated correctly.

    Incrementing the version number

    The location of the version number that needs to be incremented is highlighted in the following snippet from the oam-config.xml file. The version number will occur near the top of the file.
    <Setting xmlns="http://www.w3.org/2001/XMLSchema" Name="NGAMConfiguration" Type="htf:map">
    ...
    <Setting Name="Version" Type="xsd:integer">175</Setting>
    ...
    </Setting>
    In our case, we would need to increment the number from "175" to "176" prior to saving oam-config.xml.
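
    If you make these edits regularly, the backup and version-check steps lend themselves to a small script. The following is a rough sketch only; the domain path is a placeholder for your own environment:

    # Illustrative helper for steps 3 and 4; DOMAIN_HOME is a placeholder
    DOMAIN_HOME=/u01/Middleware/user_projects/domains/IAMDomain
    CFG=$DOMAIN_HOME/config/fmwconfig/oam-config.xml

    # Back up the file with a timestamp before editing it
    cp -p "$CFG" "$CFG.bak.$(date +%Y%m%d%H%M%S)"

    # Show the current version number, so you know what to increment it to
    grep -m1 'Name="Version"' "$CFG"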

    OAM WebGate connections through firewalls


    Introduction

    In this post, we investigate a complication that can occur if you require a firewall between your WebGate agents and your OAM 11g servers within your deployment topology. We provide some guidance related to how to configure your WebGates in this case. This post is part of a larger series on Oracle Access Manager 11g called Oracle Access Manager Academy. An index to the entire series with links to each of the separate posts is available.

    The problem we are trying to solve

    Imagine a fairly typical scenario where an organisation has a number of web servers within a demilitarized zone (DMZ) that they want to protect with OAM. WebGate plugins will, of course, need to be deployed to these web servers and those plugins will need to establish and maintain Oracle Access Protocol (OAP) connections back to one or more OAM 11g Servers. It is quite likely that these OAP connections will need to pass through a firewall (or two) on their journey from the WebGates to the OAM servers, which is not, in itself, too much of a problem.

    An issue frequently does occur, though, when the firewall imposes either a maximum connection TTL (time to live) or an idle timeout - and to be honest, most firewalls will do this as a matter of course. OAP, as has already been widely discussed, is a long-lived protocol and as such, the standard behaviour of an OAP client (such as WebGate) is to initially establish a number of connections to a server and then use those connections repeatedly over a long period of time. OAP clients in general (and WebGates in particular) typically do not react well to an established OAP session being "torn down" ungracefully - and this is exactly what a typical firewall will do if, according to its configuration, a connection has exceeded either its maximum TTL or its inactive timeout. The term "ungracefully" here simply means that the firewall does not send a TCP connection reset message to the client when invalidating the connection. Should you have a well-mannered firewall that does notify the WebGate in this way when it closes a connection, all should be fine and you can probably stop reading at this point. The reality, though, is that most firewalls will not fit into this "polite" category.

    When a firewall does invalidate or tear down an OAP connection that WebGate "thinks" is still good, the first time the WebGate attempts to use that connection, the request will obviously fail. Depending on various configuration parameters and the number of connections available, the WebGate may indeed recover from this situation without any major impact to end users (apart from somewhat degraded performance as the WebGate hunts around for a good connection within its pool). It is always possible, though, that all connections could be invalid at once - for example, a system that sees little traffic outside of office hours could have all of its connections timed out overnight due to inactivity. In this event, there will be a definite impact to end users, with requests failing until WebGate has managed to re-establish its connection pool.

     How to prevent the firewall closing connections

    The way to avoid this problem is to ensure that the firewall is never given cause to close a WebGate connection - in other words, ensuring that WebGate connections never exceed the configured TTL or inactivity timeout as defined at the firewall. This is achieved by configuring a maximum connection lifespan, or TTL, at the WebGate side that is less than the firewall's maximum TTL or idle timeout. As an example, let's assume that our firewall imposes an idle timeout of 30 minutes for TCP connections. In this case, we would need to configure WebGate to automatically re-establish any connection older than, say, 25 minutes in order to ensure that the firewall would never need to time out one of its OAP connections. This is done by altering a WebGate setting called "Max Session Time".

    Now, we need to have a bit of a discussion about this particular setting, for a number of reasons. The first is that it really isn't very well named, considering what it does; it has nothing to do with sessions, but everything to do with connections back to the OAM server and how long they will be allowed to last before being re-established. It should, correctly, be called something like "Max Connection Time" and perhaps in a later version of OAM it will be. As of the time of writing, though (when OAM 11.1.2.1 is the most recent version), we will have to live with the current name. Perhaps more confusing, though, is the fact that, over the various incarnations of the 11g OAM product, the OAM Console page that allows this WebGate parameter to be defined has been changed repeatedly - consider the screenshots below:

    MaxSessionTime

    As we can see above, the OAM Console UI, across several releases, has changed the expected unit of time in which this parameter is specified, starting with no unit at all, then moving to hours and then to seconds. What's more, the default value tends to vary as well, depending on the version you are using and the mechanism that was used to create the initial WebGate profile. The reality of the situation, though, is that you can and should ignore the unit of time that is reflected in the UI, because the default unit for this setting is (and always has been) hours. That's probably worth repeating and highlighting, just to be completely clear:

    In all OAM 11g versions up through the current release, 11.1.2.1, the default unit for Max Session Time is hours, regardless of what is reflected in the OAM Console UI.

    This means that the default maximum TTL for a WebGate connection in OAM 11.1.2.0 and 11.1.2.1 is, in fact, 3600 hours! We did say it was meant to be a long-lived connection... Understanding the default value and the default unit is great, of course, provided that your firewall is (or can be) configured to allow connections to last (or remain idle) for at least an hour. This is often not the case, though.

    But what if my firewall timeout is less than an hour?

    The good news is that OAM 11g WebGates support a user-defined parameter that can be used to change the unit used for Max Session Time. In order to change the unit from "hours" to "minutes", add the following to the "User Defined Parameters" section in the WebGate profile:

    maxSessionTimeUnits=minutes

    Once you've done that, whatever number you've entered in the "Max Session Time" box will be interpreted in minutes, rather than hours (again, regardless of what the UI label tells you). When the change is reflected in the WebGate's ObAccessClient.xml file, you should see entries similar to the following (these reflect the correct settings for our "25 minute" example above):
    ...
    <SimpleList>
            <NameValPair
                ParamName="maxSessionTime"
                Value="25"></NameValPair>
        </SimpleList>
    ...
    
    <userDefinedParameters>
    <name>maxSessionTimeUnits</name>
    <value>minutes</value>
    </userDefinedParameters>
    Remember again - if the maxSessionTimeUnits parameter is not specified, then maxSessionTime will be interpreted in hours.

    Seeing the effect

    Once you've made the appropriate changes, it's always a good idea to verify that things are working as expected. In order to do this, you should increase the log level of your WebGate to at least "INFO" and then filter the WebGate log file (oblog.log) for lines containing the string "CONN_MGMT". That will allow you to monitor the connections that are opened and closed by WebGate over time. I include a log snippet from my own system (where I set the Max Session Time value to 2 minutes) to highlight the messages to look out for. Note that, just to increase confusion further, the timeout value in the log is printed in seconds, rather than minutes or hours.
    2013/12/05@18:30:59.01090       15928   15999   CONN_MGMT       INFO    0x00001C04      /ade/brmohant_17700080/ngamac/src/palantir/aaa_client/src/watcher_thread.cpp:504        "Session expired"       Connection^object{ObConnectionAAA:0x7F2390019D80{_socket=object{ObSocket:0x0241C020{_sock=17}{_my_addr=}{_my_port=0}{_remote_addr=192.168.56.245}{_remote_port=5576}{_use_blocking_calls=false}{_timeout=10000}{_req_pending=0}}}{_state=ObConnUp}{_priority=1}{_debug=false}{_host=oamr2ps1.oracle.com}{_port=5576}{replyMapSize=0}{_seqno=9}{_isSpare=false}{_createTime=1386268079}{_closedTime=0}{_retries=0}}      Maximum Session Time^120        Current Time^1386268259
    2013/12/05@18:30:59.01573       15928   15999   CONN_MGMT       INFO    0x00001C02      /ade/brmohant_17700080/ngamac/src/palantir/aaa_client/src/watcher_thread.cpp:474        "New connection opened to Access Server"        Connection^object{ObConnectionAAA:0x7F2390306270{_socket=object{ObSocket:0x023E8390{_sock=19}{_my_addr=}{_my_port=0}{_remote_addr=192.168.56.245}{_remote_port=5576}{_use_blocking_calls=false}{_timeout=10000}{_req_pending=0}}}{_state=ObConnUp}{_priority=1}{_debug=false}{_host=oamr2ps1.oracle.com}{_port=5576}{replyMapSize=0}{_seqno=0}{_isSpare=true}{_createTime=1386268259}{_closedTime=0}{_retries=0}}
    2013/12/05@18:30:59.01584       15928   15999   CONN_MGMT       INFO    0x00001C04      /ade/brmohant_17700080/ngamac/src/palantir/aaa_client/src/watcher_thread.cpp:504        "Session expired"       Connection^object{ObConnectionAAA:0x7F239028F770{_socket=object{ObSocket:0x023D3000{_sock=18}{_my_addr=}{_my_port=0}{_remote_addr=192.168.56.245}{_remote_port=5576}{_use_blocking_calls=false}{_timeout=10000}{_req_pending=0}}}{_state=ObConnUp}{_priority=1}{_debug=false}{_host=oamr2ps1.oracle.com}{_port=5576}{replyMapSize=0}{_seqno=9}{_isSpare=false}{_createTime=1386268079}{_closedTime=0}{_retries=0}}      Maximum Session Time^120        Current Time^1386268259
    2013/12/05@18:31:59.10580       15928   15999   CONN_MGMT       INFO    0x00001C02      /ade/brmohant_17700080/ngamac/src/palantir/aaa_client/src/watcher_thread.cpp:474        "New connection opened to Access Server"        Connection^object{ObConnectionAAA:0x7F239028F770{_socket=object{ObSocket:0x0241C020{_sock=17}{_my_addr=}{_my_port=0}{_remote_addr=192.168.56.245}{_remote_port=5576}{_use_blocking_calls=false}{_timeout=10000}{_req_pending=0}}}{_state=ObConnUp}{_priority=1}{_debug=false}{_host=oamr2ps1.oracle.com}{_port=5576}{replyMapSize=0}{_seqno=0}{_isSpare=false}{_createTime=1386268319}{_closedTime=0}{_retries=0}}
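    For reference, the log lines above were captured with a simple filter along these lines; the oblog.log path below is an assumption and will vary by WebGate installation:

    # Follow the WebGate log and keep only the connection-management events
    tail -f /path/to/webgate/oblog.log | grep CONN_MGMT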
    As a closing note, remember to reduce the log level of your production WebGates again once you've verified that the correct connection time setting is in force.

    OIM ICF based connector filter error


    Introduction

    Recently, I was helping a customer with an OIM project go-live when we ran an “Active Directory User Target Recon Job” with an AD Connector (11.1.1.6) and a regular expression filter to select just a subset of users.

    Main Article

    To our surprise, every time we executed the job, we got a strange error: java.lang.VerifyError: (class: org/codehaus/groovy/runtime/ArrayUtil, method: createArray signature: ()[Ljava/lang/Object;) Illegal type in constant pool.

    ICFFilter

    This message might be misleading at first glance; it suggests classloading or class compilation issues. My customer was running OIM 11.1.1.5.7 with a HotSpot JVM, so we started investigating issues related to JVM errors and looked for related messages in the logs. We discovered that the actual root cause for this error was lack of space for the Code Cache. The way HotSpot (and its JIT compiler) works, it uses non-heap space to host a variety of objects that are considered part of the JVM mechanics. Those objects are not created in the heap space (defined by -Xms and -Xmx) but in separate spaces: the Permanent Generation (or PermGen) and the Code Cache. The Code Cache is used by the JIT compiler to store compiled pieces of bytecode that are executed regularly and chosen by the JVM to be compiled into native code. When the Code Cache space is full, we see errors like java.lang.VirtualMachineError: out of space in CodeCache for adapters in the logs, indicating that the JVM is having trouble finding space in the Code Cache. In situations like these, the recommended solution is to increase the Code Cache size using the -XX:ReservedCodeCacheSize JVM option. Try setting it to 256m (-XX:ReservedCodeCacheSize=256m) and increase it further if you still see the issue. Bounce the servers each time you make changes to the JVM parameters. Although this issue was found during AD connector deployment, it might happen with other connectors that are also based on the ICF framework, since all of them have the same reconciliation filtering capabilities based on regular expressions.
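
    For WebLogic-hosted OIM servers, one common place to add the flag is the domain environment script. The sketch below is a hedged example; the script location shown is the WebLogic default and may differ in your installation:

    # In <DOMAIN_HOME>/bin/setDomainEnv.sh (or your managed server start script):
    # reserve 256 MB for the JIT Code Cache, then restart the servers.
    EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -XX:ReservedCodeCacheSize=256m"
    export EXTRA_JAVA_PROPERTIES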

    OAM LDAP connections through firewalls


    Introduction

    In a previous post, we discussed some of the complications that can occur when a firewall is placed between WebGates and OAM Servers in a typical deployment. This post follows on from that discussion to explore an analogous topic: firewalls between the OAM Server and the LDAP Identity Store. This post is part of a larger series on Oracle Access Manager 11g called Oracle Access Manager Academy. An index to the entire series with links to each of the separate posts is available.

    The problem we are trying to solve

    Without repeating the discussion from the previous post, the problem, in a nutshell, is preventing an over-eager firewall from tearing down an LDAP Identity Store connection that OAM still needs to use. Should this happen, OAM requests sent over that connection will fail, leading to degraded performance as the LDAP connection is re-established and the operation retried. Once again, the solution lies in configuring OAM's LDAP connection pool to refresh connections of its own accord, by appropriately setting the Identity Store's connection TTL (time to live) as below.

    This procedure works for OAM versions from 11.1.1.5 up to and including 11.1.2.1.

     

     How to prevent the firewall closing connections

    The way to avoid this problem is to ensure that the firewall is never given cause to close an LDAP connection - in other words, ensuring that LDAP connections never exceed the configured TTL or inactivity timeout as defined at the firewall. This is done by configuring a maximum LDAP connection lifespan, or TTL, at the OAM side that is less than the firewall's maximum TTL or idle timeout: add the MaxConnectionReuseTime setting to the Identity Store configuration in oam-config.xml, as per the snippet below. The value is specified in seconds.
    <Setting Name="LDAP" Type="htf:map">
            <Setting Name="E9ABCBCF59F0CDEC56" Type="htf:map">
              .....         
              <Setting Name="LDAP_URL" Type="xsd:string">ldap://idstore.example.com:389</Setting>
              <Setting Name="ReferralPolicy" Type="xsd:string">follow</Setting>
              <Setting Name="GroupCacheSize" Type="xsd:integer">10000</Setting>
              <Setting Name="MaxConnectionReuseTime" Type="xsd:string">1740</Setting>
              <Setting Name="UserIdentityProviderType" Type="xsd:string">OracleUserRoleAPI</Setting>
            </Setting>
           .....
    </Setting>
    In the above example, MaxConnectionReuseTime has been set to 1740 seconds, or 29 minutes. This would be an appropriate setting for a firewall that times connections out after 30 minutes, since the OAM TTL should always be lower than that enforced by the firewall. Be sure to set MaxConnectionReuseTime to an appropriate value for your own environment.
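
    To verify that OAM really is recycling its LDAP connections before the firewall timeout, one option is to watch the established connections from the OAM host and confirm that none of them lives longer than the configured TTL. A rough, Linux-specific sketch, using the LDAP port from the snippet above:

    # On the OAM server host: list established connections to the LDAP port every 30s
    watch -n 30 'netstat -an | grep ":389 " | grep ESTABLISHED'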

    Be sure to follow the correct procedure for making manual edits to oam-config.xml, as described in this post.

     As a closing comment, be aware that you may additionally need to configure your LDAP server to enforce a connection TTL; in this case, though, the server-side timeout should be higher than that set by the firewall - and obviously also higher than OAM's MaxConnectionReuseTime.

    Multi-Data Center Implementation in Oracle Access Manager

    For obvious reasons, there is high demand for the Multi-Data Center (MDC) topology, which is now supported in Oracle Access Manager (OAM) 11g. This post discusses some of the features of MDC as well as provides some detailed steps on how to clone a secondary data center. This post is based on the R2PS1 code base. With PS2 there are some new features I will cover below. Here is the PS2 document library for reference.

    Main Article

    Here is a conceptual topology for an MDC deployment.

    mdc-pic1

    This should be pretty self-explanatory. Notice the Global Load Balancers (GLBR); both the New York and London data centers must be front-ended with a GLBR for MDC support. This allows a user request to be routed to a different data center when:
    • The data center goes down.
    • There is a load spike causing redistribution of traffic.
    • Certain applications are deployed in only one data center.
    • WebGates are configured to load balance within one data center but failover across data centers.

    Deployment

    There are two parts to deploying MDC. The first part is 'cloning' the configuration from the master site to a secondary site using the Test-to-Production (T2P) process. The second part is to enable the MDC configuration so that each partner site is aware of the other. This post will only cover the T2P procedure. T2P is not new; however, many of our legacy OAM customers may not be familiar with it. I will describe the commands I executed to clone a master site to a secondary site using T2P. More details on T2P can be found in the Oracle Fusion Middleware guide here. MDC supports both active-active and active-passive/stand-by scenarios. The following prerequisites must be satisfied before deploying Multi-Data Centers:
    • All Data Center clusters must be front ended by a single Load Balancer.
    • Clocks on the machines in which Access Manager and agents are deployed must be in sync. Non-MDC Access Manager clusters require the clocks of WebGate agents be in sync with Access Manager servers. This requirement applies to the MDC as well. If the clocks are out of sync, token validations will not be consistent resulting in deviations from the expected behaviors regarding the token expiry interval, validity interval, timeouts and the like.
    • The identity stores in a Multi-Data Center topology must have the same Name.
    High-level Steps:
    • The first Data Center is designated as Master and will be cloned (using T2P tools) for additional Data Centers.
    • All configuration and policy changes are propagated from the Master to the Clone using the WLST commands provided as part of the T2P Tooling.
    • Each Data Center is a separate WebLogic Domain and the install topology is the same.
    Below are the steps I used to clone a master data center of two OAM servers in a cluster to a secondary data center. For more details on the scripts I used, please check the documentation here. Detailed steps:

    The two steps below are only required for OAM version R2PS1.  Exporting/importing the schema in PS2 is no longer required.  There is a new feature called 'Automatic Policy Synchronization' (APS).  Click here to learn more.

    • Export the OPSS schema from the 'master' DB instance.  Set the ORACLE_HOME to the db home directory and execute the 'expdp' command.

    export ORACLE_HOME=/u01/DB/product/11.2.0/dbhome_1

    cd $ORACLE_HOME/bin

    ./expdp system/welcome1@db11g DIRECTORY=DATA_PUMP_DIR SCHEMAS=STMTEST_OPSS DUMPFILE=export_TEST.dmp PARALLEL=2 LOGFILE=export.log

     Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is: /u01/db11g/admin/db11g/dpdump/export_TEST.dmp

     
    • Import the OPSS schema to the secondary/cloned DB.  Make sure that the schema on the secondary/cloned DB instance is loaded via RCU. Load both the OAM and OPSS schema on the secondary DB instance and note down the schema names.

    ./impdp system/welcome1@orcl DIRECTORY=DATA_PUMP_DIR DUMPFILE=export_TEST.dmp PARALLEL=2 LOGFILE=import.log remap_schema=STMTEST_OPSS:STMPROD_OPSS remap_tablespace=STMTEST_IAS_OPSS:STMPROD_IAS_OPSS TABLE_EXISTS_ACTION=REPLACE

     
    • On the 'master' machine you need to copy the binaries. Server state is immaterial. Make sure to create the /oam_cln_log directory first.  I also recommend you create a separate directory to store MDC related artifacts; for example /u01/MDC_FILES.

    cd /u01/IAM1/Middleware/oracle_common/bin

    copyBinary.sh -javaHome /home/oracle/java/jdk1.7.0_10 -archiveLoc /u01/MDC_FILES/oamt2pbin.jar -sourceMWHomeLoc /u01/IAM1/Middleware -idw true -ipl /u01/IAM1/Middleware/oracle_common/oraInst.loc -silent true -ldl /u01/MDC_FILES/oam_cln_log

     
    • On the 'master' machine you need to copy the configuration. Both the administration server and all managed servers need to be up and running. The WebLogic server must also be in production mode.

    copyConfig.sh -javaHome /home/oracle/java/jdk1.7.0_10 -archiveLoc /u01/MDC_FILES/oamt2pConfig.jar -sourceDomainLoc /u01/IAM1/Middleware/user_projects/domains/IAMDomain -sourceMWHomeLoc /u01/IAM1/Middleware -domainHostName iam1.us.oracle.com -domainPortNum 7001 -domainAdminUserName weblogic -domainAdminPassword /u01/MDC_FILES/t2p_domain_pass.txt -silent true -ldl /u01/MDC_FILES/oam_cln_log_config

      The following commands are to be executed on the 'clone' machine.
    • Copy the following files from the master environment: oamt2pbin.jar, oamt2pConfig.jar, pasteBinary.sh, oraInst.loc and cloningclient.jar.  The oamt2pbin and oamt2pConfig jar files should have been created with the copy commands above.  The cloningclient.jar, pasteBinary.sh and oraInst.loc can be found within the /oracle_common directory.
    • Using the pasteBinary.sh script will copy the binary data (oamt2pbin.jar) to the new server. No Oracle software, with the exception of Java, should be installed on the new machine. In this example, the placeholder directories /u01/IAM1 and /u01/MDC_FILES/oam_cln_log need to exist before running the command below.

    ./pasteBinary.sh -javaHome /home/oracle/java/jdk1.7.0_10 -al /u01/MDC_FILES/oamt2pbin.jar -tmw /u01/IAM1/Middleware -silent true -idw true -esp false -ipl /u01/MDC_FILES/oraInst.loc -ldl /u01/MDC_FILES/oam_cln_log

    • Next we need to extract a move plan file.  This file allows you to modify some of the details of the new environment.  The script is called 'extractMovePlan.sh' and is located under /oracle_common/bin.

    ./extractMovePlan.sh -javaHome /home/oracle/java/jdk1.7.0_10 -al /u01/MDC_FILES/oamt2pConfig.jar -planDirLoc /u01/MDC_FILES/moveplan/

      Once the 'moveplan.xml' was created, I changed the following:
    • All host name endpoints. For example, my master host name was iam1.us.oracle.com; I changed this to iam2.us.oracle.com. If you have multiple components on the same machine, make sure you modify all properties that apply to your deployment.
                         <configProperty>
                            <name>Listen Address</name>
                            <value>iam2.us.oracle.com</value>
                            <itemMetadata>
                                <dataType>STRING</dataType>
                                <scope>READ_WRITE</scope>
                            </itemMetadata>
                        </configProperty>
     
    • WLS machine name and Node Manager host name.
                <configGroup>
                    <type>MACHINE_CONFIG</type>
                    <configProperty id="Machine1">
                        <configProperty>
                            <name>Machine Name</name>
                            <value>IAM2</value>
                            <itemMetadata>
                                <dataType>STRING</dataType>
                                <scope>READ_WRITE</scope>
                            </itemMetadata>
                        </configProperty>
                        <configProperty>
                            <name>Node Manager Listen Address</name>
                            <value>iam2.us.oracle.com</value>
                            <itemMetadata>
                                <dataType>STRING</dataType>
                                <scope>READ_WRITE</scope>
                            </itemMetadata>
                        </configProperty>
                        <configProperty>
                            <name>Node Manager Listen Port</name>
                            <value>5556</value>
                            <itemMetadata>
                                <dataType>INTEGER</dataType>
                                <scope>READ_WRITE</scope>
                            </itemMetadata>
                        </configProperty>
                    </configProperty>
                </configGroup>
     
    • Schema owners.  Make sure you change both the OPSS and OAM schema configuration property.
                        <configProperty>
                            <name>User</name>
                            <value>MDC2_OPSS</value>
                            <itemMetadata>
                                <dataType>STRING</dataType>
                                <scope>READ_WRITE</scope>
                            </itemMetadata>
                        </configProperty>
    
                        <configProperty>
                            <name>User</name>
                            <value>MDC2_OAM</value>
                            <itemMetadata>
                                <dataType>STRING</dataType>
                                <scope>READ_WRITE</scope>
                            </itemMetadata>
                        </configProperty>
     
    • Now we paste the configuration on the target/clone machine using the 'moveplan.xml' we just modified.

    ./pasteConfig.sh -javaHome /home/oracle/java/jdk1.7.0_10 -archiveLoc /u01/MDC_FILES/oamt2pConfig.jar -targetMWHomeLoc /u01/IAM1/Middleware -targetDomainLoc /u01/IAM1/Middleware/user_projects/domains/IAMDomain -movePlanLoc /u01/MDC_FILES/moveplan/moveplan.xml -domainAdminPassword /u01/MDC_FILES/t2p_domain_pass.txt -ldl /u01/MDC_FILES/oam_cln_log_paste -silent true

      You should now be able to start the Administration/OAM servers on the secondary/cloned machine.  

    OIM monitoring check-list


    Introduction

    Systematic monitoring of OIM deployments helps to reduce the risk of both technical and security related issues. It can also help to avoid the performance degradation that can happen because of data growth over time. This post presents a set of topics about OIM and WebLogic monitoring, and it presents tools that can be used for both monitoring and diagnostics. This list is not intended to replace any official product documentation; instead, it should be used in conjunction with it. This is another post in the OIM Academy series. You can check the complete series here.

    OIM Features

    • OIM scheduler: scheduled tasks are an essential feature of OIM. Administrators should check for things like failed tasks, long running tasks, unnecessary tasks that can be disabled, and others.
    • Open provisioning tasks: when provisioning tasks fail, they are assigned to the system administrator group (unless configured differently) and will show up in the open tasks list. The open tasks list should be checked frequently to make sure that tasks are not accumulating. A growing number of open tasks might be a symptom of an environmental problem. OIM is also capable of sending out notifications when a task fails, but the task needs to be configured for that.
    • Pending approval tasks: approval tasks that are pending for longer than expected might be a symptom of a problem. For example: notifications are not going out of OIM/SOA and approvers are not aware of the pending tasks. It also could be a symptom of communication problems between SOA and OIM.
    • Non-processed reconciliation events: accumulation of events in ‘Data Received’ or ‘Event Received’ status might be a symptom of a problem. Check here for a complete list of event statuses. Administrators should periodically check the reconciliation events to make sure they are being correctly processed.
    • Pending audit events: when the audit event creation rate is higher than the audit event processing rate, events will start to accumulate in the AUD_JMS table. The ‘Issue Audit Message’ scheduled task must be properly configured to handle the load. Accumulation of events in the AUD_JMS table can also be a symptom of event processing failure. Administrators should monitor the table growth and space consumption on the database side (see the sketch after this list).
    • Data growth: OIM transaction data will grow over time if proper archival and purge processes are not in place. Make sure that the processes are in place and that their frequency is in line with expected data growth. The archival and purge processes take care of four different types of data, as documented here:
      • Orchestration: data related to the transactions that happens over users, roles, organizations and provisioning.
      • Request: data related to requests raised in OIM.
      • Reconciliation: data generated by the connectors and the reconciliation engine.
      • Provisioning: data related to the connectors provisioning tasks.
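
    As an example of the AUD_JMS check mentioned in the list above, a simple backlog count can be scripted against the OIM database. This is an illustrative sketch only; the schema name and credentials are placeholders:

    # Count pending audit events in the OIM schema (credentials are placeholders)
    echo "SELECT COUNT(*) FROM AUD_JMS;" | sqlplus -s dev_oim/password@oimdb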

    Tools

    • DMS Metrics: OIM uses the Oracle Dynamic Monitoring Service (DMS) feature to report internal metrics. A lot of metrics are available through DMS, including average, maximum and minimum execution time for provisioning adapters, client API login, event handlers and scheduled tasks. DMS metrics are accessible through http://admin_server:port/dms; WebLogic domain administrator credentials can be used to access it (see the example after this list). DMS metrics can, among other things, be used to find bottlenecks in OIM operations. More information about DMS is found here.
    • Diagnostic Dashboard: the dashboard is a tool that provides diagnostics of an OIM deployment. It runs as a separate Web application deployed to the OIM server/cluster. It does not have any considerable performance impact on OIM. Instructions on how to deploy the Diagnostic Dashboard are found here.
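
    As a quick illustration of the DMS access mentioned in the list above, the DMS Spy servlet can also be queried from the command line; the host, port and credentials below are placeholders:

    # List the available DMS metric tables using WebLogic administrator credentials
    curl -u weblogic:password "http://admin_server.example.com:7001/dms/Spy"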

    Infrastructure

    • WebLogic resources: it is also important to monitor WebLogic resources.
      • Data sources: are the data sources well sized? The WebLogic console offers a page that contains a set of data source usage numbers, such as peak number of connections in use, average number of connections in use, number of leaked connections, and so on.
      • JMS queues: make sure that the number of pending messages is not growing over time.
      • Cluster: the WebLogic console offers live cluster information, such as the frequency of servers dropping from the cluster.
      • Stuck threads: WebLogic is capable of notifying administrators of threads that have been running for longer than a specific threshold. Although WebLogic considers such threads stuck, it does nothing to address possible issues. Long-running threads might be an indication of problems.
    • JVM and OS: as with any other Java-based application, it is important to monitor the operating system and the JVM resources to make sure that CPU, memory, IO and other factors are not imposing performance penalties. There are plenty of tools that can be used for that, but that is a subject for another post.
    This post was created with the help of my colleagues Rob Otto and Pulkit Sharma. Thanks to them for sharing their ideas.

    Oracle Access Manager – What’s new in PS2


    Introduction

    Oracle Access Manager 11gR2 – PS2 is now out!  This post will cover some of the new features in PS2.

    There are six new features I will discuss:

    • Dynamic Authentication
    • Persistent Login (Remember Me)
    • Policy Evaluation Ordering
    • Delegated Administration
    • Unified Administration Console
    • Session Management
      • Granular Idle Timeout
      • Client Cookie based Session

    Main Article

    Dynamic Authentication

    Dynamic authentication is the ability to define which authentication scheme should be presented to a user based on some condition. For example, if a user is using a specific browser, say Firefox, present them with a specific scheme only for Firefox users. Here are some screen shots:

    authPolicy1

    Select the ‘Advanced’ tab

     

    authPolicy2

    Specify the condition and define what scheme you want.

     

    Persistent Login (Remember Me)

    Persistent Login is the ability to let users login without credentials after the first-time login.  This feature is disabled by default and can be set at the application domain level.  Again here are some screen shots:

    persistLogin1

     

     

    persistLogin2

     

    persistLogin3

     

    persistLogin4

     

     

    Policy Evaluation Ordering

    The out-of-the-box algorithm for evaluating policies is based on “best match”. In PS2 you now have the option to specify a custom order for the policies in a particular application domain. Also, if you are migrating from 10g, the policy order is maintained.

    policyOrder1

     

    policyOrder2

    Delegated Administration

    Ah, our old friend is back! For those of you who remember: in older versions of OAM (10g and prior) you had the ability to select users who could administer their own application domains. In PS2, there is a new role called ‘Application Domain Admin Role’. These users now have full access to application domains. Also, migration from 10g will preserve the admin configuration. This is supported via the UI as well as the REST API.

    DelagatedAdmin1

     

    DelagatedAdmin2

     

    Unified Administration Console

    The console screen has a new look; a new single ‘Launch Pad’ screen with services that are enabled based on user roles.  The tree navigation has been removed.

    launchPad1

    Session Management
    Granular Idle Timeout

    You now have the ability to set idle session timeouts at the application domain level; this will override the global settings. In this example, the idle session timeout is set to fifteen minutes as the global setting, whereas it is set to five minutes in the application domain.

    globalTimeout1

     

    globalTimeout2

     

    Client Cookie based Session

    Cookie-based sessions are more scalable in that all session data is maintained on the client side (browser). This is designed for very large deployments where server-side sessions can be more expensive, making the server stateless. This is very similar to OAM 10g; however, it will not support the following:

    • Session Management, session limits
    • Identity Context
    • Granular Timeout
    • Session attribute based on authorization policies

     

    Additional features

    This is just a short list of improvements in PS2.  Other enhancements include:

    • Upgrade Enhancements
    • Install/Patching Automation for IDM
    • Multi-Data-Center Deployment.  You can read more here.
    • Automated Replication
    • Performance Enhancements
    • SHA-2 Encryption for Webgates
    • IPV6 Support
    • Customized Error Pages
    • Complete convergence for Federation – Service Provider(SP) & Identity Provider(IDP)

    I want to thank our OAM PM, Venu Shastri, for providing this list of new features.


    Strategies for managing OAAM to OAM connections in production


    Many Oracle Access Management 11g customers opt to deploy a combination of Oracle Access Manager and Oracle Adaptive Access Manager using the Advanced Integration option. This combination of product features can provide strong, adaptive authentication and fraud mitigation for online applications. In this post, we examine a number of strategies for configuring the connectivity between these components in order to provide scalability and high availability for production deployments.

    The information in this post applies to the 11g R2 versions of OAAM and OAM only (at the time of writing, 11.1.2.0, 11.1.2.1 and 11.1.2.2).

     

    Before continuing, readers are advised to consult Appendix C of the Oracle Fusion Middleware Integration Guide for Oracle Identity Management Suite (11.1.2.2 release here) to familiarize themselves with the Advanced Integration option, in terms of its features, benefits and configuration steps. This post will concentrate only on the configuration of the necessary parameters controlling the OAP communication pool between OAAM and OAM.

    The problem we are trying to solve

    When OAM and OAAM are deployed using the Advanced Integration pattern, the two product components play different roles during the authentication process. Through the use of the OAAM Authentication Scheme in OAM, the process of collecting credentials (and thus handling the entire authentication flow with the user’s browser) is handled by OAAM. The actual authentication (or, in fact, credential validation) step is still performed by OAM via a back-channel OAP (Oracle Access Protocol) call from OAAM. OAAM uses its configured logic to collect username and password from the user, with the aid of virtual strong authentication devices, fraud detection rules and the like. Once it has collected these credentials, it uses an embedded OAM Access SDK client (or custom AccessGate) to pass these credentials to the OAM server. OAM validates the credentials against its configured LDAP identity store and returns the result to OAAM. Should the authentication succeed, OAAM then generates a Delegated Authentication Protocol (DAP) token and redirects the user back to OAM with this token in order to create the necessary OAM session.

    In order to ensure sufficient performance and availability for production deployments, it is thus critical to ensure that this OAP connection mechanism between OAAM and OAM is correctly configured to meet the applicable requirements.

    How OAAM manages connections to OAM

    Unlike OAM webgates, which are completely configurable via the webgate profile in the OAM console (which in turn generates the ObAccessClient.xml file), OAM Access SDK clients (such as OAAM) do not use the webgate profile for anything other than basic authentication to the OAM server. What this means is that while the webgate ID and password are important, OAAM will essentially ignore any other settings on the webgate profile – in particular, those settings controlling the number of primary and secondary OAP connections that should be created against each OAM server, which allow for load balancing and high availability when configuring webgates. Instead, OAAM’s connection pool is configured via a number of OAAM properties, which provide somewhat less flexibility in terms of support for load balancing. We’ll explore these properties below, before discussing a number of strategies that can be used to ensure a production-ready deployment. Please also see Appendix C of the Oracle Fusion Middleware Administrator’s Guide for Oracle Adaptive Access Manager (11.1.2.2 release here).

     

    • oaam.uio.oam.webgate_id – defines the webgate ID used by OAAM. This defaults to IAMSuiteAgent and should not be changed.
    • oaam.oam.csf.credentials.enabled – this property, when set, uses the Fusion Middleware Credential Store Framework (CSF) to securely store passwords, such as the webgate password. This should always be set to true.
    • oaam.uio.oam.security.mode – defines the communication security between OAAM and OAM; can be either 1 (open), 2 (simple) or 3 (cert). Open is the default.
    • oaam.uio.oam.host – defines the primary OAM hostname to which OAP connections should be established.
    • oaam.uio.oam.port – defines the OAP port for the primary OAM host (this defaults to 5575).
    • oaam.uio.oam.secondary.host – defines the secondary, or failover, OAM hostname. OAP connections will only be established to this host if connections to the primary OAM host fail.
    • oaam.uio.oam.secondary.host.port – defines the OAP port for the secondary OAM host (this defaults to 5575).
    • oaam.oam.oamclient.minConInPool – defines the minimum number of OAP connections that OAAM will maintain in its pool. This setting is respected by each OAAM server.
    • oaam.uio.oam.num_of_connections – defines the target (maximum) number of OAP connections to the primary OAM server that OAAM will maintain in its pool. This setting is respected by each OAAM server. The default value is 5.
    • oaam.uio.oam.secondary.host.num_of_connections – defines the target (maximum) number of OAP connections to the secondary OAM server that OAAM will maintain in its pool. This setting is respected by each OAAM server. The default value is 5.
    • oaam.oam.oamclient.timeout – the period in seconds that a request will wait for an available OAP connection before timing out. The default is 3600 seconds (1 hour), which is far too high and should always be reduced to not more than 60 seconds in production.
    • oaam.oam.oamclient.periodForWatcher – defines the rest period (in seconds) for the OAAM Pool Watcher thread, a thread which periodically checks the health of connections in the pool. The default is 3600 seconds (1 hour), which should probably be reduced to around 300 (5 minutes) for production deployments.
    • oaam.oam.oamclient.initDelayForWatcher – defines the initial delay (in seconds) before the OAAM Pool Watcher thread starts to check connections. The default is 3600 seconds (1 hour), which should probably be reduced to around 300 (5 minutes) for production deployments.

    Perusing the above properties, the immediate observation is that only a single primary and single secondary OAM server can be specified. This is obviously of limited usefulness for large-scale production deployments, where it is a fairly obvious requirement to want to load balance requests from OAAM across a number of OAM servers. Below, we explore a number of options that can work.

     Options for OAAM to OAM connection load balancing

    1: Override deployment-wide properties on a per-host basis

    In a deployment where the number of OAAM nodes matches the number of OAM nodes exactly, a fairly sensible and robust load balancing approach is simply to allocate a single primary and a single secondary OAM server to each OAAM server. This can be achieved by overriding the deployment-wide oaam.uio.oam.host and oaam.uio.oam.secondary.host settings on each individual OAAM host. In order to do this, first ensure that you delete the applicable property values from the OAAM database via the OAAM console. Then pass a unique value to each OAAM server instance at startup via a Java property, e.g.

    -Doaam.uio.oam.host=<primary_host_name> and -Doaam.uio.oam.secondary.host=<secondary_host_name>
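
    In a WebLogic domain, one way to pass these per-server overrides is to set them in JAVA_OPTIONS before starting each OAAM managed server; the host names and server name below are placeholders:

    # On one OAAM node: point this server at its own primary/secondary OAM hosts
    JAVA_OPTIONS="${JAVA_OPTIONS} -Doaam.uio.oam.host=oamhost1.example.com -Doaam.uio.oam.secondary.host=oamhost2.example.com"
    export JAVA_OPTIONS
    ./startManagedWebLogic.sh oaam_server1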

    Consider a deployment comprising two OAAM hosts (Host A and Host B) and two further OAM hosts (Host C and Host D). Using this approach, Host A would be configured with the following settings:

    oaam.uio.oam.host: Host C and oaam.uio.oam.secondary.host: Host D

    while Host B would be configured with

    oaam.uio.oam.host: Host D and oaam.uio.oam.secondary.host: Host C

    This configuration would ensure that both OAM hosts received an equivalent number of connections, thus providing load balancing, while also providing resilience in case either OAM server should become unavailable.

    This approach, though, would suffer from a number of drawbacks, including the following:

    • unsuitable for deployments where the numbers of OAM and OAAM nodes are not equal.
    • manageability is reduced as OAAM console cannot be used to configure per-host parameter values.
    • would not scale much beyond two nodes while still providing high availability. The loss of more than one OAM node at any one time would potentially render certain OAAM nodes unusable.
    • no way to rebalance load across OAM nodes in case an OAAM node goes down.

    2: Use virtual hostnames

    The second option is similar to the first, in that it allows for the definition of a single primary and a single secondary OAM server for each OAAM server. In this case, though, rather than overriding domain-wide property values, the approach is to use virtual hostnames to define the OAM servers.

    For example, we would define the following:

    oaam.uio.oam.host: oam-primary.domain.com

    oaam.uio.oam.secondary.host: oam-secondary.domain.com

    We would then use the /etc/hosts file on each OAAM node to define exactly which physical OAM server IP address the virtual hostnames oam-primary and oam-secondary should resolve to. In our above scenario, OAAM HOST A would have entries in its hosts file mapping oam-primary to the IP address for OAM Host C and oam-secondary to the IP address for OAM Host D. HOST B would instead map oam-primary to the IP address for OAM Host D and oam-secondary to the IP address for OAM Host C.
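
    As an illustration, the /etc/hosts entries on OAAM Host A might look like the following; the IP addresses are placeholders:

    # /etc/hosts on OAAM Host A: oam-primary resolves to OAM Host C,
    # oam-secondary to OAM Host D
    192.0.2.13   oam-primary.domain.com
    192.0.2.14   oam-secondary.domain.com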

    In cases where OAAM and OAM servers are co-located on the same hardware, we can use a shortcut and specify “localhost” as the oaam.uio.oam.host value.

    This approach provides pretty much exactly the same benefits as the first option and incurs the same drawbacks, with the possible exception that it may prove somewhat easier to manage in production. In particular, the fact that any of the virtual mappings could be changed dynamically (without needing to restart OAAM) would be a definite advantage of this strategy.

    3: Use an external load balancer

    Perhaps the most obvious solution to this problem is to insert some form of external load balancer between OAAM and OAM. In this case, OAAM is configured such that the oaam.uio.oam.host property points to the address of the load balancer, which then in turn distributes requests to the OAM servers according to whatever algorithm is desired. In this scenario, it does not even make sense to define the oaam.uio.oam.secondary.host property (unless there is a second, redundant load balancer in place) since it’s assumed that the load balancer itself will only route requests to active OAM nodes.

    This approach has a number of benefits when compared to options 1 and 2 above, including the following:

    • can be used to balance load from any number of OAAM servers to any number of OAM servers; there is no requirement for symmetry
    • better scalability beyond 2 nodes
    • better manageability via load balancer console, rather than host files/command-line switches

    These benefits do come at a cost, however, in terms of increased complexity within the deployment. There will obviously also be a physical cost to procuring and commissioning the necessary load balancing device.

    In addition, some caveats need to be mentioned at this point.

    Firstly, while it may seem an obvious point, it's worth remembering that OAP is a long-lived, TCP-based protocol and thus the load balancer used must be able to handle such a protocol. OAP is not HTTP, thus an HTTP-only load balancer cannot be used here.

    The fact that OAP connections are long-lived can introduce some unforeseen complications, like the ones described in this excellent post by Chris Johnson. Unless the load balancer is able to dynamically rebalance connections, it is possible that an OAM server outage could result in an unbalanced connection load even after the troublesome server is brought back on-line. The only way to mitigate this situation would be to perform a managed rolling restart of the OAAM cluster once all the OAM servers are up again.

    The comments in this blog post about connection timeouts are also applicable; it is best to configure the load balancer so as not to time out idle/long-lived connections if possible. If not, these timeouts should be set for as long as possible, since we do not have the equivalent of the webgate "Max Session Time" parameter available through OAAM's configuration properties. If it is not possible to avoid connection timeouts, then as a mitigation, be sure to set the oaam.oam.oamclient.periodForWatcher property to a low enough value to increase the likelihood that the OAAM pool watcher will detect and re-establish a timed-out connection before a real client request attempts to use it.

     4: Use a combination of the above

    While there is obviously no perfect answer or one-size-fits-all solution here, the most sensible approach may well be to combine the above options; a number of the more unpleasant side effects caused by load balancing OAP can be avoided by using a direct host connection (either option 1 or 2) for the primary OAM server connection. If a load balancer is available, it could be used as the secondary, thus allowing the solution to scale beyond two nodes without compromising availability.

    Part 2: Advanced Apache JMeter Stress Testing OAM and LDAP


    Introduction

    In “Part 1: How To Load Test OAM11g using Apache JMeter”, I talked about an example plan that could be used to load test OAM11g, which included some common configuration elements, some samplers for login, authorization, and logout, and some listeners that provided result analysis. In Part 2, I want to expand on an option to make JMeter send random logins (and explain why), and then cover how to leverage JMeter to load test an LDAP server like OUD, OID, ODSEE, or OVD.

    Main Article

    How to get JMeter to do Random Logins

    As I mentioned in Part 1, the load test simply went through a list of users, each of which would log in. When the test plan is run a second time, OAM will have cached authorization data for the purpose of improved performance, so each run of the load test would be skewed a bit; the test results may be inconsistent, making it harder to determine whether tuning or other changes have improved results. I want to note that OAM does not cache login data like passwords or usernames; the caching is more about policies for authorizations.

     

    So why care about randomizing login requests in a load test? Randomizing, in my opinion, provides a way to simulate a slightly more realistic load test, since in real life you would not expect Brian to log in just before Pardha every single time. One option to avoid this dilemma is to restart the environment after each load test, and this is often common practice, but that can cost a lot of time. The approach I am proposing generates random logins in a way that is a little more realistic anyway, so why not do it?

     

    It would be really nice if the CSV Data Set Config element had an option to randomly select users from the list in the file, but unfortunately it doesn't. So I spent a little time trying to come up with a good way to accomplish random logins, and it was not as easy as I had hoped. I finally came up with an approach that works pretty well, though I would love to get feedback if there is another way to do it. The approach I came up with has some additional benefits; more on that later. To get started, please download the OAM11g_AdvLoadTest.jmx.zip test plan and follow through the remaining sections.

     

    Split the User List

    The first step is to take the CSV file that holds the list of users JMeter uses to iterate through logins and split it into chunks. Each split list of users will be tied to a sampler; more on this later. To get started, complete the following steps.

     

1. Download the example import_users_list.ldif.zip LDIF, which contains a sample list of 5,000 users with randomly populated data.  I generated this using a Perl script.  If you have your own LDIF of users from the OAM Identity Store, use that.  Keep in mind the CSV only needs the elements required to log in, for example uid and password.  If all of the user passwords are the same, you can include just the usernames in the CSV file and hard-code the password in a JMeter User Defined Variables element.
2. Run the following command against the LDIF to generate a single flat file of just usernames.  For your convenience you can also download the flat file Users.csv.tar.gz, which contains a single column of all 5,000 usernames.
  Comment:  This command extracts only the uid values.
  Command:  grep "uid:" import_users_list.ldif | awk '{print $2}' >> Users.csv
  Output:  Users.csv will contain only the list of all "uid" values.
3. Split the Users.csv file into 10 parts using the following commands.  I selected 10 because it evenly divides the 5,000 users into 500 per file.  Feel free to break the list up as you see fit, but I think this is a reasonable division of users.
  Comment:  This command reports the total number of lines in the file.
  Command:  wc -l Users.csv
  Output:  5000 Users.csv
  Comment:  This command splits the single large file into a group of files.
  Command:  split -d -l 500 Users.csv Users_
  Output:  10 separate files will be created: Users_00, Users_01, Users_02, etc.
  Comment:  This command adds an extension to the files that were created.
  Command:  find . -name "Users_*" -exec mv "{}" "{}".txt \;
  Output:  The same list of files created earlier, except with *.txt extensions.
4. For your convenience I have included all the split files in the same Users.tar.gz file.  If you have your own custom list of users, feel free to follow the same steps.  A consolidated script covering steps 2 and 3 follows this list.
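Putting steps 2 and 3 together, here is a minimal shell sketch, assuming import_users_list.ldif is in the current directory:

grep "uid:" import_users_list.ldif | awk '{print $2}' > Users.csv
wc -l Users.csv                                    # expect: 5000 Users.csv
split -d -l 500 Users.csv Users_                   # creates Users_00 .. Users_09
find . -name "Users_*" -exec mv "{}" "{}".txt \;   # add the .txt extension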

     

    Setup the Login Sampler

Download the advanced OAM11g_AdvLoadTest.jmx.zip file.  It is simpler to explain how the Login Sampler is set up by using an example; you can then modify it for your own use cases.

Refer to the following graphic; you can see there are 10 Login samplers in my example, but you are welcome to adjust that for your own needs.  Each Login sampler is made up of a Simple Controller, and underneath each there is one CSV Data Set Config (one of the CSV flat files we split earlier), two HTTP Requests, and one Regular Expression Extractor found under the Portal Request.  The only difference between the Simple Controllers Login 00, Login 01, Login 02, etc. is the user flat file used for the CSV Data Set Config.  For example, “Login 00” uses the CSV flat file Users_00.txt, the next sampler, Login 01, uses the filename Users_01.txt, and so forth.  Go ahead and open each one and compare; this is basically the only difference.

    jmeter_01_adv_csv_config

Now if you select the top element named “Login”, you will see that it is a Random Controller.  The Random Controller will randomly select one of the samplers at a time for each thread, like Login 00, or Login 03, or Login 06, etc.

    jmeter_02_login_rand_controller

     

For example, when the load test starts, it will go to “Login” and randomly pick one of the ten Login (nn) samplers and execute it, then go to the AuthZ sampler to request the Portal and submit the login, and finally finish with the Logout sampler to log out all the requests.  What is interesting is that each time the Login Random Controller (the parent Login element) picks a random Login 00 or Login 05, etc., it not only picks a single user to log in from the respective CSV flat file, but each time the same Login sampler is selected again, it keeps track of which user logged in and makes sure it picks the next user in the list.

     

    jmeter_03_rand_login

     

    Random Login Summary

As you can see, the test plan is similar to Part 1, but enhanced to make logins more random, which simulates a more realistic login load.  To summarize: for each thread JMeter opens to execute a login, the Login Random Controller runs one login from one of the Login (nn) samplers, and whenever the Random Controller picks the same sampler again on a new thread, JMeter makes sure the next user in that sampler’s CSV file is used.  This way each login is unique and we get a random selection of logins.  Brilliant!

     

    Load Testing LDAP servers using JMeter

Now moving on to using JMeter to load test an LDAP server.  In any Identity and Access Management deployment there is an LDAP server supporting all internal and external user data, among other things.  The LDAP server is a very critical part of the Identity and Access Management architecture.  Part 1 of this blog talked about load testing OAM, which is very important, and at the heart of OAM authentication is the backend LDAP server.  Typically many other applications use the LDAP identity data for other reasons, so load testing OAM alone is not going to fully load test the LDAP server.  It is therefore a very good idea to stress test the LDAP server directly, reflecting the load you expect it to handle and then some.  With this in mind, JMeter is here to help again: it provides free load-testing software to solve this dilemma.

     

    Defining a LDAP Load Test

A lot of people don’t realize that JMeter can send search, add, modify, and delete operations to an LDAP server such as OUD, OID, ODSEE, OVD, or any other LDAP server.  To keep things simple, I incorporated my LDAP load test sample in the same OAM11g_AdvLoadTest.jmx JMeter load test file.  This sample is in the “LDAP Thread Group” thread group, but feel free to copy the configuration into a stand-alone JMeter file.  You can easily do this by making a copy of the OAM11g_AdvLoadTest.jmx file, opening it up, deleting the OAM11g Thread Group, and modifying the remaining LDAP Thread Group.  Another approach is to disable the OAM11g Thread Group and make sure the LDAP Thread Group is enabled before using it.

     

    Elements of the LDAP Thread Group

So to get started, using the example included in the download, follow the next sections to understand how to implement each type of LDAP operation.  I think this keeps things simple, and you can then take the examples and build your own test plan.  So here we go…

    CSV Data Set Config

The first element, called CSV Data Set Config, sets up the CSV file that contains the user data values for all the respective attributes, such as uid, cn, givenname, sn, mail, and userpassword.  Note that attribute names are not in this CSV file, only the attribute values.  Also note that some values are duplicated depending on what you are doing.  For example, in my JMeter example there is a mail_new and a mail_orig.  Both are mail values, but mail_new holds a new email address while mail_orig holds the old or original value.  This is a simple trick to make the test plan add a new email address and then modify it using the mail_new value in the CSV file.  Be as creative as needed in making the test plan fit your needs; a sample row is shown after the screenshot.

     

    jmeter_04_CSV_Data_Set_Config
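For illustration, a single row in such a CSV file might look like this (the column order matches whatever Variable Names you configure in the element; all values here are made up):

tuser1,Test User1,Test,User1,tuser1.new@example.com,tuser1@example.com,Welcome1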

    LDAP Bind

The next few elements use the LDAP Extended Request, but each one is configured slightly differently depending on how it is used.  For this element we use the LDAP Bind option.  This provides a way to bind to LDAP and opens a connection to the LDAP server.  Simply modify the fields as needed for your environment.

     

    jmeter_05_LDAP_Bind

    LDAP Add

This element is used to add an entry to LDAP.  The key here is to include the proper object class.  For example, a user in OUD is required to have the object class inetOrgPerson.  The specific object classes can vary between types of directory servers: if you are using Active Directory, a user would use the person object class, and a group would require the group object class.  Just modify this per your requirements.

     

    jmeter_06_LDAP_Add

    LDAP Search

This element obviously searches for an entry.  The key to this element is to include only the sub-branch for the Search Base, not the entire namespace.  For example, in the LDAP Bind element there is a DN field, where in my example I put “dc=oracle,dc=com”.  So within this element you only use ou=People, and the translation becomes ou=People,dc=oracle,dc=com for the entire search DN.

     

    jmeter_07_LDAP_Search

    LDAP Modify New

This element will modify an entry.  You can include as many attributes as you want, though the key is to make sure to use the correct opCode.  The opCode determines the LDAP change type; the options are replace, add, and delete.  You can mix and match operations across the attributes being modified; the equivalent raw LDAP modify operation is sketched after the screenshot.

     

    jmeter_08_LDAP_Modify_to_New
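If it helps to see what those opCodes correspond to on the wire, they map to the standard LDIF modify change types; a hypothetical example mixing all three (the entry DN and values are made up):

dn: uid=tuser1,ou=People,dc=oracle,dc=com
changetype: modify
replace: mail
mail: tuser1.new@example.com
-
add: description
description: load test entry
-
delete: telephoneNumber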

    LDAP Modify Orig

This element is basically the same as the LDAP Modify New element, but instead it switches the mail attribute back to the old value.  This is just a simple trick to stress the LDAP server by modifying an entry and then putting the original value back.  In my example I replace the old mail attribute value with a new one, which ties back to my earlier comment on why the CSV flat file has both mail_new and mail_orig values.

     

    jmeter_09_LDAP_Modify_to_Orig

    LDAP Delete

This element is a little different from the LDAP Modify elements previously mentioned.  The previous elements use the opCode delete, but that delete operation deletes an attribute; this element deletes the entire entry.  As previously mentioned about the namespace, it is key to use only the sub-branch ou=People and not the entire namespace like ou=People,dc=oracle,dc=com.

     

    jmeter_10_LDAP_Delete

    LDAP Unbind

This element is important because it executes an LDAP unbind to close the connection to the LDAP server.  If this element were left out of the test plan, JMeter would leave hundreds or thousands of connections open and never close them, overwhelming the operating system and pushing the directory server to its limits.

     

    jmeter_11_LDAP_Unbind

    LDAP Load Test Summary

I really only wanted to go over the building blocks of the JMeter LDAP operation elements so that you have the tools to build your own load test plan.  There are certainly other tools to load test LDAP, but I feel JMeter provides a very easy way to accomplish this and is a fair one-size-fits-all tool for both HTTP and LDAP testing.  Not a bad tool to keep in your bag.

     

Summarizing Advanced Apache JMeter Stress Testing of OAM and LDAP

Hopefully combining a couple of important advanced features of JMeter adds more help to load testing projects that deploy Oracle’s IAM suite.  JMeter can certainly do many other things, though my goal was to cover the essentials of developing load test plans for projects that lack load test tools.  I like to say this is the 80/20 rule, where I hoped to cover 80% of what you need in regard to load testing from a tool perspective.  Happy load testing!

    Improve SSL Support for Your WebLogic Domains


    Introduction

    Every WebLogic Server installation comes with SSL support. But for some reason many installations get this interesting error message at startup:

    Ignoring the trusted CA certificate “CN=Entrust Root Certification Authority – G2,OU=(c) 2009 Entrust, Inc. – for authorized use only,OU=See www.entrust.net/legal-terms,O=Entrust, Inc.,C=US”. The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.

This looks odd, and many people ignore these error messages.  However, if your strategy is to show real error messages only, you quickly start looking for a solution.  The Internet is full of possible solutions: some recommend removing the certificates from the JDK trust store, some recommend using a different trust store.  But is this the best solution, and what are the side effects?

    Main Article

    Our way to the solution starts by understanding the error message. Here it is again.

    Ignoring the trusted CA certificate “CN=Entrust Root Certification Authority – G2,OU=(c) 2009 Entrust, Inc. – for authorized use only,OU=See www.entrust.net/legal-terms,O=Entrust, Inc.,C=US”. The loading of the trusted certificate list raised a certificate parsing exception PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11.

The first sentence is the result, while the second explains the reason.  Looking at the reason, we quickly find the “certificate parsing exception”.  But what does “PKIX: Unsupported OID in the AlgorithmIdentifier object: 1.2.840.113549.1.1.11” tell us?

    • PKIX stands for the Public Key Infrastructure (X.509). X.509 is the standard used to export, exchange, and import SSL certificates.
    • OID stands for the Object Identifier. Object Identifiers are globally unique and organized in a hierarchy. This hierarchy is maintained by the standards bodies in every country. Every standards body is responsible for a specific branch and can define and assign entries into the hierarchy.

With this background information we can look up the number 1.2.840.113549.1.1.11 in the OID Repository (see References for the link) and get this result: “iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-1(1) sha256WithRSAEncryption(11)”.

    Combining the certificate information in the first sentence and the information from the OID lookup we have the following result:

    The certificate from CN=Entrust Root Certification Authority – G2,OU=(c) 2009 Entrust, Inc. – for authorized use only,OU=See www.entrust.net/legal-terms,O=Entrust, Inc.,C=US uses SHA256WithRSAEncryption which is not supported by the JDK!

    You will probably see more messages for similar or different encryption algorithms used in other certificates.

    The Root Cause

The following factors cause this (and similar) error messages:

• By default, the Java Cryptography Extension (JCE) that comes with the JDK ships with only the limited-strength jurisdiction policy files.
• The default trust store of the JDK that holds this and other certificates can be found in JAVA_HOME/jre/lib/security/cacerts (you can inspect it with keytool, as shown after this list).
• WebLogic Server versions before 12c come with the Certicom JSSE implementation.  The Certicom implementation will not be updated, because the required JDK already comes with the standard SunJSSE implementation.
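As a quick check, you can list the contents of the default JDK trust store with keytool; the cacerts store password defaults to changeit unless it has been changed:

keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit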

    The Problem

The Certicom implementation works perfectly with many SSL certificates but does not support newer and stronger algorithms.  Removing certificates from the default trust store, or using a new trust store, works only if you do not need to install third-party certificates, for example from well-known Certificate Authorities.

    The Solution

To remove these error messages and support newer SSL certificates, we have to perform these steps:

• Upgrade the jurisdiction policy files with the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy files.  You can download the Unlimited Strength Jurisdiction files that fit your JDK version from the Oracle Technology Network (see References).  Follow the installation instructions that come with the distribution.
    • Enable SunJSSE Support in WebLogic Server
  • Login to the WebLogic console
  • Go to [Select your Server] -> SSL -> Advanced
  • Set “Enable JSSE” to true.
    • Restart your domain completely (including NodeManager)
      • If you start your domains with a WLST script:

CONFIG_JVM_ARGS='-Dweblogic.ssl.JSSEEnabled=true -Dweblogic.security.SSL.enableJSSE=true'

      • If you start your domains with the scripts startWebLogic.sh, startManagedServer.sh, or startNodeManager.sh:

JAVA_OPTIONS='-Dweblogic.ssl.JSSEEnabled=true -Dweblogic.security.SSL.enableJSSE=true'

    Your Java and WebLogic environment is now ready to support newer SSL certificates!

    Enjoy!

    References

    OIM Clustering: Keeping separate environments separate


Oracle Identity Manager 11g incorporates several clustering technologies in order to ensure high availability across its different components. Several of these technologies use multicast to discover other cluster nodes on the same subnet. For testing and development purposes, it is common to have multiple distinct OIM environments co-existing on the same subnet. In that scenario, it is essential that the distinct environments utilise separate multicast addresses, so that they do not talk to each other – if they do, they will confuse one another, and many things can go wrong. This problem is less common with production environments, since best practice dictates that the production environment should be on a separate subnet from development and test, and multicast traffic cannot traverse subnet boundaries without special configuration.
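As an aside, if you suspect two environments are sharing a multicast group, WebLogic ships a small utility you can use to watch multicast traffic on a given address; a sketch, assuming weblogic.jar is on the classpath and that 239.192.0.10 port 7001 is the address/port pair under test:

java -cp $WL_HOME/server/lib/weblogic.jar utils.MulticastTest -n node1 -a 239.192.0.10 -p 7001

Run it on two hosts with different -n names; if each sees the other’s messages, they are in the same multicast group.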

    Overview of OIM Clustering

    Here’s a rough diagram of the clustering components inside OIM:

[Diagram: OIM clustering layers, top to bottom – Quartz Scheduler Cluster; Data Caching Cluster (EclipseLink, 11.1.2.0.x and earlier only, and OSCache); Application Server Cluster (WebLogic or WebSphere)]

    There are three basic layers of clustering in OIM:

    • Application Server Clustering: This is the clustering layer of the underlying Java EE Application Server (Oracle WebLogic or IBM WebSphere). This is responsible for replication of the JNDI tree, EJBs, HTTP sessions, etc.
    • Data Caching: This provides in-memory caching of data to improve performance, while ensuring that database updates made on one node are propagated promptly to the others. OIM uses OSCache (OpenSymphony Cache) as the underlying technology for this.
• Scheduler Clustering: This is used to ensure that in a cluster each execution of a scheduled job only runs on one node. Otherwise, if a job is scheduled to start at 9am, every node in the cluster might try to start it at the same time, resulting in multiple simultaneous executions of that job.

    Clustering layers present in older versions only

    • In OIM 11gR1, and 11gR2 base release, OIM used EclipseLink data caching, which included its own multicast clustering layer. From OIM 11.1.2.1.0 onwards, while EclipseLink is still being used for data access, its caching features are no longer used, so this form of multicast clustering is no longer present.
• As well as using JGroups for OSCache, OIM 9.x also used JGroups for a couple of additional functions (forcibly stopping scheduled tasks and the Diagnostic Dashboard JMS test). In OIM 11g, JGroups is used for OSCache only.

    Underlying technologies used

    Different clustering components in OIM use different technologies:

Component: Application Server Cluster
Technology: Unicast or Multicast
Details: Consult the Application Server documentation.

Component: EclipseLink (OIM 11.1.2.0.x and earlier only)
Technology: Multicast for node discovery; T3 JNDI for node-to-node communication (WebLogic); RMI for node-to-node communication (WebSphere)
Details: Multicast is only used to find other nodes in the cluster. With WLS, JNDI connections are opened between the nodes for the cache coordination traffic. On WebSphere, RMI is used instead.

Component: OSCache
Technology: Multicast using the JGroups package

Component: Quartz Scheduler
Technology: Database tables
Details: Unlike the other clustering components, Quartz does not use direct network communication between the nodes. Database tables are used for inter-cluster communication.

    Relevant Configuration Settings

    I’m only going to talk about the OIM-specific clustering settings here. So I won’t go into the configuration of the WebLogic/WebSphere clustering layer, only the data cache and scheduler clustering layers. All configuration relevant to these can be found in the /db/oim-config.xml file in MDS. So let’s discuss the settings in this file which are relevant to clustering.

Setting: <cacheConfig clustered="...">
Explanation: Must be set to true in a clustered install, and false for a single-instance install. This controls whether OSCache operates in a clustered mode.

Setting: <cacheConfig>/<xLCacheProviderProps multicastAddress="...">
Explanation: Multicast address used for OSCache. (Also used by EclipseLink in versions 11.1.2.0.x and earlier; the same address is used for both.) Make sure this address is unique for each distinct OIM environment on the same subnet.

Setting: <xLCacheProviderProps>/<properties>
Explanation: Can be used to manually override the JGroups configuration used by OSCache. Not recommended.

Setting: <schedulerConfig clustered="...">
Explanation: Must be set to true in a clustered install, and false for a single-instance install.

Setting: <schedulerConfig multicastAddress="...">
Explanation: In OIM 9.x, JGroups was used to forcibly stop jobs. In OIM 11g, a different mechanism is used instead; this configuration setting is a left-over from OIM 9.x and is now ignored. However, to avoid confusion, it is recommended to set this to the same multicastAddress as the xLCacheProviderProps above.

Setting: <deploymentConfig>/<deploymentMode>
Explanation: In a clustered install, should be set to clustered; in a single instance, should be set to simple. This is used to control whether EclipseLink operates in a clustered mode.

Setting: <SOAConfig>/<username>
Explanation: As its name implies, this is the username used by OIM to log in to SOA. However, in OIM 11.1.2.0.0 and earlier it also serves an additional purpose – on WebLogic, this username is used by EclipseLink clustering for inter-node communication. By default this is weblogic; if you have renamed the weblogic user, you must change it. You are free to use another user if you wish, so long as they are a member of the Administrators group. (On WebSphere, this user is used for OIM-SOA integration only, not for EclipseLink clustering.) To change this, see “2.6 Optional: Updating the WebLogic Administrator Server User Name in Oracle Enterprise Manager Fusion Middleware Control (OIM Only)”. (If step 11 in those steps gives you a permissions error, just skip that step.)

Setting: <SOAConfig>/<passwordKey>
Explanation: This is the name of the CSF credential which stores the password for the <SOAConfig> user. You should never change this setting in oim-config.xml from its default of SOAAdminPassword, but you will need to change the corresponding CSF entry whenever you change that user’s password.
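To make these settings concrete, here is a hypothetical fragment of /db/oim-config.xml for a clustered install (nesting simplified and values invented for illustration; the exact structure in your file may differ):

<cacheConfig clustered="true">
  <xLCacheProviderProps multicastAddress="231.10.10.11">
    ...
  </xLCacheProviderProps>
</cacheConfig>
<schedulerConfig clustered="true" multicastAddress="231.10.10.11"/>
<deploymentConfig>
  <deploymentMode>clustered</deploymentMode>
</deploymentConfig>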

    What can go wrong

    As I’ve mentioned, it is important that you have the correct clustering configuration for your environment. If you do not, many things can go wrong. I don’t propose to provide an exhaustive list of potential problems in this blog post, but just give one example I recently encountered at a customer site.

This customer was preparing to go live with Oracle Identity Manager 11.1.2.0. As part of their pre-production activities, they needed to document and test the procedure for periodic change of the weblogic password. They began their testing by changing the weblogic password in one of their development environments. After restarting the OIM managed server, they saw this message multiple times in their WebLogic log: <Authentication of user weblogic failed because of invalid password>. They also found that the WEBLOGIC user in OIM was locked.

    What went wrong here? Well, several things were wrong in this environment:

    • They had <SOAConfig>/<username> set to weblogic, but they had not updated the SOAAdminPassword credential in CSF to the new weblogic password. This customer does not currently use any of the OIM functionality which requires SOA, so they normally leave their SOA server down, including for this test. You would think therefore that the <SOAConfig> would not be relevant to them; but, as I have pointed out above, it is also used for EclipseLink clustering.
    • Even though their development environments were single instance installs, they all had <deploymentConfig>/<deploymentMode> set to cluster instead of simple. As a result, EclipseLink clustering was active even though it did not need to be.
    • <cacheConfig>/<xLCacheProviderProps multicastAddress=””> was set to the same address in multiple development environments on the same subnet. As a result, even though these environments were meant to be totally separate, they were formed into a single EclipseLink cluster.

So what would happen was that this environment (let’s call it DEV1) at startup would initialise EclipseLink clustering (since <deploymentConfig>/<deploymentMode> was set to cluster). It would then add itself to the multicast group configured in <cacheConfig>/<xLCacheProviderProps multicastAddress="">. At this point, DEV1 became visible to the other development environments (say DEV2 and DEV3). DEV2 would try to log in to DEV1 over T3, using the <SOAConfig>/<username> user (weblogic) and the SOAAdminPassword password from CSF. The weblogic password having changed, both DEV2 and DEV3 would receive an invalid credential error, and DEV1 would experience <Authentication of user weblogic failed because of invalid password>. Setting <deploymentConfig>/<deploymentMode> to simple resolved this.

    How Oracle Identity Manager Uses MDS


Oracle Metadata Services (MDS) is an XML configuration store used by Oracle Identity Manager (OIM), as well as several other Oracle Middleware products. OIM first adopted MDS with the release of 11gR1. Prior to MDS, many Oracle Middleware products used files on the filesystem as configuration stores, in various formats (XML, Java properties files, etc.). One of the purposes of MDS is to create a standard configuration store across the Middleware stack. Not all configuration in OIM lives inside MDS, however: some of it is stored in the OIM database schema tables.

    One problem with the old approach, of storing configuration files on the filesystem (e.g. xlconfig.xml in OIM 9.x), is that in a clustered environment there is a risk of inconsistencies in configuration between the nodes, which may have deleterious effects. In OIM 11g, by storing the configuration in the MDS database schema, we eliminate this possibility.

    The strength of MDS is in storing XML format configuration files. While it can be used to store binary format files also, OIM does not use it for that purpose. For the connector JARs and plugin JARs in OIM, rather than storing them in MDS, we instead store them as BLOBs inside database tables in the OIM database schema. However, once again, the possibility of cluster inconsistencies which existed in 9.x (where these JARs lived on the filesystem) is eliminated.

    MDS is structured like a filesystem – it contains folders/directories (called “packages”) and documents (either XML format or binary).

One of the unique features of MDS is that it supports XML querying: it can quickly search the MDS repository for all documents containing a given XML tag. This allows configuration to be spread out across multiple packages (reflecting, e.g., the module structure of the application). However, while this is very useful, it can become a trap for the unwary – some customers have accidentally uploaded backup files (e.g. EventHandlers.xml.bak) into their MDS repository, thinking that they will be ignored due to the .bak extension. However, OIM uses an MDS XML query to locate event handler definitions, which ignores the file names, hence the backup file gets loaded.
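If stray backup files have crept in, the standard WLST MDS commands can remove them; a hedged sketch, assuming the OIM MDS application is registered under the name OIMMetadata on a server named oim_server1 (both names vary by install):

> connect('weblogic','*****','t3://<admin-server-name>:<admin-port>')
> deleteMetadata(application='OIMMetadata', server='oim_server1', docs='/db/EventHandlers.xml.bak')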

When used with the database schema backend, MDS stores the change history. This feature has existed since the very beginning of MDS uptake in OIM, with the OIM 11gR1 base release. However, until 11gR2 we did not use this functionality in OIM, nor did our tooling or documentation encourage customers to use it (they still could if they were aware of it). With 11gR2, as we will discuss shortly, that has changed.

We split up the MDS repositories on a per-application basis – in 11gR1, this means that OIM, SOA and OWSM each get their own MDS repository. Keeping the configuration of each application separate eases administration, and avoids any potential issues of interference between them. Rather than requiring three separate MDS schemas, MDS supports partitioning a single schema into several partitions. So in OIM 11gR1, we use 3 MDS partitions in our MDS schema. (In 11gR2, we add an additional partition to store the configuration of the OIM ADF-based UI, as opposed to the configuration of the core OIM server.)

    OIM 11gR2 brings a number of new features to OIM’s use of MDS. These are not actually new features as far as MDS is concerned – they existed in MDS in OIM 11gR1, and some other Oracle Middleware products were already using them. But OIM 11gR1 did nothing to leverage or exploit these MDS features, while OIM 11gR2 does so.

    MDS contains two key features related to version control. The first of these is the notion of labels. A label enables us to assign a memorable name to a version number. For example, suppose I am deploying a new version of some customization that I have developed for OIM, let’s say version 3. I might create a label called “v3-PreDeployment” to mark the state of the configuration immediately prior to the deployment, and another label called “v3-PostDeployment” to mark the state of the system immediately afterwards. If you are familiar with version control systems commonly used for source code – such as CVS, Subversion or Git – an MDS label is essentially the same thing as a tag.

    The second feature is the notion of sandboxes. This is equivalent to working copies in version control systems, although it also has some similarities to the notions of changesets or branches. You can create a sandbox to isolate work in progress from the live production configuration. The sandbox is only live for administrative/developer users; once you are satisfied with the sandbox, you can make it live by merging it in to the main configuration. MDS automatically creates pre and post merge labels for you.

    The other major area in which OIM 11gR2 takes up more MDS features is in the area of customization. The customization feature in MDS is designed to allow a clear separation between data which is shipped out of the box and data which is customized at a customer site. This helps avoid the common experience where applying a patch or upgrade to out-of-the-box data results in loss of customizations that then need to be reapplied.

    The customization feature is particularly used with ADF. We can store a page definition in MDS for some page in OIM, e.g. “Create User”. You can customize the page, but rather than directly modifying the out-of-the-box page definition, you create a customization XML which contains instructions on how to modify the out-of-the-box XML. (Rather than manually creating this customization XML, it is created for you when you use the ADF Composer component.) When ADF asks MDS for the page definition, MDS merges the customization XML with the out-of-the-box page definition. If in a patch or upgrade we change the page definition XML, we replace the base file, but your customization XML is untouched. This greatly reduces the risk of losing customizations during patch/upgrade.

    How To Display A Custom Error Page When the Access Server Is Down?


I have been asked several times over the years if there is a way to customize the following error message that a user is presented with in their Internet browser when the WebGate fails to contact any of the Access Servers.

    Oracle Access Manager Operation Error

    The WebGate plug-in is unable to contact any Access Servers.

    Contact your website administrator to remedy this problem.

Though this error is without a doubt accurate, many clients would rather display something a little more friendly, or have other reasons to change it.  Interestingly, this error message has been the same going back to the early days of OAM when it was still Oblix.  Incidentally, there is a great My Oracle Support document 555137.1 that provides steps on how to customize the error message, but it refers to OAM 10g.  So this begs the question: will this work with the newer OAM 11g, and more specifically 11g WebGates?  I am here to say, “Yes it does”; I have tested this, and this article covers this option and a bit more.

Let’s Get Some Questions Out of the Way

I have been asked if there are alternate ways of customizing the error message.  For example, if using Apache or OHS, why not put a customized message in the ErrorDocument 500 directive inside the httpd.conf file (http://httpd.apache.org/docs/2.2/mod/core.html – error document)?  Good question, because the WebGate does use a parameter HTMLpage500, so you would think it could be as simple as modifying the 500 error.  However, I tried, and this does not work, and for good reason.  The WebGate is the main gatekeeper, so if it cannot communicate with any Access Servers to make security decisions, it will basically shut down and reject all traffic to make sure nothing is compromised, hence it immediately lets people know the OAM Access Servers are down…do something!!

Then what about telling a load balancer in front of the web servers that host the WebGates to redirect to an alternate place?  One problem is the load balancer is not intelligent enough to know if the WebGate is down.  Another is, even if you could redirect, how is security taken care of?  It is not as simple as it sounds.

At this point the best option is to leverage the article I mentioned; I feel it is the best option right now, and it is not that complicated to implement.  So let’s move on and find out how to do it.

     

    How to Create Your Custom Error Page

As I mentioned earlier, there is an article, OAM 10g: How To Display A Custom Error Page When the Access Server Is Down? / WebGate Plug-in is Unable to Contact any Access Servers (Doc ID 555137.1), which I spent a lot of time validating for 11g WebGates, and as I said, it works.  Though I don’t provide any additional ways to generate a custom error message beyond what the article shows, I do go a little beyond it and explain an easy way to make the custom message.  So let’s get into it now.

    1. Update the WebGate.xml file message tag ErrEngineDown
The key to this customization is updating a file called WebGate.xml, located in the WebGate install path under <WebGate install path>/wg1_home/webgate/ohs/lang/en-us. I used “en-us” for English, but if your environment requires a different language, look under “/lang/” and you will see other languages, each with its own WebGate.xml file. So make sure the correct WebGate.xml file is being updated. Use an editor to open the WebGate.xml file and search for ErrEngineDown. Now replace the default ErrEngineDown message “The WebGate plug-in is unable to contact any Access Servers.” with the custom message you want. This text will be output to the error page when the WebGate cannot contact any of the Access Servers for some reason.  Be sure to make a backup of the original WebGate.xml file in case you need to revert.
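If you would rather script the edit, here is a minimal sketch using GNU sed, assuming the default message text appears verbatim in the file (the replacement wording is just an example; back up first):

cp WebGate.xml WebGate.xml.bak
sed -i 's/The WebGate plug-in is unable to contact any Access Servers./Our login service is temporarily unavailable. Please try again later./' WebGate.xml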

    2. Make your HTML Custom Error Page
Next, there is another message tag inside the WebGate.xml file called HTMLpage500.  This tag can hold a real HTML page that will be displayed to the end user, so it is a lot more flexible.  The best thing is to create the basic HTML you want the custom page to display, and if possible avoid images. If you require images, the references to them will need to point to some other external web server, because when the OAM servers are down the WebGate will basically not even let the web server work, since it is trying to stop anyone from getting in; this is a security feature by design. So go ahead and create the HTML page you want to display to the end user.  Note that toward the end of this article there is a neat little trick you can do.

    3. Convert the HTML to XML
In the previous step you created a custom HTML page, but before this custom message can be inserted into WebGate.xml it needs to be converted (escaped) to XML.  In other words, each less-than sign < needs to be converted to &lt; and each greater-than sign > to &gt;.  This could tediously be done manually, but a much easier way is to take the HTML, copy and paste it into the online HTML to XML converter at https://sites.google.com/site/infivivek/resourse-centre/online-resources/html-to-xml-converter, click the Convert button, and voila, you have XML. Once you have the XML version of the HTML, copy it and go to the next step.

    4. Update the WebGate.xml file message tag HTMLpage500 with the new XML.
The custom HTML page created earlier and converted to XML can now be pasted into the WebGate.xml file. Edit the WebGate.xml file and search for HTMLpage500. Just like the ErrEngineDown message value, replace the default message with the XML copied from the previous step. This XML will be presented by the WebGate as an HTML page when the Access Servers are unavailable.  Be sure to save the WebGate.xml file.

    5. Restart the Web Server
For the WebGate.xml file changes to take effect, restart the web server.

    6. Repeat
Since each web server will have its own WebGate install, you will need to repeat the same process and update each WebGate.xml.  Alternatively, an easier option is to simply copy the WebGate.xml file to the other WebGate install locations; there is no path reference inside the file, so copying it is a safe and easy option.  If you are using a UNIX operating system, your WebGates may be installed on a common mount; if that is the case, you may not need to copy or update the WebGate.xml file across several web servers.

    7. Test the Custom Message by shutting down the OAM Access Servers
Finally, we need to make sure it works.  To test the custom message, log in to a WebGate-protected application or page. Then shut down all the OAM Access Servers so that the WebGate will act on this error.  Once all the Access Servers are down, refresh the browser and you should see the new custom message.  If not, review the WebGate oblog.log file to see if there are any errors that can help troubleshoot, or make sure the WebGate.xml file is correct.

     

    Another HTMLpage500 option

The MOS article 555137.1 has a good option that adds a little META http-equiv="REFRESH" redirect, which sends the browser to an alternate web page in zero seconds.  This trick works in all web browsers.  However, it requires an alternate web server that is up and running, and that web server cannot have a WebGate, because if it did, it would also stop serving HTML content.  It is a clever option and certainly something worth thinking about.

    Optional Actual HTML

    <HTML>
    <HEAD>
    <TITLE>OAM Engine Down</TITLE>
<meta http-equiv="REFRESH" content="0;url=http://www.oracle.com/index.html">
    </HEAD>
    <BODY>
    </BODY>
    </HTML>

    Converted HTML to XML

    &lt;HTML&gt;
    &lt;HEAD&gt;
    &lt;TITLE&gt;OAM Engine Down&lt;/TITLE&gt;
    &lt;meta http-equiv=&quot;REFRESH&quot; content=&quot;0;url=http://www.oracle.com/index.html&quot;&gt;
    &lt;/HEAD&gt;
    &lt;BODY&gt;
    &lt;/BODY&gt;
    &lt;/HTML&gt;

In summary, this solution provides a working option to create a custom error page for when the OAM Access Servers are down.  Feel free to play around with different messages, and most of all, test them to make sure they work.  Enjoy!

    Identity Propagation from OAG to REST APIs protected by OWSM


    Introduction

    This post describes the necessary configuration for propagating an end user identity from OAG (Oracle API Gateway) to REST APIs protected by OWSM (Oracle Web Services Manager).

    The requirements are:

    1) Have a Java Subject established in the REST API implementation.

    2) Prevent direct access to the REST API, i.e., only OAG should be able to successfully invoke it.

    A recurrent question is how OWSM protects REST APIs and which types of tokens it supports when doing so.

If we look at the current OWSM (11.1.1.7) predefined policies, we notice a policy named oracle/multi_token_rest_service_policy, described (verbatim) as:

    “This policy enforces one of the following authentication policies, based on the token sent by the client:

    HTTP Basic—Extracts username and password credentials from the HTTP header.

    SAML 2.0 Bearer token in the HTTP header—Extracts SAML 2.0 Bearer assertion in the HTTP header.

    HTTP OAM security—Verifies that the OAM agent has authenticated user and establishes identity.

    SPNEGO over HTTP security—Extracts Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO) Kerberos token from the HTTP header.”

In this specific use case, we are assuming the end user has already been authenticated by some other means before reaching OAG. In other words, we are assuming OAG gets some sort of token and validates the user locally, thus populating its authentication.subject.id attribute. The token OAG receives can be an OAM token, a Kerberos token, a SAML token, you name it. It is a matter of a design decision based on OAG’s client capabilities.

In a use case like this, it’s very unlikely that OAG will have the end user’s password, which eliminates the HTTP Basic header option. The remaining three are all good candidates. In this post we deal with a SAML 2.0 Bearer token in the HTTP header. Our flow ends up being something like this: OAG Client -> “some token” -> OAG -> SAML 2.0 Bearer -> OWSM -> REST API.

    We’re going to examine all necessary configuration in OAG, OWSM and in the REST API application. Buckle up, folks! And let’s do it backwards.

    Main Article

    REST API Web Application

    Here’s my REST API Implementation in all its beauty:

    package ateam.rest.impl;
    
    import java.security.Principal;
    import javax.security.auth.Subject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import weblogic.security.Security;
    
    @Path("/carmodels")
    public class CarModels {
        public CarModels() {
            super();
        }
    
        @GET
        @Produces("application/json")
        public String getModels() {
    
            Subject s = Security.getCurrentSubject();
            System.out.println("[CarModels] Principals established for the propagated user id:");
            for (Principal p : s.getPrincipals()) {
                System.out.println(p.getName());
            }
    
            String json = "{\"models\":[\"Nice Car\",\"Fast Car\",\"Lightweight Car\",\"Sports Car\",\"Lovely Car\",\"Family Car\"]}";
            return json;
        }
    }

    It prints out the user principals and gives back a list of cars. Simple as that.
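Before wiring in OWSM, a quick smoke test of the raw service can be done with curl; a hypothetical call, assuming the application is deployed under the context root /carservice on apphost:8001 (the resources and carmodels path segments come from the @ApplicationPath and @Path annotations in the code):

curl -i http://apphost:8001/carservice/resources/carmodels

Once the OWSM filter and policy described next are in place, the same anonymous call should be rejected, which is exactly requirement #2.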

    There’s a need for a servlet filter (plus a filter-mapping) to intercept requests to this API. Such a filter is provided by OWSM and works hand in hand with the policy we’ve briefly talked about previously.

    <filter>
        <filter-name>OWSM Security Filter</filter-name>
        <filter-class>oracle.wsm.agent.handler.servlet.SecurityFilter</filter-class>
        <init-param>
          <param-name>servlet-name</param-name>
          <param-value>ateam.rest.impl.Services</param-value>
        </init-param>
    </filter>
    
    <filter-mapping>
        <filter-name>OWSM Security Filter</filter-name>
        <servlet-name>ateam.rest.impl.Services</servlet-name>
    </filter-mapping>

See that the filter mentions a specific servlet in <init-param>. This servlet simply exposes the REST API implementation to be protected.

    package ateam.rest.impl;
    
    import javax.ws.rs.core.Application;
    import javax.ws.rs.ApplicationPath;
    import java.util.Set;
    import java.util.HashSet;
    
    @ApplicationPath("resources")
    public class Services extends Application {
        public Set<java.lang.Class<?>> getClasses() {
            Set<java.lang.Class<?>> s = new HashSet<Class<?>>();
            s.add(CarModels.class);
            return s;
        }
    }

    The servlet definition completes the necessary configuration in web.xml. Notice the servlet-class is actually Jersey’s ServletContainer.

    <servlet>
        <servlet-name>ateam.rest.impl.Services</servlet-name>
        <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>javax.ws.rs.Application</param-name>
            <param-value>ateam.rest.impl.Services</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

    OWSM

We’re going to attach the oracle/multi_token_rest_service_policy policy to all REST endpoints in the domain. But only the implementations with the setup shown previously are going to have requests intercepted.

The way to attach the policy is via WLST, as shown:

    > connect('weblogic','*****','t3://<admin-server-name>:<admin-port>') 
    > beginRepositorySession()
    > createPolicySet('owsm-policy-set-multi-token','rest-resource','Domain("<domain-name>")')
    > attachPolicySetPolicy('oracle/multi_token_rest_service_policy')
    > commitRepositorySession()

    This is it. Notice that createPolicySet mentions ‘rest-resource’ as the resource type. This is key here.

    Before asserting the user identity in the incoming token and thus establishing the Java subject, ‘oracle/multi_token_rest_service_policy’ requires the following characteristics from the received token:

    • It has to be Base64 encoded.
    • It has to be gzipped.
    • It has to be digitally signed.

Items #1 and #2 require no configuration in OWSM, but for #3 we need to import OAG’s certificate into OWSM’s keystore so that the token signature can be properly validated. Export OAG’s certificate into a file using OAG Policy Studio and then import it into OWSM’s default-keystore.jks using JDK’s keytool.

    > keytool -import -file ~/oag_cert.cer -keystore ./config/fmwconfig/default-keystore.jks -storepass <keystore-password> -alias oag_cert -keypass welcome1
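Optionally, verify the import; the alias and certificate fingerprint should show up in the listing:

> keytool -list -v -keystore ./config/fmwconfig/default-keystore.jks -storepass <keystore-password> -alias oag_cert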

    OAG

    The filter circuit in OAG has to create a SAML 2.0 Bearer assertion, sign it, gzip it, Base64 encode it and then add it to the Authorization HTTP header. Here’s the filter circuit.

     OAG_OWSM_Policy

    I now highlight the most relevant aspects of each filter:

     

1) Create SOAP Envelope: this is just to please the “Create SAML Authentication Assertion” filter, which expects an XML message. Here I use a SOAP envelope, but any simple XML document would work.

     Create_SOAP_Envelope

     

2) Set Authentication Subject id as DN: the point here is that the OWSM policy honors the Subject NameIdentifier format in the SAML assertion. Therefore, if the format is X509SubjectName, we need to make sure to set the subject value to the user’s Distinguished Name (DN). If the format is unspecified, sticking with the username is enough.

     Set_Subject_ID_as_DN

    Tip: You can set the format by setting the attribute authentication.subject.format. For example:

     Set_Subject_Format

    3) Create SAML Authentication Assertion: the following screenshots describe the filter.

     Create_SAML_Authentication_Assertion_Details

     

    Create_SAML_Authentication_Assertion_Location

     

    Create_SAML_Authentication_Assertion_ConfirmationMethod

     

    Create_SAML_Authentication_Assertion_Advanced

     

    4) Update Message: this step is necessary just to copy the saml.assertion attribute value created in the previous step to content.body, as expected by the next filter in the chain.

     Update_Message

    5) Sign SAML Assertion:

    Sign_SAML_Assertion_SigningKey

    Notice the Signing Key certificate. That’s the one to be exported and then imported into OWSM’s key store.

    Sign_SAML_Assertion_WhatToSign

    Sign_SAML_Assertion_WhereToPlace

    Sign_SAML_Assertion_Advanced_Additional

    Sign_SAML_Assertion_Advanced_Options

    Notice “Create enveloped signature” is checked. It is required by the OWSM policy.

     

    6) Retrieve SAML Assertion from Message:

    Retrieve_SAML_Assertion

7) Gzip SAML Assertion (script): OAG has no filter to gzip messages, so we rely on a script to do it. Notice the script also Base64-encodes the message after gzipping it, and outputs an attribute named data.base64 containing the assertion gzipped and encoded, ready to be sent.

    importPackage(Packages.java.util.zip);
    importPackage(Packages.java.io);
    importPackage(Packages.javax.xml.transform);
    importPackage(Packages.javax.xml.transform.dom);
    importPackage(Packages.javax.xml.transform.stream);
    importPackage(Packages.java.lang);
    importPackage(Packages.oracle.security.xmlsec.util);
    importPackage(Packages.com.vordel.trace);
    
function invoke(msg) {

   // Grab the DOM nodes of the signed SAML assertion from the message
   var data = msg.get("saml.assertion");

   // Serialize the assertion's document to a byte array
   var source = new DOMSource(data.get(0).getOwnerDocument());
   var baos = new ByteArrayOutputStream();
   var result = new StreamResult(baos);
   var factory = TransformerFactory.newInstance();
   var transformer = factory.newTransformer();
   transformer.transform(source, result);
   var tokba = baos.toByteArray();

   // Gzip the serialized bytes
   baos = new ByteArrayOutputStream();
   var gzos = new GZIPOutputStream(baos);
   gzos.write(tokba);
   gzos.flush();
   gzos.finish();
   var gzdata = baos.toByteArray();

   // Base64-encode the gzipped bytes; no line breaks, so the
   // value fits in a single HTTP header
   var b64 = new Base64();
   b64.setUseLineBreaks(false);
   var b64tok = b64.encode(gzdata);

   // Expose the result to subsequent filters as data.base64
   msg.put("data.base64", b64tok);
   return true;
}

    8) Add SAML Assertion to HTTP Header: the Authorization header mechanism must be set to “oit”, as shown:

     Add_SAML_Assertion_to_HTTP_Header

    9) Connect to Car Models Service:

     Connect_to_CarModels_Service

    At the end, this is what a good assertion would look like:

    <?xml version="1.0"?>
    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" ID="Id-cffa4f53f9490000090000004f131aad-1" IssueInstant="2014-04-17T16:01:19Z" Version="2.0">
      <saml:Issuer Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">www.oracle.com</saml:Issuer>
        <dsig:Signature xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" Id="Id-0001397750479781-ffffffffd55f69c1-1">
          <dsig:SignedInfo>
            <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
            <dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
            <dsig:Reference URI="#Id-cffa4f53f9490000090000004f131aad-1">
              <dsig:Transforms>
                <dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
                <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
              </dsig:Transforms>
              <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
              <dsig:DigestValue>87KiwbLN11S3qwJw23Zm0Odh9QQ=</dsig:DigestValue>
            </dsig:Reference>
          </dsig:SignedInfo>
          <dsig:SignatureValue>UO6S7++uxuqqLPl4cege7vmZpQ1q6MXL51s/e/fDd74aZdrEOx+G1tqA4YQtVQIh
    fTuOcd1CtOyEUqOLNy9F4e87Ld/cqNcr8iWGlokPEPP153r19MIaWSYDslYq10xe
    cArsGeayx0PpWjXo0VSH+u26grsTWIY+YATuU7BcKnqrrWFjmRxHAK/towXtuiPL
    NtNYVgI6dPXVzJ+2lGSiZKBDBFoV9zUFE98kU0f050e3mq2x2BwvQ7MQUkPYyadt
    b+Ifn0Hcr77Fp7FYfM0gPAMt3X0Dm5qsrEo5WS47RkWDq6EEdQx9HFEQJMLdwABL
    xC8gNTETalZs73xUUQu2CA==</dsig:SignatureValue>
          <dsig:KeyInfo xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" Id="Id-0001397750479781-ffffffffd55f69c1-2">
            <dsig:X509Data>
              <dsig:X509Certificate>
    MIICtjCCAZ4CBgE9RZO/rjANBgkqhkiG9w0BAQUFADAaMRgwFgYDVQQDEw9TYW1wbGVzIFRlc3Qg
    Q0EwHhcNMTMwMzA3MTU1ODAwWhcNMzcwMTEwMTA1NjAwWjAjMSEwHwYDVQQDExhTYW1wbGVzIFRl
    c3QgQ2VydGlmaWNhdGUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQClSoXx8QPLrHMS
    Ff/5m3uLrDhxHycPYkamDCouu89mSKhD7aEZy3QS0mvZHvY2N1TmuQcdTuOgSE5qyT20mBEUVBnU
    1y4WLQqM5fKu0TmIAajtYWTOdTnSuwR3f9W4poSwRMDNkUb8gPiXZNHZiyzriRMus29ER61eYAdr
    XFlv5emXqi2ZK2bpBdtO6Q641TM9kUWB4ZyMqkGtRys9m2hNaXVR8e7r2WUrA9LEx3bRpku/OodI
    GS6Qy0C2vueHDrdLYhYGKfNIllagEXY+dBQI8t2qH7rXBmr16lYyKK8VYJqeud9/NCAxD78vzOLY
    0q6WaisVCa6FE/KpgpNF8sbZAgMBAAEwDQYJKoZIhvcNAQEFBQADggEBAH3W3yCTSORmIq5uhNTC
    Zvd6wPz+zNXT19GqKgeDgjYMJj/Bl8czRO9YZo5MslwHILLgDVdz+ux4nSS3mCZ+PQGO42p/6o6n
    IQ31yGzfYjTZ/1+1//CWyVtEhuSv5oFE+Le5mvkf1kNUrW4//qOXtfwXy/Hq09E9eaXlnBxUTHls
    cQkpfQW5bi5Go7FDUNpW5EXUgXrQ96qKWMMK7i1hm7r5o6TldxCq5ANlPo/sObFNooQDkBWSKJ5t
    GTtPiXO8kqYWdNBvnSRDk1Qqsn6fdFz485WB0e0pqWg2SuZa1026gIqtQPekJDQzTm0qvAnh/Aoh
    oKs1dNQxruBf+MFLisw=
            </dsig:X509Certificate>
          </dsig:X509Data>
        </dsig:KeyInfo>
      </dsig:Signature>
      <saml:Subject>
        <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">cn=jane,cn=Users,dc=us,dc=oracle,dc=com</saml:NameID>
        <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"/>
      </saml:Subject>
      <saml:Conditions NotBefore="2014-04-17T16:01:18Z" NotOnOrAfter="2014-04-17T16:06:18Z"/>
      <saml:AuthnStatement AuthnInstant="2014-04-17T16:01:19Z">
        <saml:AuthnContext>
          <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef>
        </saml:AuthnContext>
      </saml:AuthnStatement>
    </saml:Assertion>

    Wrapping up…

With this configuration in place, at runtime the REST API implementation writes the following to the server’s log for a user authenticated as jane:

    [CarModels] Principals established for the propagated user id:
    jane

    And any SAML assertion not signed by OAG is going to be promptly rejected by OWSM.

    See you next time!


    OAG/OES Integration for Web API Security: skin and guts


    Introduction

    When it comes to defining a strategy for web API security, OAG (Oracle API Gateway) and OES (Oracle Entitlements Server) together present a very interesting choice and are a very powerful combination indeed.

    In this post we’re going to take a look at what each component brings in (the skin) and then get our hands on actually describing the integration in detail (the guts).

OAG is designed to inspect and act on various types of messages that are delivered to it or just pass through it. It is usually positioned to be deployed in the DMZ (the De-Militarized Zone) within corporate networks. As such, it can block malicious traffic, authenticate users with a variety of protocols, integrate with anti-virus products, and perform message throttling, thus delivering only the good stuff to your intranet servers while also off-loading them, decisively contributing to achieving some IT operational SLAs. More than that, OAG can switch protocols and transform messages. For instance, an organization may have SOAP-based web services and want to expose them as REST without any re-writing. Or implement SAML federation without touching origin systems. Or talk Kerberos or OAuth with clients and speak SAML with back-end servers. Or act as an FTP server so that incoming files are immediately sent to a processing pipeline. The possibilities are numerous. Having mentioned these few features and examples, it is not unreasonable to consider deploying OAG inside intranets as well. And that is not unusual, actually; it is a nice bridge with obvious benefits.

OES is designed to provide fine-grained authorization with externalized policies to client applications. It takes the coding of access decisions away from developers. Besides the obvious security benefit, it shortens the change cycle when a new security policy needs to be deployed: you simply avoid going through all the phases required for re-deploying your application just because of that change. It’s true the new policy needs testing, but that’s nowhere near what it takes to re-deploy a new application version; the time to market is drastically reduced. Now to the fine-grained part. OES can take a number of aspects into consideration when authorizing: the user identity, user roles, user attributes, context information about the request being made (like originating IP address), factors external to the request (like time of day or day of week) and, of course, request data. Those combined make it a very powerful authorization engine. It’s no coincidence that OES is the component behind OAM’s (Oracle Access Manager) authorization engine.

While OAG itself brings in authorization capabilities, OES offers a much richer model in this field. And if the organization already employs OES elsewhere, integrating it with OAG makes a lot of sense, because we end up with a single and consistent approach to authorization across applications.

    Main Article

    The Integration

OES’s basic architecture comprises a server and different client modules, called SMs (Security Modules). The server connects to a repository where policies are physically kept. The SMs are attached to client applications and connect either to the OES server or to the repository directly, depending on their configured mode (I will touch on this later). There are SMs available for Java, RMI, web services, WebLogic Server, WebSphere, JBoss, and MS SharePoint. When integrating with OAG, a Java SM is used: despite its core being a C process, OAG spins up a JVM for some of its functions.

    The integration hook between OAG and OES is the “OES 11g Authorization” filter, as seen below:

    OAG_policy

     

    This is how OAG delegates authorization decisions to OES. Under the covers, an OpenAZ API authorization call is made to the SM. In the filter, we observe the following:

    Resource: the OES resource for which authorization is being requested. There’s an implicit formation rule here: <OES_APPLICATION_NAME>/<OES_RESOURCE_TYPE>/<OES_RESOURCE_NAME>

    Later on I show how these placeholders map to the OES policy.

    Action: the action supported by the policy. More later on the OES policy.

    Environmental/Context attributes: any extra information that you want to pass in to OES. These map to attributes in the OES policy. In this example, INVOKING_APPLICATION is an attribute used in a policy condition.

    Besides these, OAG always passes what’s in the message’s authentication.subject.id attribute, which basically identifies the authenticated user principal name within the executing OAG circuit instance.
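    To make the resource formation rule concrete, here’s a minimal plain-Java sketch (the variable names are mine, purely illustrative) of how the three placeholders compose into the resource string. Note how the leading slash in the resource name produces the double slash that shows up in the runtime traces further below:

    public class OesResourceString {
        public static void main(String[] args) {
            String application  = "MyServices";             // <OES_APPLICATION_NAME>
            String resourceType = "restapi";                // <OES_RESOURCE_TYPE>
            String resourceName = "/resources/empSalaries"; // <OES_RESOURCE_NAME>

            // <OES_APPLICATION_NAME>/<OES_RESOURCE_TYPE>/<OES_RESOURCE_NAME>
            String resource = application + "/" + resourceType + "/" + resourceName;

            // Prints: MyServices/restapi//resources/empSalaries
            System.out.println(resource);
        }
    }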

    Ok, with these in mind, let’s look at what the OES policy looks like:

     

    OES_policy

     

    Read the OAG filter description again and note the mappings. What isn’t shown in the OES policy is <OES_RESOURCE_TYPE> and the authenticated user.

    As for <OES_RESOURCE_TYPE>, suffice it to say that the target name /resources/empSalaries is of type restapi, as shown:

     

    OES_resource_type

     

    The authenticated user is implicit in the policy. And in this case, the user is going to be authorized only if it has Managers as one of its application roles.

    So far, so good, but OAG can only talk to OES if it has the SM properly installed and configured.

     

    Installing and Configuring OES SM for OAG

    1 – Download and install the OES Client on the same machine as OAG. It’s an Oracle installer and simply copies the binaries to a given location. Let’s call it OES_CLIENT_HOME.

    2 – Create the SM in OES Console. Bind the applications to it. In the screenshot below, the oag_sm Security Module is bound to the MyServices application.

     

    OES_SM_definition

    3 – Edit $OES_CLIENT_HOME/oessm/SMConfigTool/smconfig.java.controlled.prp, pointing it to the OES admin server. The following properties are to be edited:

    • oracle.security.jps.runtime.pd.client.policyDistributionMode=controlled-push <change this only if you want to have another distribution mode>
    • oracle.security.jps.runtime.pd.client.RegistrationServerHost=<OES Admin Server>
    • oracle.security.jps.runtime.pd.client.RegistrationServerPort=<OES Admin Server SSL port>

    OES supports 3 distribution modes: controlled-push, controlled-pull and uncontrolled.

    controlled-push means that policies are distributed from the OES server to the SM. The SM provides a listener that the OES server connects to. In OES Console, the Distribute button doesn’t actually distribute policies by itself; it simply marks policies as “Ready for Distribution”. In the case of push, policies are then *immediately* distributed to the configured SMs. From this point on, the SM needs no connection at all to the OES server (it can even be shut down), since policies are all local to the SM.

     

    OES_policy_distribution

    I am only showing the image to illustrate. Trying to distribute policies at this point is going to generate an error, because the SM isn’t yet registered with the OES server.

     

    controlled-pull means that the SM pulls policies on a defined frequency (default 10 min) directly from the *OES repository* into its local cache. It doesn’t get policies through the OES server. But still, in order to be pulled, policies do need to be marked “Ready for Distribution” in OES Console.

    uncontrolled means that SMs read policies directly from the OES repository on demand and update the local cache if the requested policy is not available locally. And the SM still pulls new policies and changes from the OES repository periodically. As a result, the OES repository is supposed to be up and reachable by SMs at all times. Policies don’t need to be marked “Ready for Distribution”, i.e., there’s no further control once policies are created/changed in OES.

    Controlled distribution modes are only supported if the repository is database-based.

     

    4 – Run

    $OES_CLIENT_HOME/oessm/bin/config.sh -smConfigId <sm_name> -prpFileName $OES_CLIENT_HOME/oessm/SMConfigTool/smconfig.java.controlled.prp

    <sm_name> MUST match the SM you created in step 2 above. In this case, it is oag_sm.

    In this step you’re basically enrolling the SM, but no configuration gets written to the OES server. A couple of files are generated at this point under $OES_CLIENT_HOME/oes_sm_instances/<sm_name>. All of them are important, but the most relevant one is ./config/jps-config.xml, where you can find the following configuration data about the SM. The DistributionServicePort is a random port number, picked at enrollment time, that the SM uses to listen for policy distribution events.

    <serviceInstance name="pdp.service" provider="pdp.service.provider">
                <description>Runtime PDP service instance</description>
                <property name="oracle.security.jps.runtime.pd.client.policyDistributionMode" value="controlled-push"/>
                <property name="oracle.security.jps.runtime.pd.client.sm_name" value="oag_sm"/>
                <property name="oracle.security.jps.runtime.pd.client.SMinstanceType" value="java"/>
                <property name="oracle.security.jps.runtime.pd.client.RegistrationServerURL" value="https://slc05ylp.us.oracle.com:3002/pd-server"/>
                <property name="oracle.security.jps.runtime.pd.client.DistributionServicePort" value="16933"/>
                <property name="oracle.security.jps.pd.client.sslMode" value="two-way"/>
                <property name="oracle.security.jps.pd.client.ssl.identityKeyStoreFileName" value="/scratch/fmwapps/oes11gps2_client/oes_sm_instances/oag_sm/security/identity.jks"/>
                <property name="oracle.security.jps.pd.client.ssl.trustKeyStoreFileName" value="/scratch/fmwapps/oes11gps2_client/oes_sm_instances/oag_sm/security/trust.jks"/>
    </serviceInstance>

    5 – Create a file named jvm.xml under $OAG_INSTALL_HOME/apigateway/conf with the following contents. It tells OAG about the OES jar files and the Security Module name. It also defines system properties used by OAG, like the java.util.logging properties file (used for dumping out the SM’s authorization decisions).

    <ConfigurationFragment>
    <Environment name="JRE_HOME" value="/scratch/fmwapps/oag_11.1.2.2.0/apigateway/Linux.x86_64/jre" />
    <!-- OES Settings -->
    <Environment name="OES_CLIENT_HOME" value="/scratch/fmwapps/oes11gps2_client" />
    <Environment name="SM_NAME" value="oag_sm" />
    <Environment name="INSTANCE_HOME" value="$OES_CLIENT_HOME/oes_sm_instances/$SM_NAME" />
    <!-- Add OES Client to classpath -->
    <ClassPath name="$OES_CLIENT_HOME/modules/oracle.oes.sm_11.1.1/oes-client.jar" />
    <VMArg name="-Doracle.security.jps.config=$INSTANCE_HOME/config/jps-config.xml"/>
    <VMArg name="-Djava.util.logging.config.file=$JRE_HOME/lib/logging.properties"/>
    </ConfigurationFragment>

    6 – Restart the API Gateway process. No need to restart the OAG Node Manager.

    At this point, OAG’s “OES 11g Authorization” filter can be safely invoked. The filter is natively aware of the jvm.xml settings, so no external resource needs to be configured in OAG Policy Studio.

    If you’ve really followed through, at this point you might be wondering how the filter is going to work if policies were not distributed (supposing we’re in some controlled mode). The answer is that, exceptionally, policies get distributed to the SM upon the very first usage of the OAG filter. We can clearly see this if we analyze the following log snippets:

    OAG log snippet upon first usage of “OES 11g Authorization” filter:

    DEBUG 21/May/2014:11:28:55.369 [b9da8700] run filter [Call 'Authorize Access'] {
    DEBUG 21/May/2014:11:28:55.369 [b9da8700] run circuit "Authorize Access"...
    DEBUG 21/May/2014:11:28:55.369 [b9da8700] run filter [Authorize Access (OES 11g Authorization)] {
    DEBUG 21/May/2014:11:28:55.369 [b9da8700] creating subject from 'jane'
    DEBUG 21/May/2014:11:28:55.373 [b9da8700] checking 'GET' to resource: MyServices/restapi//resources/empSalaries
    DEBUG 21/May/2014:11:28:55.373 [b9da8700] env attribute name: 'INVOKING_APPLICATION' env attribute value: 'OAG'
    DEBUG 21/May/2014:11:28:55.634 [b9da8700] parsing (options value 2052) XML body from input stream of type java.io.ByteArrayInputStream. ContentSource is of type java InputStream
    DATA 21/May/2014:11:28:55.637 [b9da8700] getting class com.vordel.jaxprovider.libxml.XPathExpressionImpl with classLoader.loadClass()
    DATA 21/May/2014:11:28:55.638 [b9da8700] loaded class com.vordel.jaxprovider.libxml.XPathExpressionImpl
    DATA 21/May/2014:11:28:55.638 [b9da8700] getting class javax.xml.xpath.XPath with classLoader.loadClass()
    DATA 21/May/2014:11:28:55.638 [b9da8700] loaded class javax.xml.xpath.XPath
    DATA 21/May/2014:11:28:55.638 [b9da8700] getting class javax.xml.xpath.XPathConstants with classLoader.loadClass()
    DATA 21/May/2014:11:28:55.638 [b9da8700] loaded class javax.xml.xpath.XPathConstants
    DATA 21/May/2014:11:28:55.638 [b9da8700] getting class javax.xml.namespace.QName with classLoader.loadClass()
    DATA 21/May/2014:11:28:55.638 [b9da8700] loaded class javax.xml.namespace.QName
    DATA 21/May/2014:11:28:55.639 [b9da8700] getting class javax.xml.namespace.NamespaceContext with classLoader.loadClass()
    DATA 21/May/2014:11:28:55.639 [b9da8700] loaded class javax.xml.namespace.NamespaceContext
    DEBUG 21/May/2014:11:28:55.656 [b9da8700] Loaded XML file /scratch/fmwapps/oes11gps2_client/oes_sm_instances/oag_sm/config/jps-config.xml
    DEBUG 21/May/2014:11:28:55.656 [b9da8700] parsing (options value 2052) XML body from input stream of type java.io.FileInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:55.656 [b9da8700] release system resources for the loaded XML
    DEBUG 21/May/2014:11:28:56.216 [b9da8700] parsing (options value 2052) XML body from input stream of type sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:56.253 [b9da8700] parsing (options value 2052) XML body from input stream of type sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:58.742 [31663700] parsing (options value 2052) XML body from input stream of type java.io.ByteArrayInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:58.771 [31663700] parsing (options value 2052) XML body from input stream of type java.io.ByteArrayInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:58.775 [31663700] parsing (options value 2052) XML body from input stream of type java.io.ByteArrayInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:58.778 [31663700] parsing (options value 2052) XML body from input stream of type java.io.ByteArrayInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:59.071 [31663700] parsing (options value 2052) XML body from input stream of type java.io.ByteArrayInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:59.077 [31663700] parsing (options value 2052) XML body from input stream of type java.io.ByteArrayInputStream. ContentSource is of type java InputStream
    DEBUG 21/May/2014:11:28:59.988 [b9da8700] Request: {jane, GET, MyServices/restapi//resources/empSalaries}
    Result: true
    DEBUG 21/May/2014:11:28:59.988 [b9da8700] result from OES: true
    DEBUG 21/May/2014:11:28:59.989 [b9da8700] } = 1, filter [Authorize Access (OES 11g Authorization)]
    DEBUG 21/May/2014:11:28:59.989 [b9da8700] Filter [Authorize Access (OES 11g Authorization)] completes in 4620 milliseconds.
    DEBUG 21/May/2014:11:28:59.989 [b9da8700] ..."Authorize Access" complete.
    DEBUG 21/May/2014:11:28:59.989 [b9da8700] } = 1, filter [Call 'Authorize Access']
    DEBUG 21/May/2014:11:28:59.989 [b9da8700] Filter [Call 'Authorize Access'] completes in 2620 milliseconds.

    See the last line for how long it took. This is because the SM had to be registered with OES and policies distributed. Look at what happens in OES:

    [2014-05-21T11:28:57.664-07:00] [AdminServer] [TRACE] [] [oracle.jps.policymgmt] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 6af0f0aa917fd32e:-4e07427f:1461efea049:-8000-00000000000004ac,0] [APP: oracle.oes.admin.pd.ssl#11.1.1.3.0] [SRC_CLASS: oracle.security.jps.internal.policystore.entitymanager.impl.PDPRegistrationManagerImpl] [SRC_METHOD: registerPDP] registerPDP: PDPInfoEntry: {address=https://slc05ylp.us.oracle.com:16933/pd/PDClient, configurationID=oag_sm, instanceName=oagB_sm_slc03rfc.us.oracle.com__scratch_fmwapps_oes11gps2_client_oes_sm_instances_oagB_sm_config_jps-config_xml, isFusionApp=false, isTransactionalMode=false, appVersions={}}
    [2014-05-21T11:28:57.665-07:00] [AdminServer] [TRACE] [] [oracle.jps.policymgmt] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 6af0f0aa917fd32e:-4e07427f:1461efea049:-8000-00000000000004ac,0] [APP: oracle.oes.admin.pd.ssl#11.1.1.3.0] [SRC_CLASS: oracle.security.jps.internal.policystore.rdbms.DBStoreManager] [SRC_METHOD: getDataManagerInternal] JpsDataManager ThreadLocal: current='null', new='oracle.security.jps.internal.policystore.rdbms.JpsDBDataManager@49fb3c8'
    [2014-05-21T11:28:57.666-07:00] [AdminServer] [TRACE] [] [oracle.jps.policymgmt] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 6af0f0aa917fd32e:-4e07427f:1461efea049:-8000-00000000000004ac,0] [APP: oracle.oes.admin.pd.ssl#11.1.1.3.0] [SRC_CLASS: oracle.security.jps.internal.policystore.entitymanager.impl.PDPRegistrationManagerImpl] [SRC_METHOD: registerPDP] PDP registration: create a new PDPInfo.
    [2014-05-21T11:28:57.666-07:00] [AdminServer] [TRACE] [] [oracle.jps.policymgmt] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 6af0f0aa917fd32e:-4e07427f:1461efea049:-8000-00000000000004ac,0] [APP: oracle.oes.admin.pd.ssl#11.1.1.3.0] [SRC_CLASS: oracle.security.jps.internal.policystore.ldap.JpsLdapAttributeMapper] [SRC_METHOD: prepEntryForPersist] entryType PDP_INFO, contextID cn=oes_domain,cn=JPSContext,cn=jpsroot, Updated Attributes: {orcloespdpstarttime=orclOESPDPStartTime: Wed May 21 11:28:57.666 PDT 2014, orcloespdpaddress=orclOESPDPAddress: https://slc05ylp.us.oracle.com:16933/pd/PDClient, orcloespdpheartbeattime=orclOESPDPHeartBeatTime: Wed May 21 11:28:57.666 PDT 2014, orcloespdpconfigurationid=orclOESPDPConfigurationID: oagB_sm, objectclass=objectclass: top, orclOESPDPInfo, orcloespdpstatus=orclOESPDPStatus: registered, orcloespdpinstancename=orclOESPDPInstanceName: oagB_sm_slc03rfc.us.oracle.com__scratch_fmwapps_oes11gps2_client_oes_sm_instances_oagB_sm_config_jps-config_xml, cn=cn: https://slc05ylp.us.oracle.com:16933/pd/PDClient}

    Once policies are distributed, notice that a folder named “work” appears under $OES_CLIENT_HOME/oes_sm_instances/$SM_NAME/config. That’s where the SM keeps local policies. That folder is updated every time a successful distribution occurs.

     

    The “OES 11g Authorization” filter will finish much faster in subsequent executions, like:

    DEBUG 21/May/2014:11:47:56.791 [ba3ae700] run filter [Call 'Authorize Access'] {
    DEBUG 21/May/2014:11:47:56.791 [ba3ae700] run circuit "Authorize Access"...
    DEBUG 21/May/2014:11:47:56.791 [ba3ae700] run filter [Authorize Access (OES 11g Authorization)] {
    DEBUG 21/May/2014:11:47:56.791 [ba3ae700] creating subject from 'jane'
    DEBUG 21/May/2014:11:47:56.791 [ba3ae700] checking 'GET' to resource: MyServices/restapi//resources/empSalaries
    DEBUG 21/May/2014:11:47:56.792 [ba3ae700] env attribute name: 'INVOKING_APPLICATION' env attribute value: 'OAG'
    DEBUG 21/May/2014:11:47:56.811 [ba3ae700] Request: {jane, GET, MyServices/restapi//resources/empSalaries}
    Result: true
    DEBUG 21/May/2014:11:47:56.811 [ba3ae700] result from OES: true
    DEBUG 21/May/2014:11:47:56.811 [ba3ae700] } = 1, filter [Authorize Access (OES 11g Authorization)]
    DEBUG 21/May/2014:11:47:56.811 [ba3ae700] Filter [Authorize Access (OES 11g Authorization)] completes in 20 milliseconds.
    DEBUG 21/May/2014:11:47:56.811 [ba3ae700] ..."Authorize Access" complete.
    DEBUG 21/May/2014:11:47:56.811 [ba3ae700] } = 1, filter [Call 'Authorize Access']
    DEBUG 21/May/2014:11:47:56.811 [ba3ae700] Filter [Call 'Authorize Access'] completes in 11 milliseconds.

     

    Debugging OES SM in OAG

    If you ever want to debug OES authorization decisions in OAG, refer back to jvm.xml. There you will find the entry

    <VMArg name="-Djava.util.logging.config.file=$JRE_HOME/lib/logging.properties"/>

    Specify the following properties in logging.properties. You’re basically configuring a FileHandler to log FINEST level messages about OES authorization decisions to a file named java?.log in the user’s home directory.

    handlers= java.util.logging.FileHandler
    .level=INFO
    # default file output is in user's home directory.
    java.util.logging.FileHandler.pattern = %h/java%u.log
    java.util.logging.FileHandler.limit = 50000
    java.util.logging.FileHandler.count = 1
    java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
    # OES logging
    oracle.jps.authorization.level=FINEST
    oracle.jps.openaz.level=FINEST

    As a result, a typical debug output message looks like this:

    FINE: ========== Start Of Policy Evaluation Info ==========
    Application: MyServices
    Requested Resource Type: restapi
    Requested Resource: resources/empSalaries
    Requested Resource Present: false
    Requested Action: GET
    Request Subject Principals:
    class weblogic.security.principal.WLSUserImpl:jane
    Effective Roles Granted: [authenticated-role, Managers]
    Role-Mapping Policies: NONE
    Static Role Grants: NONE
    Denied Static Role Grants: NONE
    Authorization Policies:
    1.Policy Name: permit_EmpSalaries
    Matched Policy Principals:
    class oracle.security.jps.service.policystore.ApplicationRole:Managers
    Policy Principals Semantics: OR
    Matched Policy Resource-Actions:
    Resource = /resources/empSalaries, Action = GET
    Policy Obligations: NONE
    Policy Evaluation Result: GRANT
    Policy Rules:
    Rule Name: GetCarModelsPOLICY_RULE
    Rule Effect: GRANT
    Rule Condition: STRING_IS(INVOKING_APPLICATION,OAG)
    Evaluated Rule Attributes and Functions:
    INVOKING_APPLICATION(Dynamic, String) = OAG
    Rule Evaluation Result: GRANT
    ========== End Of Policy Evaluation Info ==========

    Wrapping Up…

    As we can see, OAG and OES are a powerful combination when protecting web APIs. In this post we showed what to expect from this integration, how to configure policies on both sides, how to install OES SM in OAG and how to debug authorization decisions. Hopefully this is useful for some of you out there.

    See you next time!

    IDM FA Integration flows


    Introduction

    One of the key aspects of Fusion Applications operations is users and roles management. Fusion Applications uses Oracle Identity Management for its identity store and policy store by default. This article explains how the user and roles flows work from different points of view, covering the ‘key’ IDM products for each flow in detail. With a clear understanding of how Fusion Applications works with Identity Management for user provisioning and roles management, you can improve your FA IDM environments by integrating them with the rest of the enterprise assets and processes. For example: if you need to integrate your current enterprise IDM with this solution, what are the flows you need to be aware of?

    Main Article

    FA relies on roles and privileges implemented in IDM to authenticate users and authorize operations. FA uses jobs in the ESS system to reconcile the users and roles in OIM. OIM, in turn, gets the corresponding data from the user and policy stores using LdapSynch (the provisioning and reconciliation process). This flow is described below.

    Fig1: FA IDM integration flow.

    A brief explanation of each topic in the main flow above:

    FA OID flow: OID holds policy information from FA. Basically, duty roles and privileges are created from FA in OID (the Policy or Security Store).

    Fig2: FusionApps and OID.

    FA OIM flow: FA/OIM provision users or roles to OIM/FA through SPML.

    For example: enterprise business logic may qualify the requester and initiate a role provisioning request by invoking the Services Provisioning Language (SPML) client module, as may occur during onboarding of internal users with Human Capital Management (HCM), in which case the SPML client submits an asynchronous SPML call to OIM.

    Or OIM handles the role request by presenting roles for selection based on associated policies.

    Or the products communicate with each other, providing challenge question responses, password reset procedures and more.

    Fig3: The picture above illustrates the flow just described.
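    To give a feel for the wire format, below is a schematic SPML add request. This is a hedged illustration only: the elements follow generic SPML v2 with the DSML profile, and the attribute names are hypothetical; it is not the exact payload HCM sends to OIM.

    <addRequest xmlns="urn:oasis:names:tc:SPML:2:0"
                xmlns:dsml="urn:oasis:names:tc:DSML:2:0:core"
                requestID="req-12345" executionMode="asynchronous">
        <!-- Container and attributes are illustrative, not the FA schema -->
        <containerID ID="Users"/>
        <data>
            <dsml:attr name="commonName"><dsml:value>Jane Doe</dsml:value></dsml:attr>
            <dsml:attr name="mail"><dsml:value>jane.doe@example.com</dsml:value></dsml:attr>
        </data>
    </addRequest>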

    OID OIM flow: OIM connects to OVD through the LDAP ITResource feature, which allows the connection and is also responsible for the LDAP Synch reconciliations from OID to OIM, as well as for the event handlers that OIM triggers if there is any update from there.

    Fig4: Visual explanation of the OID OIM flow.

    FA OIM flow: Here it’s the ESS jobs from FA that create users in OID or update FA from OID. 4.1) “Retrieve Latest LDAP Changes” reads from OID and updates FA if anything is missing (users, role assignments, etc.); 4.2) “Send Pending LDAP Requests” will send over to OIM any requests that have not yet been processed. (If you are using the FA UIs like Manage Users to create a user, it should happen almost immediately, but if you have bulk loaded employees and assignments, you need to run Send Pending LDAP Requests to get the requests processed.)

    Fig5: OAM and FA integrated.

    Conclusion

    Implementing an FA+IDM solution for an organization is a proposition that should take all the other flows into consideration, such as the ‘New Hire’ and ‘Authentication and Authorization’ flows. Proper planning and an understanding of the various dimensions provided by this solution and its concepts allow an organization to discern why, or even whether, they need Oracle IDM and FA wired to their enterprise IDM solution. It also highlights what the enterprise is willing to protect in terms of user details, and how best to offer Oracle protection in an integrated and effective manner.

    Other useful links:

    Oracle® Fusion Applications Security Guide ,11g Release 1 (11.1.1.5.0) : http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e16689/F323392AN1A795.htm

    Chained LDAP Authentication in OAM 11g


    Introduction

    In this post, we look at a simple way to configure a chained LDAP authentication scheme in OAM 11g R2. This post is part of a larger series on Oracle Access Manager 11g called Oracle Access Manager Academy. An index to the entire series with links to each of the separate posts is available.

    The problem we are trying to solve

    Consider a situation where the users that need to be authenticated by Oracle Access Manager do not all reside within the same LDAP directory. This is a fairly common situation which can arise as a result of mergers and acquisitions, or even of IT consolidation between different departments or business units within a single organisation. While it may well be desirable to consolidate all user records into a single LDAP structure over time, often the immediate needs of the business are such that this simply wouldn’t be feasible in the short term, particularly if there are challenges in terms of data consistency, format, password storage and so on that would need to be overcome as part of a migration project.

    While it is possible (and quite common) to address requirements of this nature using Oracle Virtual Directory – in short, inserting OVD in between OAM and the physical LDAP repositories in order to construct a single virtual tree – one can also use OAM’s pre-built authentication plugins to accomplish essentially the same thing, in a simpler way.

    The solution described below has been built and tested against OAM 11g R2 PS2 (11.1.2.2). I cannot guarantee that it will work against older versions.

    Creating a Chained LDAP Authentication Module

    The first step here is to ensure that all directories containing user information are correctly configured as User Identity Stores within OAM. To demonstrate the concept, I installed two separate Oracle Unified Directory instances on my test server and added both as Identity Stores using the OAM console – the below screenshot shows OUDStore and OUDStore2, which refer to distinct directory instances as can be seen from the differing port numbers:

    IDStores

    We then configure a custom Authentication Module, which will attempt to match the provided user ID against each of the directories in turn. In this simple example, the module will first do a lookup against the first directory (OUDStore). If that lookup succeeds, it will attempt authentication against OUDStore using the provided password. Should the initial lookup fail, it will attempt the lookup against the second directory (OUDStore2). Again, should that lookup succeed, it will attempt authentication against OUDStore2 using the provided password. Should both lookups (or either password authentication step) fail, then the entire authentication attempt will fail.

    We configure the module by including two instances of both the UserIdentificationPlugin and the UserAuthenticationPlugin. Both are standard OAM plugins that ship with the product.

    The screenshots below demonstrate the configuration of our ChainedLDAP custom Authentication Module:

    ChainedAuthNModule1 ChainedAuthNModule2 ChainedAuthNModule3 ChainedAuthNModule4 ChainedAuthNModule5

    This next screenshot shows the orchestration flow through the various steps:

    ChainedAuthNModuleOrch

    Note that “OUD1ID” (User Identification against OUD 1) is the first step. On failure of this step, we execute “OUD2ID” (User Identification against OUD 2). Note also that successful completion of either of the two User Authentication steps will result in a successful result for the orchestration as a whole.
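    In plain code terms, the orchestration amounts to a simple fall-through chain. The sketch below is a minimal Java illustration of the decision flow; the Directory interface is a hypothetical stand-in for the identification and authentication plugin steps, not an OAM API:

    public class ChainedLdapFlowSketch {

        // Hypothetical stand-in for the plugin steps; not an OAM API
        interface Directory {
            boolean lookup(String userId);                   // user identification step
            boolean authenticate(String userId, char[] pwd); // password check step
        }

        static boolean chainedAuthenticate(Directory oud1, Directory oud2,
                                           String userId, char[] pwd) {
            if (oud1.lookup(userId)) {                 // OUD1ID
                return oud1.authenticate(userId, pwd); // OUD1 authentication; its result is final
            }
            if (oud2.lookup(userId)) {                 // OUD2ID, runs only if the OUD1 lookup fails
                return oud2.authenticate(userId, pwd); // OUD2 authentication; its result is final
            }
            return false; // both lookups failed, so the whole attempt fails
        }
    }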

    The only remaining step is to build an Authentication Scheme that uses this new module. That can be done by cloning the standard “LDAPScheme” and changing the Authentication Module, as below:

    ChainedAuthNScheme

    When we assign the above Authentication Scheme to a resource within an Application Domain, we see that we can successfully authenticate using credentials from either of the OUD instances.

    What about Authorization?

    Extending the use case a little, we can build authorization conditions that make sense in this case by specifying group membership rules across both directories. In the screenshots below, as an illustration, I’ve included separate group membership conditions for each OUD instance, with an OR rule to ensure that either will allow access.

    ChainedAuthZ1

    ChainedAuthZ2

    ChainedAuthZ3

    Some practical considerations and limitations

    As with all approaches described on this site, this one is not a silver bullet. While it offers a simple way to allow OAM to authenticate users against more than one directory, it has a number of limitations, of which the below are just a few:

    • Collision of user names is not handled well. In the case where the same user ID existed in both directories, only the first would be checked. It is, of course, not possible for a single login process to handle the case where more than one distinct user has the same user ID, but it is important to be aware of this limitation regardless.
    • It is unlikely that this approach will scale particularly well beyond a handful of directories. You certainly wouldn’t want to check a user name against each of a large number of directories in sequence, for performance reasons.
    • The complexities of defining meaningful authorization policies based on group memberships across multiple directories shouldn’t be underestimated. This approach is probably best used in situations where authorization requirements are not stringent (such as allowing all users within all directories to access resources).

    Logging in OIM custom code


    Proper logging is one of the main considerations during custom development. This is no different in OIM projects in which custom code is being developed and deployed to OIM. Proper logging is a fundamental part of development, helping in finding and fixing issues and also in reporting relevant runtime conditions.

    This post shows how to leverage the Oracle Fusion Middleware infrastructure in which OIM runs in order to create proper logging information from custom code. It is not the intent of this post to cover all logging considerations; there are plenty of materials on the internet and book stores to cover the basics.

    OIM running on WebLogic leverages ODL for logging. ODL stands for Oracle Diagnostic Logging; the OIM-related documentation is available here. This documentation provides details on the out-of-the-box loggers, how they can be changed, where the logging statements go, and what each log level will produce. The image below depicts some of the OIM loggers:

    oim_logging1

    When it comes to proper logging in your custom code, the task is pretty simple. Below is a very basic example of an OIM event handler containing some logging-related code:

    ...
    import java.util.logging.Level;
    import java.util.logging.Logger;
    
    public class MyCustomEventHandler implements PostProcessHandler {
        
        private static Logger myLogger = Logger.getLogger("MY.CUSTOM.LOGGER");
        
        public EventResult execute(long l, long l1, Orchestration orchestration) {
           
            myLogger.entering("MyCustomEventHandler", "execute");
         
            try {
                myLogger.logp(Level.FINEST, "MyCustomEventHandler", "execute", "Trying to convert X to an integer");
                
                int x = Integer.parseInt("x");
            }
            catch (NumberFormatException e) {
                myLogger.logp(Level.SEVERE, "MyCustomEventHandler", "execute", "Error during operation "+e.getMessage(), e);
            } 
            myLogger.exiting("MyCustomEventHandler", "execute");
            
            return new EventResult();
        }
    
    ...
    
    

    Some important details about the code above:

    • Although it could, the code does not use ODL APIs; it uses plain java.util.logging classes. At runtime, the FMW stack redirects java.util.logging output to ODL logging. This makes your life easier when compiling the code, as you do not need extra libraries.
    • It is very important to mention that the code intentionally forces a Java exception by trying to parse an alpha String into a Java integer. The intention was to force the exception to happen in order to demonstrate the logging usage.
    • The logger being used is named ‘MY.CUSTOM.LOGGER’.
    • The code is an event handler, but the same logging approach can be used in other customizations like scheduled tasks, notification providers, plug-ins in general, and also in UI customizations.
    • There are different log levels being used by the code. Below is a simplified table showing the mapping between ODL and java.util.logging levels (click here for a complete table):
    Java Log Level ODL Level
    SEVERE ERROR:1
    WARNING WARNING:1
    INFO NOTIFICATION:1
    CONFIG NOTIFICATION:16
    FINE TRACE:1
    FINER TRACE:16
    FINEST TRACE:32

    When the custom code is loaded for the first time, ODL will create the proper logger instances, and they will be exposed through the /em console. Then the logging level can be changed according to the needs. The image below depicts the custom logger, and it shows how log level changes can be persisted to survive server restarts:

    oim_logging2
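    If you prefer editing configuration files over using /em, the persisted level typically shows up as a <logger> element in the server’s ODL configuration file. Below is a minimal sketch, assuming the standard FMW location $DOMAIN_HOME/config/fmwconfig/servers/<server_name>/logging.xml and the logger name from the example above:

    <!-- Sketch of the persisted logger entry in logging.xml -->
    <logger name="MY.CUSTOM.LOGGER" level="TRACE:32" useParentHandlers="true"/>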

    And below is the excerpt from the log file with the log statements generated by the code above:

    [2014-06-03T12:57:53.284-07:00] [wls_oim1] [WARNING] [ADF_FACES-30118] [oracle.adfinternal.view.faces.renderkit.rich.SelectItemUtils] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: xelsysadm] [ecid: 004yjFhM1xh2rIQ5Ib_Aif0006Tz000JZ4,0:3] [APP: oracle.iam.console.identity.self-service.ear#V2.0] [DSID: 0000KPZJ3LD2rIQ5Ib_Aif1JZYPa000008] [URI: /identity/faces/home] No help provider found for helpTopicId=modify_user.
    [2014-06-03T12:57:53.605-07:00] [wls_oim1] [NOTIFICATION] [] [OAM Notification Logger] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: oiminternal] [ecid: 004yjFhM1xh2rIQ5Ib_Aif0006Tz000JZ4,0] [APP: oim#11.1.2.0.0] Notification status true
    
    [2014-06-03T12:57:53.892-07:00] [wls_oim1] [TRACE:16] [] [MY.CUSTOM.LOGGER] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: oiminternal] [ecid: 004yjFhM1xh2rIQ5Ib_Aif0006Tz000JZ4,0] [APP: oim#11.1.2.0.0] [SRC_CLASS: MyCustomEventHandler] [SRC_METHOD: execute] ENTRY
    [2014-06-03T12:57:53.892-07:00] [wls_oim1] [TRACE:32] [] [MY.CUSTOM.LOGGER] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: oiminternal] [ecid: 004yjFhM1xh2rIQ5Ib_Aif0006Tz000JZ4,0] [APP: oim#11.1.2.0.0] [SRC_CLASS: MyCustomEventHandler] [SRC_METHOD: execute] Trying to convert X to an integer
    [2014-06-03T12:57:53.892-07:00] [wls_oim1] [ERROR] [] [MY.CUSTOM.LOGGER] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: oiminternal] [ecid: 004yjFhM1xh2rIQ5Ib_Aif0006Tz000JZ4,0] [APP: oim#11.1.2.0.0] Error during operation For input string: "x"
    java.lang.NumberFormatException: For input string: "x"
            at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
            at java.lang.Integer.parseInt(Integer.java:449)
            at java.lang.Integer.parseInt(Integer.java:499)
            at oracle.iam.demo.eventhandlers.MyCustomEventHandler.execute(MyCustomEventHandler.java:27)
            at oracle.iam.platform.kernel.impl.OrchProcessData.runPostProcessEvents(OrchProcessData.java:1490)
            at oracle.iam.platform.kernel.impl.OrchProcessData.runEvents(OrchProcessData.java:896)
            at oracle.iam.platform.kernel.impl.OrchProcessData.executeEvents(OrchProcessData.java:357)
            at oracle.iam.platform.kernel.impl.OrchestrationEngineImpl.resumeProcess(OrchestrationEngineImpl.java:948)
            at oracle.iam.platform.kernel.impl.OrchestrationEngineImpl.resumeProcess(OrchestrationEngineImpl.java:978)
            at oracle.iam.platform.kernel.impl.OrhestrationAsyncTask.execute(OrhestrationAsyncTask.java:134)
            at oracle.iam.platform.async.impl.TaskExecutor.executeUnmanagedTask(TaskExecutor.java:99)
            at oracle.iam.platform.async.impl.TaskExecutor.execute(TaskExecutor.java:69)
            at oracle.iam.platform.async.messaging.MessageReceiver.onMessage(MessageReceiver.java:68)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
            at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
            at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
            at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
            at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
            at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
            at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
            at $Proxy411.onMessage(Unknown Source)
            at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:583)
            at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:486)
            at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:388)
            at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4659)
            at weblogic.jms.client.JMSSession.execute(JMSSession.java:4345)
            at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3821)
            at weblogic.jms.client.JMSSession.access$000(JMSSession.java:115)
            at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5170)
            at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    
    
    [2014-06-03T12:57:53.892-07:00] [wls_oim1] [TRACE:16] [] [MY.CUSTOM.LOGGER] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: oiminternal] [ecid: 004yjFhM1xh2rIQ5Ib_Aif0006Tz000JZ4,0] [APP: oim#11.1.2.0.0] [SRC_CLASS: MyCustomEventHandler] [SRC_METHOD: execute] RETURN

    By using the approach described in this post, the custom logging statements will be written to the <managed-server-name>-diagnostics.log located, in the out-of-the-box configuration, under $DOMAIN_HOME/servers/<server_name>/logs. The Fusion Middleware logging infrastructure used by OIM has been the same since the first 11g release, so this blog post applies to all the 11g versions. You can also create your own ODL log handlers and redirect your custom logging to a different log file; check the documentation here on how to do that, and see the sketch below. Have fun!
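    A rough sketch of what such a custom handler definition can look like in logging.xml follows. The handler name, path and size limits are illustrative; the element structure follows the standard ODL configuration format, so check the referenced documentation for the authoritative details.

    <log_handler name="my-custom-handler" class="oracle.core.ojdl.logging.ODLHandlerFactory">
        <property name="path" value="${domain.home}/servers/${weblogic.Name}/logs/my-custom.log"/>
        <property name="maxFileSize" value="10485760"/>
        <property name="maxLogSize" value="104857600"/>
    </log_handler>
    <!-- Point the custom logger at the handler instead of the default server log -->
    <logger name="MY.CUSTOM.LOGGER" level="TRACE:32" useParentHandlers="false">
        <handler name="my-custom-handler"/>
    </logger>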

    Presenting the new IDM Deployment Wizard


    Introduction

    With the recent IDM 11gR2PS2 release Oracle has developed a new deployment tool that aims to automate and reduce the time required to install and configure Oracle Identity and Access Management Components.
    In this post we are going to present the benefits, the supported topologies and components, and the key points to keep in mind to conduct a successful IDM deployment.

    Architecture and Components

    Currently the Deployment Wizard supports two types of topologies:

    • Single Host/Domain: All components are installed in one host under a single domain. This is recommended for testing and demos but not for production due to the lack of redundancy.
    • Multi Host/Domain: Components are installed on multiple hosts and split into two domains for High Availability: AccessDomain (OAM) and GovernanceDomain (OIM). Having OAM and OIM in different domains also helps to maintain and patch each component separately.

    This approach also offers a mix of options:

    • Distributed: 8 hosts, consisting of 2 web hosts, 2 OAM hosts, 2 OIM hosts and 2 directory hosts.
    • Consolidated: 4 hosts, consisting of 2 web hosts, 2 IAM hosts (OIM + OAM + LDAP)

    There is also an option to install just OIM or just OAM but I will not cover those in this post.
    The tool supports OUD, OAM, OAAM *, OIM, SOA and OHS/Webgate but there are plans to include other IDM products in future releases.
    *OAAM is supported through an EDG-documented scale-out procedure.

    Automated vs. Manual Installation

    There are a few advantages and disadvantages in both cases; let’s discuss each option.

    • Automated. The biggest advantage is the time saved in deploying the components. The deployment of an HA, split-domain, consolidated topology can be done in a couple of days’ work. Also, the complexity and number of manual steps required are greatly reduced, translating into fewer errors, issues and planning time. Rather than having to manually install and configure each component (JRE, WLS, OUD, OAM, SOA, OIM), this new tool allows you to run a few commands to install and configure the whole stack. Another advantage is the ability to reproduce a successful install: once you have created a response file, it is easy to just change its values (hostnames, port numbers, passwords, etc.) and run the deployment tool again on another environment. That also leads to consistency between your environments, as they will all have the same basic structure and configuration.
    • Manual. The manual approach gives you more freedom and flexibility as to which components, architecture and products you want to install (though future releases will probably reduce this gap). The manual installation requires a considerable amount of time to plan, install and configure all components, and if the exact process is not followed, it can lead to a problematic environment down the road. The number of required manual steps is estimated at over a thousand, and it will require more than a week (even if you’re already familiar with the process) to get a fully working OAM-OIM integrated environment in a Highly Available architecture. Reproducibility is another problem. Trying to recreate a second environment (Development, Test, Production, DR, etc.) requires a controlled and documented installation process, and I’ve seen many customers fail to do so.

    Things to watch out for when deploying with the new tool

    The new tool isn’t a silver bullet and will require at least a minimum of preparation before starting. The tool has a “preverify” phase where it will try to validate your environment, but it will not catch all missing configuration, and the deployment will then fail later. Failing to follow some of the recommendations will result in errors down the road, requiring you to start the whole deployment process all over. So, in order to prepare and have a smooth installation, based on my first impressions, I would recommend the following:

    • Get familiar with the 11gR2PS2 Enterprise Deployment Guide (http://docs.oracle.com/cd/E40329_01/doc.1112/e48618/toc.htm). It will help you understand the new concept and to make the required preparations before starting the deployment;
    • Stick to the recommended architecture, whichever you choose, single domain or split domain, and to the number of hosts/components;
    • Having an NFS shared mount point to host the installation files makes the process even faster and easier. Make sure to mount the installation directory on the web hosts too; you can unmount it later, after the installation completes.
    • Dedicate some time to verify that all the hosts and infrastructure are correctly configured. Check that all hosts are resolvable both through DNS and hosts files (again, you can isolate the web hosts later, after the installation finishes), as well as kernel parameters, mount points, database, available disk space and temp directory, load balancer, etc. Refer to the EDG guide and make notes of all the requirements before starting the deployment.
    • When you create the Database Schemas with RCU, use two prefixes, but make sure to create the ORASDPM schema for both OIM and OAM. For example:
    RCU Prefix    Schemas
    EDGIAD        OAM, IAU, ORASDPM, MDS, OPSS, OAAM
    EDGIAG        OIM, SOA_INFRA, MDS, OPSS, ORASDPM

     

    • Before you even start to run the tool, check Support Note 1662923.1. There are some required manual steps that need to be executed before and right after executing the tool.
    • In case you encounter an error, the clean-up procedure basically instructs you to erase everything and start all over. In my experience I found some minor issues (low /tmp space or hosts not resolvable) that were not caused by the tool itself. In my case, just deleting the /stage/lcm/provisioning/phaseguards files for that particular phase lured the tool into thinking it hadn’t started the phase yet, and that allowed me to correct the issue and run the phase again. It might be worth a try before erasing everything and starting over (see the phase-driven invocation sketch after this list).
    • After the installation (and the manual steps described in Support Note 1662923.1, https://support.oracle.com/epmos/faces/DocumentDisplay?id=1662923.1) completes, there are still a couple of manual steps that need to be executed. Don’t forget to check the EDG guide and follow them through.
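    For orientation, the deployment tool is driven phase by phase from the command line. The sketch below reflects my recollection of the PS2 tooling and is indicative only; confirm the script location, flags and phase names against the EDG for your exact version.

    # Indicative only -- confirm script name, flags and phase names in the EDG
    cd $IDMLCM_HOME/provisioning/bin
    for phase in preverify install preconfigure configure configure-secondary postconfigure startup validate
    do
        ./runIAMDeployment.sh -responseFile ~/provisioning.rsp -target $phase
    done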

    Conclusions

    Although the new deployment tool does not fit all needs and currently supports only a few components, options and features, it’s a great step towards a simpler and more effective way to install and configure IDM components. In future releases there will be more flexibility and a more refined and robust tool available. We hope that this tool will help to provide customers with an easier way to test and deploy our products and reduce the number of issues and the time required to install IDM.

    References

    • IDM 11gR2PS2 EDG: http://docs.oracle.com/cd/E40329_01/doc.1112/e48618/toc.htm
    • Identity Management Deployment Repository Download Page: http://www.oracle.com/technetwork/middleware/id-mgmt/downloads/oid-11gr2-2104316.html
    • Support Note 1662923.1 – https://support.oracle.com/epmos/faces/DocumentDisplay?id=1662923.1