• Levi D.

    It's oddly not documented and doesn't show up like other profile permissions within reporting; however, in my experience you can report on this.  The "SSO Login (SAML 2.0)" profile permission is stored in profiles.integ_perms and its permission value is 256.  All the "perm" type fields on profiles can hold multiple permissions, so it's best to use the bitand function when looking at these fields.  If you want a filter that outputs only SSO profiles, it would look like:

    bitand(profiles.integ_perms, 256) equals 256

    Or you could output all of your profiles and add a column definition like this to see which are SSO enabled or not:

    if(bitand(profiles.integ_perms, 256) = 256, 'Yes', 'No')
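    The bitand comparison above is just a bitwise AND. A quick sketch in Python shows how the 256 bit is tested; the sample integ_perms values here are made up for illustration:

```python
# The "SSO Login (SAML 2.0)" bit value, per the post; the other bits below are made up.
SSO_BIT = 256

def sso_enabled(integ_perms: int) -> str:
    # Mirrors: if(bitand(profiles.integ_perms, 256) = 256, 'Yes', 'No')
    return "Yes" if (integ_perms & SSO_BIT) == SSO_BIT else "No"

# 261 = 256 + 4 + 1: several permission bits set, including the SSO bit.
print(sso_enabled(261))  # -> Yes
print(sso_enabled(5))    # -> No
```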

  • Levi D.

    Hi Kati,

    If you encounter an issue with an answer, you may report the issue on that answer's page.  If you scroll to the bottom of the page there is a "Was this answer helpful?" question; answer "No" to this and provide details of the issue.  No need to do this for answer 155 though, as it has already been addressed.

  • Levi D.

    You could use the editor loads trigger.  This would ensure the field always gets set based on the condition.  The downside is that if a user triggers the rule but makes no other changes and wants to close the record, they will be prompted to save and would have to click No.  This extra prompt and click would add up if this is a fairly common scenario.

    An alternate solution would involve developing a CPM that would be fired by business rules.  In this case you would probably use the Mail API to build and send the notification email instead of using a rule fired Email action.  No workspace changes or custom field additions would be required with this type of implementation.

  • Levi D.

    Hi Saji,

    There is a way to do this and it is documented in the Customer Portal API Documentation:

  • Levi D.

    The ID for the report is not part of the URL but instead is part of the body of the request.  For Postman I use the "raw" input set to JSON for the body instead of using "form-data".  The raw body in Postman would look like this:

    "id": 100001

    or if you use lookupName it would look like the example in the documentation:

    "lookupName": "Last Updated by Status"

  • Levi D.

    Yes, "aggregation" for the relationship type would be correct for your setup.  Aggregation means that when the parent record is deleted the related children records are deleted.  Association means that when the parent record is deleted the related children records remain; their foreign key relationship to the parent (via the Child Field) will be removed.

    You will also need to specify a Child Field for the relationship.  This is what will relate the parent (incident) to the children (Tickets$Notes).  You can go back to fields and create and name your own integer type field which you can then specify as the Child Field for the relationship.  Or, in the Child Field drop-down you can choose "Auto Generate New..." and it will create a field for the relationship.
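    The difference between the two delete behaviors can be sketched with a toy model. The table and field names below are illustrative only, not the actual product schema:

```python
def delete_parent(parent_id, parents, children, relationship):
    """Simulate deleting a parent under either relationship type."""
    del parents[parent_id]
    if relationship == "aggregation":
        # Aggregation: child records are deleted along with the parent.
        for cid in [cid for cid, row in children.items()
                    if row["parent_id"] == parent_id]:
            del children[cid]
    else:
        # Association: child records remain; only the Child Field
        # (the foreign key back to the parent) is cleared.
        for row in children.values():
            if row["parent_id"] == parent_id:
                row["parent_id"] = None

parents = {1: "incident 1"}
notes = {10: {"parent_id": 1, "note": "first"},
         11: {"parent_id": 1, "note": "second"}}
delete_parent(1, parents, notes, "aggregation")
print(notes)  # -> {} : the children were deleted with the parent
```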

  • Levi D.

    Check out your value on this configuration setting:  SRCH_ANS_ID
    Enables searching for an answer ID during a phrase search. When enabled, if the search query consists of only a number (e.g., 254), and there is an answer with that answer ID, that answer will be displayed at the top of the search results. Default is enabled (Yes).
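    The documented behavior amounts to a simple check on the query string. This is an illustration of that behavior in Python, not the product's actual code:

```python
def answer_id_shortcut(query: str):
    """If the whole query is a number, return it as the answer ID to
    surface at the top of the results; otherwise return None."""
    q = query.strip()
    return int(q) if q.isdigit() else None

print(answer_id_shortcut("254"))        # -> 254
print(answer_id_shortcut("reset 254"))  # -> None (only all-numeric queries match)
```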

  • Levi D.

    Once I have an IP in question, I get all the associated clickstream sessions and do various reporting on those sessions.  Here are some metrics I look at, all based on clickstreams data:

    number of sessions per hour - If this is consistent per hour and occurs every hour, or is a ridiculously high number, those are bot-like indicators.  This will also give you an indication of the IP's history on your site as far back as you have clickstreams data (by default 30 days).

    clickstream action count - This outputs each clickstream action that was hit at least once and how many times it was hit.  Some bots only hit the home page, and this report will reflect that.  I typically look for a small number of action types whose counts are similar to one another.

    number of sessions per user-agent - Some bots may identify with only 1 or 2 user-agents.  If there is a high proportion of sessions devoted to just 1 or 2 user-agents it may indicate bot-like behavior.

    number of sessions per contact - More important actions on our site require a login, so I value this metric.  I have yet to see bot-like activity on our site associated with a contact record.  In my case, I consider sessions with no associated contacts a potential indicator of a bot.
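    Several of the metrics above reduce to simple aggregations over session rows. A sketch with made-up clickstream fields (the field names are illustrative, not the actual clickstreams schema):

```python
from collections import Counter

# Hypothetical session rows for one IP under investigation.
sessions = [
    {"hour": 8,  "user_agent": "BotAgent/1.0", "contact_id": None},
    {"hour": 9,  "user_agent": "BotAgent/1.0", "contact_id": None},
    {"hour": 10, "user_agent": "BotAgent/1.0", "contact_id": None},
    {"hour": 10, "user_agent": "Mozilla/5.0",  "contact_id": 42},
]

sessions_per_hour  = Counter(s["hour"] for s in sessions)
sessions_per_agent = Counter(s["user_agent"] for s in sessions)
no_contact_share   = sum(s["contact_id"] is None for s in sessions) / len(sessions)

print(sessions_per_agent.most_common(1))  # -> [('BotAgent/1.0', 3)]
print(no_contact_share)                   # -> 0.75
```

A high share of sessions on one or two user-agents and a high no-contact share, taken together, match the bot-like indicators described above.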

    Lastly, I have a report that outputs the entirety of the clickstream data associated with an IP.  This helps me eyeball patterns and review the timing of events.  Sometimes a manual review is needed to verify bot-like activity.

    In general, when evaluating for bots I take many of the above metrics into consideration.  Additionally, I do an IP lookup to check any additional data points about the IP.  Here is one place you can do this:

    I like to see the location, whether it is static or dynamic, a possible proxy server, organization, comments on the IP, and hostname.  I document all associated hostnames for IPs I determine to be bot-like.  This is for cases where I may see multiple IPs from a similar hostname, so I can block the entire host/domain instead of trying to chase down every possible IP.  This is a big gotcha though: you must have a high level of certainty before blocking an entire host/domain.  The same is true if you ever block by a substring within a user-agent.

    Currently, I block 1-2 IPs in SEC_INVALID_ENDUSER_HOSTS per month, although this will vary on a per-site/interface basis.

  • Levi D.

    For custom fields you could use the "Text Area" data type as that allows up to 4000 characters.

    Similar to custom fields are system attributes (found in the object designer).  You can add one of these as "Text -> Long Text" data type to be able to add up to one megabyte of characters.

    Either of these options should work for you.

  • Levi D.

    I've attached a report that I use to check for possible bot activity by IP on the site.  I put dummy values in the IP filters since you will fill these in as you go along and they will be site-specific.  Here is a summary of how I use the report and how I update the three IP filters.

    The report is scheduled to run every hour but sends to no one.  However, there is an alert based on the one exception in the report; if the alert is triggered then I am sent an email.  This allows the report to run hourly but only notify me when one or more IPs trigger the alert.

    The alert is triggered if any records are in the report and a record will only show if an IP has generated at least 250 sessions within the past week on one of the interfaces.  If you want to change the parameters for triggering the alert, there are two items of interest:

    1.  The "Past Week of Data" filter is what sets the timeframe of one week

    2.  The "count >=" group filter is what sets the 250 sessions limit

    For your scenario you would likely want to change the #1 filter to only look at 24 hours' worth of data, but since you are looking at a smaller window of time you may also want to turn down the #2 filter.  For example, you may only want to look at IPs that generate 80 sessions within 24 hours.  You can always adjust these if you find yourself investigating too many non-bot IPs or not finding enough bot-like IPs.

    There are 3 IP filters on this report and here is how I use them:

    1.  Known Good IPs - I put IPs in this filter to exclude them from the report.  These are IPs that have been investigated, but the activity is considered legitimate and there is likely not a need to investigate again.  For example, IPs I found associated with Oracle that triggered this report went in this filter.

    2.  Known Bad IPs - I put IPs in this filter to also exclude them from the report.  These are IPs that have been investigated and determined to be bot-like.  I also add these IPs to the SEC_INVALID_ENDUSER_HOSTS configuration setting.  This filter and SEC_INVALID_ENDUSER_HOSTS have the same IPs in them.

    3.  Temp Excluded IPs - I also put IPs in this filter to exclude them from the report.  However, these are IPs that have been investigated without conclusive evidence yet that they are or are not a bot.  I clear this filter about every 1-2 weeks and may end up re-investigating an IP that re-triggers the alert at some point in the future.
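    The three exclusion filters plus the count threshold reduce to one set operation. A sketch in Python; the IPs and counts below are made up:

```python
def ips_to_investigate(session_counts, known_good, known_bad, temp_excluded,
                       threshold=250):
    """Flag IPs meeting the session threshold that are not in any of the
    three exclusion filters (mirrors the report's group filter)."""
    excluded = known_good | known_bad | temp_excluded
    return {ip for ip, n in session_counts.items()
            if n >= threshold and ip not in excluded}

# Sessions per IP over the reporting window (e.g., the past week):
counts = {"198.51.100.7": 900, "203.0.113.5": 400, "192.0.2.10": 12}
flagged = ips_to_investigate(counts,
                             known_good={"203.0.113.5"},
                             known_bad=set(),
                             temp_excluded=set())
print(flagged)  # -> {'198.51.100.7'}
```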

    NOTE:  This report only provides me IPs of interest.  I then run a dashboard of reports that are all clickstreams based to help determine if an IP is bot-like or not and use other tools to look at user-agents and hostname association of IPs.

    I hope this helps.

  • Levi D.

    Also, the CLIENT_SESSION_EXP configuration setting can be changed.  However, if the "Session Timeout field" is set on a profile, it will take precedence over the configuration setting.  This bit of documentation talks about both items:

    Both items can be set/used.  In general, I would use CLIENT_SESSION_EXP to set a global timeout and use the "Session Timeout field" for any timeouts that differ from CLIENT_SESSION_EXP.

  • Levi D.

    "How can I get it to send to me immediately when the profile is changed and hits the exception?"

    With a scheduled/alert report, the best that can be done is every 15 minutes.  You will need to edit the Recurrence and expand each hour to set the 15-minute schedules.

    "Why is it sending it to me every hour, even when nothing has changed?"

    How are your filters set?  Do you have a relative filter based on a timestamp?  If you decide to change your schedule to every 15 minutes, then I recommend only looking at the past 15-20 minutes of the relevant data by using a relative filter.  That way, if there is a change at 7:36, the 7:45 schedule will trigger the alert and send it, and subsequent schedules will not pick up that change again.
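    The timing reasoning can be checked numerically. A sketch; the 16-minute lookback is my own stand-in for "the past 15-20 minutes":

```python
def runs_that_alert(change_minute, runs=(0, 15, 30, 45, 60), lookback=16):
    """Which scheduled runs (minutes past the hour; 60 = next hour's :00)
    would see a change made at change_minute, given the relative filter."""
    return [r for r in runs if 0 <= r - change_minute < lookback]

# A change at 7:36 is picked up by exactly one run, the 7:45 schedule;
# the 8:00 run no longer sees it because it is outside the lookback window.
print(runs_that_alert(36))  # -> [45]
```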

  • Levi D.

    Try using a system attribute.  They are similar overall to custom fields, and with regard to masking functionality they allow you to define a regular expression as a pattern (such as the one you referenced).  Custom fields have input masks, but these do not appear to be as dynamic as the masking capability within system attributes.

    One gotcha though: if you need to use this field within business rules, then a system attribute may not be an option, as they are not exposed to business rules.

  • Levi D.

    Check out this table in the data dictionary:  Incidents to Contacts (inc2contacts)

    Table structure would likely be:

    contacts -> inc2contacts -> incidents (all inner joins)

    You can output the prmry field too and it will tell you if a contact is the primary contact on the incident or not.
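    The join structure can be sketched with toy rows. The prmry flag matches the data dictionary; the other column names below are illustrative:

```python
contacts     = [{"c_id": 1, "name": "Ann"}, {"c_id": 2, "name": "Bob"}]
inc2contacts = [{"c_id": 1, "i_id": 100, "prmry": 1},
                {"c_id": 2, "i_id": 100, "prmry": 0}]
incidents    = [{"i_id": 100, "ref_no": "240101-000001"}]

# contacts -> inc2contacts -> incidents, all inner joins:
rows = [(c["name"], i["ref_no"], "primary" if link["prmry"] else "secondary")
        for link in inc2contacts
        for c in contacts if c["c_id"] == link["c_id"]
        for i in incidents if i["i_id"] == link["i_id"]]
print(rows)  # -> [('Ann', '240101-000001', 'primary'), ('Bob', '240101-000001', 'secondary')]
```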

    Looks like the standard "Incident Contacts" (ID = 9011) report may be a potential starting point for you too as it has part of the table structure in place already.