Filed under analytics, cloud computing.

Analyze the changes to your Salesforce org in your Salesforce org.


The Problem

Salesforce is an easy platform to develop on. It is so easy that decent-size organizations will find many people configuring and enhancing the system at the same time. One of the “joys” of being a Salesforce developer or admin is coexisting with the other developers and admins in the org. Oftentimes, while we are developing away in our own bubble, we aren’t even conscious of the number of changes being made by others to the very system we are changing ourselves.

There are many use cases where visibility into system changes is useful. Here are a couple of examples:

  1. A user (perhaps an admin) makes some changes they thought were a good idea at the time, but which may create other problems the user was unaware of.
  2. A developer goes into Salesforce to add a feature, and realizes it has already been added “under the radar” by someone else. Time is wasted just figuring out that it has already been done.

Salesforce does keep a basic record of all these changes. Navigating to Setup->Security Controls->View Setup Audit Trail takes you to a page like this:


This shows you the last twenty changes made to your org, or gives you the option to download a CSV dump of the last six months. Twenty changes can easily get made by one person in a hurry, pushing everything you are interested in into the CSV log. Obviously Salesforce has been busy building other features, and hasn’t shown the Setup Audit Trail any love in quite some time.

The Solution

After seeing some Twitter conversations about this, I realized I wasn’t the only one who needed a better way to get some visibility into the Audit Trail, so I decided to try building a solution 100% native on the Salesforce platform. Enter AuditForce!

AuditForce is architected as a series of scheduled and batch jobs which scrape this CSV file out of Salesforce, ingest it into a temporary object, then transform that temporary object into a final object which contains a running history of your audit trail—even spanning back before the 6 month limit of the CSV file.

Once this data is loaded into an object, it is easy to run reports, dashboards, or even post these changes to a Chatter feed!


This dashboard would be a good place for any admin or developer to start their day, looking at what changes were done in their org on the previous day.

The Implementation

Database Objects
AuditForce contains 3 database objects which don’t reference each other:


Setup Audit Trail Config contains a bunch of settings:


There needs to be just one record in this object. The name isn’t important.

  • username, password, and token are used to authenticate into the org. This is required, since this is implemented as a scheduled job and doesn’t have the user’s session Id context at the time it is executed sometime in the future.
  • Post to Chatter and Chatter Group Id apply if you want the Setup Audit Trail line items to end up in a Chatter group you create.
  • The Bulk Batch/Job IDs are written to and read from programmatically, so they can be ignored.

TEMP_Setup Audit Trail is a receptacle for the raw CSV file which is loaded using the Bulk API. It is close to the finished product, but the CSV data needs some light massaging before it is suitable to be loaded into the final Setup Audit Trail table. Due to limitations of the Bulk API, we can’t do any ETL during the import. We have to load the file as-is.

Apex Classes

In case you’re not familiar with the platform but are still reading this, Apex is the Java-esque programming language that Salesforce developers live and breathe. Here is an overview of the Apex classes and how they work together.


AuditTrailImporterScheduler is the schedulable method (lightweight wrapper) that kicks off the first batch process, AuditTrailClearTempGetCSV. Schedule this to run once a day or once an hour, to your liking. There is a comment block at the top of the method you can paste into an anon block in Salesforce to schedule it.
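As a sketch, that anon block boils down to a single System.schedule call (the job name and cron expression here are just examples, not the exact text from the class):

```apex
// Paste into Execute Anonymous (Developer Console or Workbench).
// This example schedules the importer to run every day at 1:00 AM.
System.schedule('AuditForce Daily Import',
                '0 0 1 * * ?',
                new AuditTrailImporterScheduler());
```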

AuditTrailClearTempGetCSV is a batch Apex class which clears out the temporary object in small batches until it’s empty, then calls AuditTrailGetAndLoadCSV when done.

AuditTrailGetAndLoadCSV is where most of the magic happens. This class authenticates back into the Salesforce org to capture a “unicorn”: the magical, proprietary “CONFIRMATIONTOKEN” needed to download the CSV file. Apparently there is no way to get this token other than to screen-scrape it. Not a best practice, but workable for some indefinite period of time. With a session ID in the header and the special token in the query string, we can get the CSV file using a simple HttpResponse instance’s getBody() method. The CSV header on the file needs to match the field names of the TEMP object exactly in order to load the file with the Bulk API. Here is an easy way to swap out the CSV’s header row:
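A minimal sketch of that swap, assuming `response` holds the HttpResponse from the download and using illustrative (not the package’s actual) field API names:

```apex
// Replace the CSV's first line with a header that matches the TEMP
// object's field API names, so the Bulk API maps columns directly.
String csv = response.getBody();
String newHeader = 'Date__c,User__c,Source_Namespace_Prefix__c,'
                 + 'Action__c,Section__c,Delegate_User__c';
String body = csv.substringAfter('\n');   // drop the original header row
String loadableCsv = newHeader + '\n' + body;
```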

We then make a series of calls to the Bulk API’s XML REST endpoints:

  • Create a new Bulk Load Job
    • Start the process, pull a Job ID
  • Add 1 Batch to the Bulk Load Job
    • Add the CSV to the batch by using the HttpRequest instance’s setBodyAsBlob() method.
  • Close (finalize) the Bulk Load Job
    • Close out the Bulk Load Job, and start the ingestion process.
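The three steps above might look roughly like this. The API version, XML payloads, and variables (`instanceUrl`, `sessionId`, `jobId`, `loadableCsv`) are illustrative, not the package’s exact code; consult the Bulk API docs for the precise contract:

```apex
// 1. Create a new Bulk Load Job: POST the job definition, then
//    parse the returned jobInfo XML to pull out the Job ID.
HttpRequest req = new HttpRequest();
req.setEndpoint(instanceUrl + '/services/async/29.0/job');
req.setMethod('POST');
req.setHeader('X-SFDC-Session', sessionId);
req.setHeader('Content-Type', 'application/xml; charset=UTF-8');
req.setBody('<?xml version="1.0" encoding="UTF-8"?>'
    + '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
    + '<operation>insert</operation>'
    + '<object>TEMP_Setup_Audit_Trail__c</object>'
    + '<contentType>CSV</contentType></jobInfo>');
HttpResponse res = new Http().send(req);
// ... extract jobId from res.getBody() ...

// 2. Add one batch: POST the CSV itself to /job/{jobId}/batch.
req = new HttpRequest();
req.setEndpoint(instanceUrl + '/services/async/29.0/job/' + jobId + '/batch');
req.setMethod('POST');
req.setHeader('X-SFDC-Session', sessionId);
req.setHeader('Content-Type', 'text/csv; charset=UTF-8');
req.setBodyAsBlob(Blob.valueOf(loadableCsv));
res = new Http().send(req);

// 3. Close the job: POST state=Closed, which kicks off ingestion.
req = new HttpRequest();
req.setEndpoint(instanceUrl + '/services/async/29.0/job/' + jobId);
req.setMethod('POST');
req.setHeader('X-SFDC-Session', sessionId);
req.setHeader('Content-Type', 'application/xml; charset=UTF-8');
req.setBody('<?xml version="1.0" encoding="UTF-8"?>'
    + '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
    + '<state>Closed</state></jobInfo>');
res = new Http().send(req);
```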

We write the Job ID and Batch ID in our Audit Trail config object, to be able to pass them to future methods which need to know them. This is a necessary evil to enable scheduled jobs in different contexts to share values with each other.

The last thing this class does is to schedule AuditTrailCheckBulkCompleteSchedule to run in one minute. You should probably just check out this class in its entirety.
AuditTrailCheckBulkCompleteSchedule is a lightweight schedulable wrapper for a batch wrapper: AuditTrailCheckBulkCompleteBatchWrapper.

AuditTrailCheckBulkCompleteBatchWrapper is needed to avoid the “Callout from scheduled Apex not supported” exception. When it is finished, it calls AuditTrailCheckBulkComplete.

AuditTrailCheckBulkComplete makes sure the Bulk Job is completed before continuing. It queries the Bulk API for job status. If the job is completed, it goes on to the next step: AuditTrailBatchTransformTemp. If the bulk load is still going, it schedules AuditTrailCheckBulkCompleteSchedule again for one minute in the future. NOTE: in practice these always seem to take a matter of seconds, so I don’t know if it will ever need to do the re-schedule or not.
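The check-and-reschedule logic might look roughly like this (assume `jobState` has already been parsed out of the Bulk API’s jobInfo XML; the job name is illustrative):

```apex
// If ingestion is done, continue the pipeline; otherwise try again
// in one minute via a freshly generated one-shot cron expression.
if (jobState == 'Completed') {
    Database.executeBatch(new AuditTrailBatchTransformTemp());
} else {
    Datetime inOneMinute = Datetime.now().addMinutes(1);
    String cron = inOneMinute.format('s m H d M \'?\' yyyy');
    System.schedule('AuditForce Bulk Check ' + inOneMinute.getTime(),
                    cron, new AuditTrailCheckBulkCompleteSchedule());
}
```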

AuditTrailBatchTransformTemp is batch Apex that takes everything in the temp object, which has been loaded from the bulk job, and does 3 transformations:

  1. Converts the custom text representation of a date and time into a datetime datatype
  2. Resolves the username field to an actual user via a lookup field (foreign/primary key)
  3. Because some “Action” values are more than 255 characters long, it makes a truncated version of the field called “Action (Short)” so we can have robust searching and filtering on it (fields longer than 255 characters don’t support this)
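A sketch of those three transformations inside the batch’s execute() method. All field API names here are guesses, and the date parsing assumes the CSV’s locale-formatted timestamp with a trailing time zone:

```apex
// Build a username -> user Id map once per batch for transformation 2.
Map<String, Id> userIdsByUsername = new Map<String, Id>();
for (User u : [SELECT Id, Username FROM User]) {
    userIdsByUsername.put(u.Username, u.Id);
}
for (TEMP_Setup_Audit_Trail__c t : scope) {
    // 1. Text like "1/15/2014 3:05:00 PM PST" parsed into a real
    //    Datetime (time zone suffix stripped before parsing).
    t.Date_Value__c = Datetime.parse(t.Date__c.substringBeforeLast(' '));
    // 2. Username resolved to a lookup to the User object.
    t.User_Lookup__c = userIdsByUsername.get(t.User__c);
    // 3. Truncated copy of Action so reports can search and filter it.
    t.Action_Short__c = (t.Action__c == null) ? null : t.Action__c.left(255);
}
update scope;
```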

Once these transformations are complete, we can remove entries from the temp table which are already in the live table by looking at timestamps. If we need to remove some entries from the temp table, we call AuditTrailBatchDeleteExistingTemp. Otherwise we call AuditTrailBatchInsertToFinal.

AuditTrailBatchDeleteExistingTemp is very simple batch Apex to delete records from the temp table earlier than a given datetime. It calls AuditTrailBatchInsertToFinal when done.

AuditTrailBatchInsertToFinal is simple batch Apex to query everything in the temp object and insert it into the live object.

Apex Trigger

The trigger loops through all the Audit Trail records being added to the live table, and posts them to Chatter!
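A hedged sketch of such a trigger (object and field API names are illustrative, not the package’s actual names):

```apex
// After-insert trigger on the live Audit Trail object. The Chatter
// group Id comes from the single config record described earlier.
trigger AuditTrailChatterPost on Setup_Audit_Trail__c (after insert) {
    Setup_Audit_Trail_Config__c cfg =
        [SELECT Post_to_Chatter__c, Chatter_Group_Id__c
         FROM Setup_Audit_Trail_Config__c LIMIT 1];
    if (!cfg.Post_to_Chatter__c) return;

    List<FeedItem> posts = new List<FeedItem>();
    for (Setup_Audit_Trail__c a : Trigger.new) {
        posts.add(new FeedItem(
            ParentId = Id.valueOf(cfg.Chatter_Group_Id__c),
            Body = a.Section__c + ': ' + a.Action__c));
    }
    insert posts;  // one bulk DML for the whole batch of records
}
```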


Here’s the whole shebang on github (for reference).

Since your org will be calling itself, you need to add it as an authorized endpoint:

Setup->Security->Remote site settings

I have packaged up the custom objects and Apex code on GitHub in a ZIP file with a package.xml file. You can install this ZIP file using Workbench.


Once installed, create custom object tabs and an app to hold these objects if you wish.

You will need to create a single Setup Audit Trail Config record and fill in the field values as shown towards the beginning of this article.

Schedule the AuditTrailImporterScheduler class to run. I would recommend scheduling it via an anon block execution:


You can copy and paste the comments from the top of the AuditTrailImporterScheduler class so you don’t have to type all that. The first time you run it, you can set it to run a couple of minutes from your current time just to make sure everything’s working. The entire string of jobs takes around five minutes to complete. You can watch it run via the setup menu’s “Scheduled Jobs” and “Apex Jobs”. If all looks good, delete the job and re-schedule it for your preferred time, daily or hourly.

I did not include any reports or dashboards in the package, just to keep it simple. Reports and dashboards are easy to create, and different users may have different ideas on what is best to report on. Most of mine are Summary reports of the “By Date, By User” variety. Here is an example:


Known Issues

  • First and foremost: this was made in my spare time for fun, no guarantees of any sort made!
  • Screen-scraping may eventually break, although it is relatively resilient, since it just looks for the hyperlink.
  • If your org’s CSV has more than 10,000 rows, it will only import the first 10,000. They are ordered by date descending, so only the oldest ones will be lost, which is the right behavior. If you want the old ones, do a manual load with the Data Loader the first time. Over time the 10K limit isn’t an issue, unless you are making 10K changes to your org quicker than your scheduled import frequency! In practice this should be close to a nonexistent problem, even in huge orgs.
  • The unit testing does fairly minimal assertions. It was made to cover the lines for runtime errors and allow for installations in production, but doesn’t do full positive and negative test cases with exhaustive test data sets.
  • Password and token are stored as encrypted text fields in the custom object. Not very secure. Make sure to lock the fields down with CRUD permissions, and consider using a user with limited privileges for these credentials.
  • This hasn’t been test run for long periods like months on end. I’m not sure how the scheduled jobs will play out over long periods of time.
  • This could be implemented in a more secure, more streamlined fashion using an outside server. But I wanted to see if it was possible with 100% native code.

The Future?

Salesforce will most likely release their own version of this eventually (“Safe harbor!”); this is a stop-gap measure until then.

If you are really paying attention, you may have realized that this architecture could actually pull the data from ANOTHER org, not just itself, due to the nature of the authentication. In fact, it may become desirable to create “Management Orgs” which pull down Audit Trail logs from one or more other orgs. Incorporated into these management orgs could be additional, rich data from version control systems on metadata which has been changed. See this previous article for more info on capturing all your metadata changes in GitHub.

The Setup Audit Trail data residing in the custom object would lend itself well to serving up custom Visualforce charts. Keir Bowden gives some great recipes for this in his new “Visualforce Development Cookbook”. This would allow you to design larger charts with more data on them than the somewhat constricting report and dashboard charts. You could also incorporate dynamic dropdown filters to cut through noise and re-render the graphs to answer questions quickly, such as filtering out the power admins from your Audit Trail to focus on the infrequent changes.


6 Responses to “AuditForce: Native surfacing of the Salesforce Setup Audit Trail”

  1. Arun

    Thanks for the detailed blog post!
    I tried something similar a few days back and was stuck at the “Maximum size of callout request or response = 3MB” limit. The File in my org is 4+ MB. I gave up previously.
    Let me know if you were able to work around this limit or have any ideas…

  2. Dave Lea

    This is great! Thanks for posting this. I do have one question, though.

    What type of permissions would a “dummy” license need to run this? Would it simply need read/edit on the setup audit trail config object and the permissions to run the apex classes?

    • Daniel Peter

      Great question!

      I haven’t tested this, but here is my guess for a base config:

      Full salesforce license (not platform, chatter plus, etc)

      No special settings on the user record (feature licenses, etc)

      Full CRUD on TEMP_Setup Audit Trails
      Read/Create on Setup Audit Trails
      Read/Update on Setup Audit Trail Configs
      “View Setup and Configuration” permission checked
      Add all the Apex classes in the package to “Enabled Apex Classes” for the profile
      *I THINK* “Dataloader Bulk” flag needs to be enabled on the profile
      Access to the Chatter Group they are posting to (OPTIONAL, if trigger is enabled)

      Those are my thoughts based on my white-box knowledge of how things are working. Obviously some testing will be needed. Start restricted, and open up as necessary :)

  3. Kal

    Hi there,

    It is working appropriately in my dev org, but we have issues using this in one of our company sandboxes. I get this error: “System.ListException: List index out of bounds: 1: Class.AuditTrailGetAndLoadCSV.getTrail: line 35, column 1 AnonymousBlock: line 1, column 1 AnonymousBlock: line 1, column 1”. Could you please help?

    Many Thanks

    • Daniel Peter

      Looks like it can’t find the download URL from the setup page.

      Try to log in as the user that is authing for this.
      Navigate to this page:
      (perhaps the user can’t even get this far? If so, that’s your problem!)
      Once at the page, view source or inspect element to see if you can find the magic hyperlinked “Download setup audit trail for last six months (Excel .csv file) »”
      the source should look like this:
      <a href="/servlet/servlet.SetupAuditTrail?id=&_CONFIRMATIONTOKEN=">Download setup audit trail for last six months (Excel .csv file) »</a>

      The problem with scraping is that you can’t guarantee the results. So perhaps Salesforce is using a different page structure for your page based on profile settings, org settings, etc.

      I’d love to find out what the final issue was. If you want to take this offline, contact me at DAN AT DANPETER DOT COM.

      Obviously my next version of this needs to have more descriptive error handling :)

      Thanks for giving it a whirl!