Friday, 2 August 2019

Community/Experience Cloud Certification

Please do continue to like, share, and subscribe to the sfdconestop YouTube channel if you find the Salesforce tutorials (100+ videos) informative!
▶️Subscribe:  https://www.youtube.com/c/sfdconestop
👉 https://sfdconestop.blogspot.com/
👀 Follow us on LinkedIn: https://www.linkedin.com/company/sfdconestop
 
The Experience Cloud Certification is a tough exam if you do not have hands-on experience with Community Cloud. I followed the trailmix and study guide below to get a good hold on all the community-related topics.
 
Watch the sfdconestop Experience Cloud tutorial playlist to understand the use cases - https://youtube.com/playlist?list=PLReSpXrazNv5wgbxMLViNhfpb9-ifJnfp

Study Guide:
https://trailhead.salesforce.com/help?article=Salesforce-Certified-Community-Cloud-Consultant-Exam-Guide
  • Time allotted to complete the exam: 90 minutes
  • Passing score: 62%
  • Registration fee: USD 200, plus applicable taxes as required per local law
  • Retake fee: USD 100, plus applicable taxes as required per local law


Trailmix:
https://trailhead.salesforce.com/en/users/sfdcfan/trailmixes/community-cloud-certification

It's very important to get hands-on experience implementing a community in a Developer Edition org, both to avoid confusion and to eliminate the wrong answers on the exam.

It is recommended to go over Quizlet and ProProfs to get a hold on the sample questions and to self-assess.
Self Assessment - ProProfs

Go through the below blogs as well for quick revision -
http://www.salesforcechris.com/community-cloud-consultant-resource-guide/
http://www.alwaysablezard.com/salesforce/salesforce-certified-community-cloud-consultant-exam-tips/

Thursday, 11 April 2019

Org Migration Readiness

During an org migration, your org moves from one instance to another. If you follow our best practices, this maintenance should be seamless. Below are some frequently asked questions regarding org migrations. Pre & Post org migration checklists attached.


Frequently Asked Questions

What actions do I need to take to prepare for an org migration? What will happen if I don’t follow Salesforce Infrastructure Best Practices?
If you are not following our best practices outlined on the Plan and Prepare for Org Maintenance and Releases site, your end users may not be able to access Salesforce after the maintenance is complete.
To avoid unintended service disruptions, you may need to take the following actions:
i. Enable My Domain (NOTE: My Domain is required for customers that have requested the org migration), and if you have any hard-coded references (for example, na1.salesforce.com), update them to relative URLs (for example, login.salesforce.com or your My Domain subdomain) prior to the org migration.


Will an org migration impact Live Agent?
It's possible. During an org migration, your org’s instance name changes. When this happens, the URL you use to access Live Agent/SOS changes. Chat clients and deployment code supplied by Salesforce react to this change and appropriately forward HTTP requests to the new endpoint, but some third-party or custom applications, including Live Agent custom REST clients, may not. These custom applications will not be able to find your account on your previous instance and will likely fail.

To minimize impact to your Live Agent/SOS implementation, follow best practices and ensure your Live Agent custom REST client can properly redirect requests to a new instance of the Live Agent service following any maintenance involving a move for your org. The best method to avoid these issues with your custom client (which, again, will not automatically direct requests to the correct endpoint) is to handle the SwitchServer response and use its 'newUrl' property for the request that resulted in this response and for all subsequent requests. For more information on updating your custom client and testing, read the article How to update your Live Agent custom client when your org instance changes. This ensures your custom client will not encounter issues after a site switch, and gives you ample time to later update the endpoint used from the start of its execution.
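As a rough illustration of that logic (a hedged sketch, not Salesforce's reference client; apart from the SwitchServer message type and its newUrl property, the names below are assumptions), a custom client watches each message it receives and swaps its endpoint:

// Hypothetical fragment of a custom Live Agent REST client
public with sharing class LiveAgentClientSketch {
    private String liveAgentEndpoint; // current Live Agent server URL

    public void handleMessage(Map<String, Object> msg) {
        if ((String) msg.get('type') == 'SwitchServer') {
            Map<String, Object> body = (Map<String, Object>) msg.get('message');
            // Use newUrl for this request and all subsequent requests
            liveAgentEndpoint = (String) body.get('newUrl');
        }
    }
}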

For more information about Live Agent endpoints and what is meant by hard-coded Live Agent references, review the article, Live Agent server (endpoint URL) has changed and now Live Agent Chat is no longer working.








Site Switch Readiness


What is a Site Switch?

Each Salesforce instance is built and maintained in two geographically separate locations. An instance is actively served from one location (the active site) with transactions replicating in near real-time to the other completely redundant location (the ready site). This infrastructure model allows us to switch the location of the active site for maintenance, compliance, and disaster recovery purposes, which is referred to as a site switch.

During a site switch, your org remains on the same instance, but the instance is served from the secondary data center. During the site switch, the org will be in read-only mode for about an hour.
 
Email IP addresses Whitelist -

 
Important Actions

A. Subscribe to Trust Notifications to know when site switches happen.

B. Follow Salesforce infrastructure best practices by not restricting access to Salesforce IP ranges, removing hard-coded references, and by setting your DNS timeout value to 5 minutes (default setting).

C. Customers with Live Agent/SOS custom clients should ensure those clients can properly handle redirects to the new active site, otherwise a disruption to your Live Agent service can occur. The best method to avoid these issues is to handle the SwitchServer response and use the 'newUrl' property for the request that resulted in this response and all subsequent requests thereafter.
Frequently Asked Questions


1. How does Salesforce communicate a site switch?

Planned site switches are posted to the maintenance calendar on our Trust site at status.salesforce.com. If we need to perform a site switch during an incident to bring an instance back online, the incident record on status.salesforce.com will be updated to reflect that information. Sign up for Trust Notifications regarding your instance to receive maintenance notification emails (reminders, starting, updates, and completed) as well as incident notification emails (new, updates, resolved, and root cause). See the Trust Notification User Guide for more information on how to sign up for these emails.

2. How long does a site switch take?

Currently, site switches take approximately one hour to complete. As we improve our operational processes through practice and iteration, our goal is to reduce this window. For planned site switches, we post the anticipated site switch activity window to Trust.

3. Can I opt out of a site switch?

Individual orgs cannot be opted out of a site switch. Due to the multi-tenant architecture of our infrastructure, all orgs on the instance must undergo the site switch at the same time.

Planned site switches are only scheduled during preferred system maintenance windows. We ask that you plan maintenance activities for your Salesforce org (software upgrades, integration changes, etc.) outside of the preferred system maintenance windows.

4. Will I be able to access my org during a site switch?

During planned site switches, your org should be available in read-only mode for the duration of the site switch activity unless otherwise stated. If, for any reason, read-only mode will not be available, the maintenance record on Trust will be updated to reflect the expected impact to availability.

5. What actions are required to prepare for a site switch?

If you already follow our infrastructure best practices by not restricting access to Salesforce IP ranges and setting your DNS timeout value to 5 minutes (default setting), a site switch should be seamless to your users.

Otherwise, if you are restricting access to certain IP ranges or data centers, update your network settings to include the complete list of Salesforce IP ranges in order to avoid any unintended service disruptions following a site switch. And if you control your own DNS timeout values, you may need to refresh your DNS cache and restart any integrations following the maintenance.


6. How does a site switch impact previously scheduled activities (weekly exports, Apex jobs, etc.) and Apex callouts?

Ongoing activities will be paused prior to the site switch and resumed following the site switch. Activities scheduled during the site switch will start following the site switch.

A small subset of Apex, Batch Apex, REST API, SOAP API, and Bulk API jobs started prior to the site switch may return an error following the maintenance window. If you receive an error resulting from a previously scheduled job following the maintenance window, restarting the job will return the expected results. We recommend rescheduling large or long-running jobs after the site switch completes for the most seamless experience.

Apex callouts to external services will continue to execute during the maintenance, and since these frequently result in follow-up DML calls to the Salesforce application, you may experience issues with intended program flows because the application will be in read-only mode. We recommend preventing these callouts from executing in read-only mode. For more information on how to prevent these callouts, see Apex Callouts in Read-Only Mode.
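As a rough illustration, an Apex class can check the org's read/write mode before making a callout. System.getApplicationReadWriteMode() is the standard check described in that article; the class and endpoint below are made-up examples:

public with sharing class ExampleCalloutService {
    public static void makeCallout() {
        // Skip the callout while the org is read-only (for example, during a site switch)
        if (System.getApplicationReadWriteMode() == ApplicationReadWriteMode.READ_ONLY) {
            return; // defer or re-queue the work instead
        }
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/api/orders'); // hypothetical external service
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        // Follow-up DML on the response would fail in read-only mode, hence the guard above
    }
}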


7. How does a site switch impact Web-2-Lead, Web-2-Case, and Email-2-Case activities?

Web-2-Lead, Web-2-Case, and Email-2-Case submissions that occur during the site switch will be queued and processed following the completion of the site switch.


8. Will a site switch impact Live Agent?

Yes. During a site switch, your org’s active site gets switched to the ready location, and the ready site gets switched to the active location. When this happens, the URL you use to access Live Agent/SOS changes. Chat clients and deployment code supplied by Salesforce react to this change and appropriately forward HTTP requests to the new endpoint, but some third-party or custom applications, including Live Agent custom REST clients, may not. These custom applications will not be able to find your account on your previous instance and will likely fail.

To minimize impact to your Live Agent/SOS implementation, follow best practices and ensure your Live Agent custom REST client can properly redirect requests to a new instance of the Live Agent service following any maintenance involving a move for your org. The best method to avoid these issues with your custom client (which, again, will not automatically direct requests to the correct endpoint) is to handle the SwitchServer response and use its 'newUrl' property for the request that resulted in this response and for all subsequent requests, as sketched in the org migration section above. For more information on updating your custom client and testing, read the article How to update your Live Agent custom client when your org instance changes. This ensures your custom client will not encounter issues after a site switch, and gives you ample time to later update the endpoint used from the start of its execution.


For more information about Live Agent endpoints and what is meant by hard-coded Live Agent references, review the article, Live Agent server (endpoint URL) has changed and now Live Agent Chat is no longer working.


9. How does a site switch impact ongoing sandbox refreshes?

Sandbox refreshes that are not completed prior to the site switch will be stopped. The sandbox refresh will restart (not resume) following the site switch. In addition, customers will not be able to initiate a sandbox refresh during the site switch.

Email2Case Setup/Troubleshooting

Email-to-Case sfdconestop demo video -

When a customer sends an email to the Email-to-Case address, two cases are created.

Duplicate cases being created in Salesforce by Email-to-Case can have a number of causes: a workflow rule, an error in the setup on the email server, inaccurate forwarding to the email services address, or the organization recently switched to On-Demand Email-to-Case but the Email-to-Case Agent has not been deactivated.

Try the following to narrow down the cause:

1. Check all Process Builders/Flows to ensure that an inadvertent automation isn't creating the duplicate cases.
If no Process Builder or Flow is causing the duplicates, proceed with testing and replication.

2. Check whether you can replicate the issue with both the customer-facing email address and the service address (the service address is the long address that Salesforce generates when creating a new routing address):
First, attempt to reproduce the duplicates by sending an email to the address that customers use. (This should trigger a duplicate, but note that it can be intermittent.)
Next, send an email directly to the email service address that the emails are being forwarded to and see if this creates a duplicate. More often than not, when emailing the service address directly you will find that no duplicate cases are created.
Create an email snapshot to see how many emails are received in the org. If you capture two emails, then Email-to-Case is working as expected, since two emails will generate two cases. For more, please review Troubleshoot Inbound Email Errors with Snapshots.
If duplicate cases were created when sending emails to the customer-facing address but not to the email service address, a setting on the email server is likely causing the duplication. For example, the redirecting rule on the email server is not set correctly, or the email address is part of a CC list. In this instance, the email administration team needs to get involved and help by looking into the forwarding of emails and any related settings in place on the email server. Somewhere prior to forwarding to the service address, the email is being duplicated and two emails are being sent to Salesforce.

3. Finally, ensure the Email-to-Case Agent is no longer active. If both the agent and On-Demand Email-to-Case are enabled and using the same email address, each can create its own case.

4. Analyze email snapshot headers using http://mxtoolbox.com/NetworkTools.aspx

See Also:
Troubleshooting with Inbound Email Snapshots
Defining Email Service Addresses
Configuring Routing Addresses for Email-to-Case and On-Demand Email-to-Case



Deferred Sharing Rule Recalculation - Deployment Impact

When you deploy sharing rules via the Metadata API or change sets, a sharing recalculation runs to update user access to records. For larger organizations, this recalculation might take a significant amount of time even after the deployment has completed successfully. Here's how you can minimize the impact caused by a sharing rule deployment.



Enable Deferred Sharing Rule Recalculation or Parallel Recalculation

Deferred recalculation lets you apply sharing rule changes at a later time, after you create or edit them.
Parallel recalculation takes advantage of multiple threads to speed up the recalculation of each object.
To monitor your deployments done via the Metadata API:
In Classic:

1. Go to your name | Setup.
2. Click Deploy | Monitor Deployments.

In Lightning:
Go to Setup
Click on Environments under Platform Tools in the left hand pane
Click Deploy | Deployment Status.

To monitor your deployments via Change Sets:

The outbound change set page will show the results of the deployment, and an email notification will be sent out. If you notice that your deployment is successful but you are experiencing sharing access issues, you can take steps to mitigate them.

To monitor Sharing Rule recalculations running in parallel mode:
In Classic
1. Go to your name | Setup.
2. Click Monitoring | Jobs | Background Jobs.

In Lightning:
Go to Setup
Click on Environments under Platform Tools in the left hand pane
Click Jobs | Background Jobs.

Parallel recalculation jobs are listed together with other background processes, including a percentage estimate of the recalculation progress.

Good to know: Before you contact Salesforce Support to have these features enabled, check out Considerations Before Making Org Wide Sharing Changes, Deferred Sharing Rule Recalculation, and Parallel Recalculation.

Tuesday, 2 April 2019

Bulk API V1/V2

 
Now generally available in Winter ’18 (API version 41.0), Bulk API v2 brings the power of bulk transactions from Bulk API v1 into a simplified, easier-to-use API.

Starting with the Summer '20 release, Bulk API can process 15,000 batches in a 24-hour period, up from the earlier limit of 10,000. If the bulk jobs page doesn't load, add the suffix /750?job_info_only=true to the URL.

Bulk API v2 lets you create, update, or delete millions of records asynchronously, just like Bulk API v1, but offers the following core improvements:
Bulk API v2 uses the same REST API framework as other Salesforce REST APIs. You can use OAuth authentication just like any other Salesforce REST API and take advantage of features like CORS (cross-origin resource sharing) support.
Bulk API v2 does away with the need to manually break up data into batches. Simply submit jobs with the full set of records, and Salesforce automatically determines the most efficient way to batch the data.
Bulk API v2 simplifies the basic daily limits. Instead of having limits based on the number of Bulk jobs and batches, you’re simply limited to a maximum number of records (100 million) per 24 hour period.

Bulk API v2 also has a number of new features that aren’t available in Bulk API v1, but more on that later.

Additional v2 Features

Bulk API v2 goes beyond Bulk API v1 and offers some additional features to make your life easier. These include:
When creating a new job, you can also include the job data in the same request, using a multi-part request. This is limited to smaller sets of records (up to 20K characters).
You can specify different column delimiters and line endings for your CSV data, including:
backquotes, carets, pipes, semi-colons, and tabs for delimiters (instead of commas)
carriage-return & linefeed line endings (instead of just linefeeds)
You can get a list of all Bulk API jobs in your org (active and completed) and use query parameters to filter this list. For example a GET request to /services/data/vXX.X/jobs/ingest?concurrencyMode=parallel will return a list of all jobs in your org using parallel concurrency mode for processing.
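As a rough sketch of the job-creation call (shown here from Apex; the API version, object, and reuse of the running user's session are illustrative only, and a real integration would authenticate via OAuth):

// Create a Bulk API v2 ingest job with a single POST
HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getOrgDomainUrl().toExternalForm() + '/services/data/v47.0/jobs/ingest');
req.setMethod('POST');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
req.setHeader('Content-Type', 'application/json');
req.setBody(JSON.serialize(new Map<String, String>{
    'object' => 'Account',
    'operation' => 'insert',
    'lineEnding' => 'CRLF' // carriage-return & linefeed, one of the v2 options
}));
HttpResponse res = new Http().send(req);
System.debug(res.getBody()); // the response includes the job id and a contentUrl for uploading the CSV data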

Note that you can’t do Bulk queries in Bulk API v2 yet.

Bulk API v2 reduces the amount of code you have to write and gives you more options on how to process your data. Plus, it simplifies data limits, so you can spend less time worrying about how much data you can work with, and spend more time actually running your integration jobs. Consider taking the time to switch over to using Bulk API v2 if you’re using v1, and your code will be slim and trim in no time!
Further resources

For more information on Bulk API v2, see:

Bulk API 2.0 Developer Guide
sfdconestop YouTube channel (over 70 videos) - https://www.youtube.com/c/sfdconestop

Salesforce Governor limits


Below are the per-transaction limits:
  • Total number of SOQL queries issued (this limit doesn't apply to custom metadata types; in a single Apex transaction, custom metadata records can have unlimited SOQL queries): 100 synchronous / 200 asynchronous
  • Total number of records retrieved by SOQL queries: 50,000
  • Total number of records retrieved by Database.getQueryLocator: 10,000
  • Total number of SOSL queries issued: 20
  • Total number of records retrieved by a single SOSL query: 2,000
  • Total number of DML statements issued: 150
  • Total number of records processed as a result of DML statements, Approval.process, or database.emptyRecycleBin: 10,000
  • Total stack depth for any Apex invocation that recursively fires triggers due to insert, update, or delete statements: 16
  • Total number of callouts (HTTP requests or web services calls) in a transaction: 100
  • Maximum timeout for all callouts (HTTP requests or web services calls) in a transaction: 120 seconds
  • Maximum number of methods with the future annotation allowed per Apex invocation: 50
  • Maximum number of Apex jobs added to the queue with System.enqueueJob: 50
  • Total number of sendEmail methods allowed: 10
  • Total heap size: 6 MB synchronous / 12 MB asynchronous
  • Maximum CPU time on the Salesforce servers: 10,000 ms synchronous / 60,000 ms asynchronous
  • Maximum execution time for each Apex transaction: 10 minutes
  • Maximum number of unique namespaces referenced: 10
  • Maximum number of push notification method calls allowed per Apex transaction: 10
  • Maximum number of push notifications that can be sent in each push notification method call: 2,000
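To see how close a transaction is to these limits at run time, the standard Limits class exposes a used/allowed method pair for each one. A quick sketch:

// Each governor limit has a Limits.getX() / Limits.getLimitX() pair
System.debug('SOQL queries: ' + Limits.getQueries() + ' of ' + Limits.getLimitQueries());
System.debug('DML statements: ' + Limits.getDmlStatements() + ' of ' + Limits.getLimitDmlStatements());
System.debug('CPU time (ms): ' + Limits.getCpuTime() + ' of ' + Limits.getLimitCpuTime());
System.debug('Heap size (bytes): ' + Limits.getHeapSize() + ' of ' + Limits.getLimitHeapSize());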

Capture and debug network traffic with Fiddler

The steps below should be followed when you experience performance issues with Salesforce, and only when instructed by a Technical Support Engineer. The steps might be slightly different based on the version of FiddlerCap you are using. For specific questions about how to use this software, please contact FiddlerCap support.
To do this, we will use a tool called Fiddler (not a Salesforce product). Fiddler is a tool for analyzing HTTP and HTTPS transactions (HTTPS capture must be enabled from within the app). It is a simple-to-use program that can save logs for you to send back to the support team.

In order to install this program, you will need Administrative rights on your local machine. Please consult your IT department if you are unsure, or experience problems installing the application.

Resolution

If you're working with a Salesforce support agent, they may request a capture of network traffic to help with troubleshooting. These captures help support diagnose issues such as network performance, proxy and browser issues, single sign-on, or integration problems.

1. Download Fiddler 4 on the local machine
2. Complete the installation and run the application
3. In Fiddler, from the top menu click on Tools | Options | HTTPS |Capture HTTPS CONNECTs
4. Check the box for Decrypt HTTPS traffic and agree to any prompts to install the needed certificate for https decryption
5. Select "Ignore server certificate errors" and click OK to exit the Fiddler options.
6. Ensure Capture traffic is selected under the File menu or press F12
7. Run the application or process that needs debugging and work normally until the issue reoccurs.
8. In Fiddler navigate to File | Save | All sessions and save them as a .saz file
9. Close Fiddler
10. Compress (zip) the file and attach it to the support case along with the exact time the issue occurred. You can also email it directly to the support rep you are working with.

* To compress the file, locate the file in the folder after you saved it and then right click on it and click on Send to | Compressed (Zipped) folder

Lightning Component Best Practices

Lightning Components run at the client-side, in a single page (where they are created and destroyed as needed), and alongside other components that work on the same data. In this blog post, we discuss how these characteristics impact performance, and review a list of best practices to optimize the performance of your Lightning Components.

https://developer.salesforce.com/blogs/developer-relations/2017/04/lightning-components-performance-best-practices.html

Data Migration Best Practices

How to Capture HAR file and Analyze

 
Create and view HAR files in supported browsers

Chrome

1. Go to the page where you're experiencing slowness.
2. Press F12 on your keyboard.
3. Click the Network tab in the diagnostic window.
4. Click on the link/button/tab to have the problem page or action load in the main window.
5. After the page loads, you should see some information and graphs in the diagnostic window. If the slowness was seen during this page load, Right-click in the diagnostic window and Save as HAR with content.
6. If you need to preserve the logs for multiple page loads, there's a 'preserve log' checkbox below the tabs.
7. Press F12 to remove the diagnostic window.


Internet Explorer


1. Press F12 on the keyboard.
2. You should see a component appear on the bottom of the screen.
3. Go to the Network tab in this component and press the green triangle (Play button)
4. Reproduce the issue.
5. To save, click the red square (stop button). Directly to the right you will see a disk icon with an arrow on it.
6. Click this icon and save it.
7. IE only offers options to export as an XML or CSV file. Either format is fine. CSV can be viewed via Excel and XML can be viewed by any tool that can read HTTP Archive files, such as the Chrome extension, 'HTTP Archive Viewer.'


View the HAR file log

1. Go to the HAR viewer.
2. Uncheck: Validate data before processing? (Otherwise, an error may occur.)
3. Drag the HAR file into the 'Preview' box.
 

Row Lock - Record currently unavailable errors

 
When a record is being updated or created, we place a lock on that record to prevent another operation from updating it at the same time and causing inconsistencies in the data.

These locks normally last for a few seconds and when the lock is released, other operations can do whatever processing they are supposed to do on the record in question. However, a given transaction can only wait a maximum of 10 seconds for a lock to be released, otherwise it will time out.


Common scenarios that lead to record lock errors

a. Email-To-Case

When an email is processed by Email-to-Case, triggers on the EmailMessage object or related objects (for example, the parent account) will attempt to lock those records for processing. If another process is holding a lock on these records and the processing of the email has to wait for more than 10 seconds, a timeout will occur and you will see this error.
b. Apex Triggers/API

Assume there is an after insert Apex trigger on Task that runs for about 14 seconds while doing its processing. This trigger runs whenever a task is created. When a task related to an account is created, we place a lock on the parent account while the task is being created. This means the account cannot be updated while the task creation is in progress.
Where possible, reduce the use of locking statements (SELECT ... FOR UPDATE), as shown in the sketch below.
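For context, a locking statement looks like this (the record Id is illustrative). The records it returns stay locked for the remainder of the transaction, and any other transaction that needs them waits up to 10 seconds before failing:

// Locks the returned account for the rest of this transaction; another
// transaction updating the same account waits up to 10 seconds, then
// fails with an UNABLE_TO_LOCK_ROW error
Id someAccountId = '001000000000001AAA'; // illustrative record Id
Account acct = [SELECT Id, Name FROM Account WHERE Id = :someAccountId FOR UPDATE];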

Scenario:
User A imports a task via the data loader and assigns it to an existing account record. When the task is inserted, the apex trigger is fired.
Just 2 seconds after User A starts the insert via the data loader, User B is manually editing the same account record the task is related to.
When User B clicks Save, internally we attempt to place a lock on the account, but the account is already locked so we cannot place another lock. The account had already been locked by the creation of the Task.
The 2nd transaction then waits for the lock to be removed. Because the task takes about 14 seconds to be created, the lock is held for 14 seconds. The second transaction (from User B) times out, as it can only wait for a maximum of 10 seconds.

In this case, user B would see an error on the screen similar to the ones mentioned above. Apex Tests can also incur locks if run against production data​.

c. Bulk API

Inserting or updating records through the Bulk API can cause multiple updates on the same parent record at once, because batches are processed in parallel. For example, if two batches are processed at the same time and both contain records pointing to the same parent record, one batch will place a lock on the parent record, which can lead to the other batch throwing an "unable to lock row" error because it could not get a lock within 10 seconds.

To prevent this, you can do one of the following:
Reduce the batch size.
Process the records in serial mode instead of parallel, so that one batch is processed at a time.
Sort the records by parent record, to avoid having different child records (with the same parent) in different batches when using parallel mode.
For these types of scenarios, it's also highly recommended that you become familiar with the guidelines found in the article


d. Master-detail Relationship

If a record on the master side of a master-detail relationship has too many child records (thousands), you are likely to encounter these errors as well, because every time you edit a detail record, the master record is locked. The more detail records you have, the more likely they are to be edited concurrently by users, causing the parent record to be locked.

To prevent this issue, you can move some child records to another parent to reduce the number of child records attached to a single parent record. Record-level lock contention is common in this scenario and can be reduced by distributing child records across parents.


Troubleshooting:
1. Enable debug logs for the user who is facing the error to find the offending trigger, flow, or validation rule causing the issue.
2. Check for any dependent background jobs running on the same object. If there are any, try pausing the jobs and then performing the actions to reduce row locks.

Performance Testing Practices

 

Sandbox Refresh Preview/non Preview Recommendations

Sandbox instances are split into two groups - preview and non-preview. Preview instances are upgraded to the newer version of Salesforce before production instances (e.g. NA2, EU1, AP0), and non-preview instances are upgraded towards the end of a major release, along with the majority of production instances.
Review the detailed video here -
 

You can key in your instance name (for example, cs87) at the end of the below URL to understand what action to take to stay in preview or non-preview mode -

Sample format -
https://sandbox-preview.herokuapp.com/sandbox/cs87

For multiple sandbox instances -
https://sandbox-preview.herokuapp.com/sandbox/cs87,cs88,cs89,cs90

Q: Is it possible to move my sandbox that received the release preview off the preview without a refresh?

A: No, in order to move a sandbox off of the release preview you will need to submit a refresh to direct the copy to a non-preview instance.

There are no exceptions, and a refresh is always required in this circumstance: the preview upgrade is applied to the entire instance, not at the org level, so the only way to move a sandbox located on a preview instance to a non-preview instance is a refresh.


Q: Is it possible to upgrade my sandbox to the preview after the preview release date has passed?

A: No. Since your production environment is still on the current release, it is not possible to move an existing sandbox, or refresh a new sandbox copy, onto the release preview. The release version of a sandbox will always match the version of its production organization at the time of the copy.

This is why the sandbox preview window is important: it is designed to let you direct copies to sandbox instances that will be receiving an early upgrade, or preview, of the next release. The preview instances are all upgraded on a specific date. The only way to have a sandbox on the release preview is for the refresh to complete with the sandbox located on a preview instance, so it can be upgraded along with the instance.

If the preview upgrade date has passed it is no longer possible to direct copies or move a sandbox onto the release preview. The next opportunity to refresh a sandbox to the next major release will be after your production instance has been upgraded.

Should you need to test in a preview org, the only recommendation is to sign up for a pre-release org. The sign up link is typically posted in the Salesforce Blog entry for each major release.


Q: Is it possible to refresh my sandbox and keep it on the release preview if production hasn't been upgraded to the next release yet?

A: No, since your production instance and the sandbox copy's target preview instance are on different release versions this is not possible from an architectural and versioning standpoint.

There may be alternative ways to achieve the same goal as a refresh. For example, to move application changes to the sandbox, consider using change sets, the Force.com IDE, or the Force.com Migration Tool to migrate your metadata. If it is a matter of bringing in data entered since the refresh, you can use the API (Apex Data Loader) to extract the data you need for testing from production and then import it into your sandbox.

Full copy Sandbox Refresh best practices -
  • Uncheck “Include Chatter Data”
  • Uncheck “Include Field Tracking History Data”
  • Use Sandbox template







Instance Refresh Readiness

In order to prepare for your organization’s continued growth, we occasionally need to perform an activity, called an instance refresh, where we upgrade the infrastructure supporting your instance in our data centers. Following the maintenance, your instance will move to a new data center, and the name of your instance will change. This will enable us to continue to provide organizations with the same levels of performance they have come to expect from Salesforce.

For example, if your org is on the NA1 instance, after the instance refresh your org would move to NA2.


If you follow Salesforce best practices, this maintenance should be seamless. Below are some frequently asked questions regarding instance refresh maintenance.

https://help.salesforce.com/articleView?id=Instance-Refresh-Maintenance-FAQ&language=en_US&type=1

How to prepare for instance refresh video -
https://salesforce.vidyard.com/watch/gmdRR8QbUA44bVyHNViA37

Friday, 29 March 2019

Optimize SOQL/Reports/List Views

sfdconestop YouTube channel (over 70 videos) - https://www.youtube.com/c/sfdconestop
Maximizing the Performance of Force.com SOQL, Reports, and List Views

If data is king, timely access is queen. If you have sales representatives closing opportunities, support representatives working through a list of cases, or even managers running reports, you’ll want to optimize query performance in your Force.com applications. In salesforce.com’s multitenant environment, the Force.com query optimizer does its own kind of optimization, generating the most efficient SQL from your SOQL, reports, and list views. This blog post explains the filter conditions and the Force.com query optimizer thresholds that determine the selectivity of your queries and affect your overall query performance.
The Force.com Query Optimizer


The Force.com query optimizer is an engine that sits between your SOQL, reports, and list views and the database itself. Because of salesforce.com’s multitenancy, the optimizer gathers its own statistics instead of relying on the underlying database statistics. Using both these statistics and pre-queries, the optimizer generates the most optimized SQL to fetch your data. It looks at each filter in your WHERE clause to determine which index, if any, should drive your query.


To determine if an index should be used to drive a query, the Force.com query optimizer checks the number of records targeted by the filter against selectivity thresholds. For a standard index, the threshold is 30 percent of the first million targeted records and 15 percent of all records after that first million. In addition, the selectivity threshold for a standard index maxes out at 1 million total targeted records, which you could reach only if you had more than 5.6 million total records.


So if you had 2.5 million accounts, and your SOQL contained a filter on a standard indexed field, that index would drive your query if the filter targeted fewer than 525,000 accounts (30 percent of the first million is 300,000, plus 15 percent of the remaining 1.5 million is 225,000). For a custom index, the thresholds are lower: 10 percent of the first million targeted records and 5 percent of all records after that, capped at 333,333 targeted records. With 2.5 million accounts, a custom index would drive the query only if the filter targeted fewer than 175,000 records.

In these standard index and custom index examples, the Force.com query optimizer uses the standard and custom indexes because each number of targeted records falls below the appropriate selectivity threshold. If, on the other hand, the number of targeted records exceeds an index’s selectivity threshold, the Force.com query optimizer does not use that index to drive the query.


The Inside the Force.com Query Optimizer webinar explains in more detail how you can create selective queries for the Force.com query optimizer.
Common Causes of Non-Selective SOQL Queries


There are several factors that can prevent your SOQL queries from being selective.
Having Too Much Data


Whether you’re displaying a list of records through a Visualforce page or through a list view, it’s important to consider the user experience. Pagination can help, but will your users really go through a list with thousands of records? You might not have this much data in your current implementation, but if you don’t have enough selective filters, these long lists can easily become an issue as your data grows. Design your SOQL, reports, and list views with large data volumes in mind.
Performing Large Data Loads


Large data loads and deletions can affect query performance. The Force.com query optimizer uses the total number of records as part of the calculation for its selectivity threshold.


This number takes into account your recently deleted records. A deleted record remains in the Recycle Bin for 15 days—or even less time if you exceed your storage limit, and the record has been in the Recycle Bin for at least two hours—and then that record is actually removed from the Recycle Bin or flagged for a physical delete. When the Force.com query optimizer judges returned records against its thresholds, all of the records that appear in the Recycle Bin or are marked for physical delete do still count against your total number of records.


From our earlier example of accounts and a custom indexed field, the selectivity threshold was 175,000, and the total number of records was 2.5 million.


Let’s say that a Bulk API job runs and deletes all records before January 1, 2013, and those records total 2.4 million. That leaves us with 100,000 non-deleted account records. If the deleted records are still in the Recycle Bin, the Force.com optimizer mistakenly thinks that the 100,000 non-deleted records fall under and meet a 2.5 million-record selectivity threshold, and it generates a query that isn’t optimized. In reality, the threshold is 10,000 targeted records (10 percent of 100,000 targeted records).


If the deleted records do not need to go to the Recycle Bin, use the hard delete option in the Bulk API or contact salesforce.com Customer Support to physically delete the records.


If your data loads cause the records targeted by your filter to exceed the selectivity threshold, you might need to include additional filters to make your queries selective again.
Using Leading % Wildcards

A query filtering with a leading % wildcard (for example, Name LIKE '%string%') would normally work better with SOSL. However, if you need real-time results, an alternative is to create a custom search page that restricts leading % wildcards and adds governance on the search string(s).


Note: Within a report/list view, the CONTAINS clause translates into ‘%string%’.
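To illustrate (the search string is hypothetical):

A leading % wildcard cannot be driven by the index on Name:
SELECT Id FROM Account WHERE Name LIKE '%united%'

A trailing-only wildcard can use the index, if it meets the selectivity threshold:
SELECT Id FROM Account WHERE Name LIKE 'united%'

The SOSL equivalent for contains-style search:
FIND {united} IN NAME FIELDS RETURNING Account(Id, Name)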
Using NOT and !=


When your filter uses != or NOT—which includes using NOT EQUALS/CONTAINS for reports, even if the field is indexed—the Force.com query optimizer can’t use the index to drive the query. For better performance, filter using = or IN, and the reciprocal values.


Note: Using a filter on an indexed field such as CreatedDate is always recommended, but this field was not included in the original query so that we could make a point about the selectivity threshold.
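As a quick sketch (the picklist values are illustrative), a negative filter can be rewritten with IN and the reciprocal values:

A filter using != cannot be driven by an index:
SELECT Id FROM Case WHERE Status != 'Closed'

The selective rewrite enumerates the reciprocal values:
SELECT Id FROM Case WHERE Status IN ('New', 'Working', 'Escalated')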
Using Complex Joins


Complex AND/OR conditions and sub-queries require the Force.com query optimizer to produce a query that is optimized for the join, but might not perform as well as multiple issued queries would. This is especially true with the OR condition. For Force.com to use an index for an OR condition, all of the fields in the condition must be indexed and meet the selectivity threshold. If the fields in the OR condition are in multiple objects, and one or more of those fields does not meet a selectivity threshold, the query can be expensive.


For more information on AND/OR conditions, refer to the Inside the Force.com Query Optimizer webinar.


Filters on formula fields that are non-deterministic can’t be indexed and result in additional joins. Common formula field practices include transforming a numeric value in a related object into a text string or using a complex transformation involving multiple related objects. In both cases, if you filter on this formula field, the Force.com query optimizer must join the related objects.


If you have large data volumes and are planning to use this formula field in several queries, creating a separate field to hold the value will perform better than following either of the previous common practices. You’ll need to create a workflow rule or trigger to update this second field, have this new field indexed, and use it in your queries.
 
To find whether a report is in a private folder:
SELECT Id, DashboardId, CustomReportId, dashboard.developername, dashboard.title, dashboard.foldername, dashboard.folderid, dashboard.createdby.name FROM DashboardComponent USING SCOPE allPrivate WHERE CustomReportId=''
 
SELECT Id, CreatedDate FROM AsyncApexJob WHERE JobType = 'ApexToken' AND Status = 'Queued' AND CreatedDate > 2019-09-04T00:00:00Z ORDER BY CreatedDate DESC

SELECT ApexClassId,CreatedDate,Id,JobItemsProcessed,NumberOfErrors,ParentJobId,Status,TotalJobItems
FROM AsyncApexJob where Status='Processing' and JobType='BatchApex'  


Concurrent Apex Limit - Unable to Process Request. Concurrent requests limit exceeded.


 
This is one of the governor limits you may hit if you do not follow best practices during implementation and code development. For most orgs the limit is 10, which means you cannot have more than 10 synchronous requests that have each been running for longer than 5 seconds at the same time; the 11th such request hits the governor limit error below -



Governor limit:
Number of synchronous concurrent requests for long-running requests that last longer than 5 seconds for each organization.*

Error:
"Unable to Process Request. Concurrent requests limit exceeded.
To protect all customers from excessive usage and Denial of Service attacks, we limit the number of long-running requests that are processed at the same time by an organization. Your request has been denied because this limit has been exceeded by your organization. Please try your request again later."

https://help.salesforce.com/articleView?id=admin_web_limits.htm&type=5

What Counts Against the Limit

A single synchronous Apex request could include Apex code, SOQL, callouts, and triggers. You might need to tune these and other common components because the duration of their transactions counts toward the request limit.
  • Apex: classes/controllers, triggers, SOQL
  • Web services: external and Apex web services
  • Visualforce: ActionPoller, Ajax/ActionFunction, JavaScript remoting
  • API: calls to an Apex class

How to Design to Avoid the Limit

Take the concurrent request limit into consideration as you design your application around your business processes. Does the business process need to be synchronous? Is batch processing possible? Can you use the Streaming API?
Web Services

The most common causes of limit errors are synchronous web service callouts. When one of your application’s users submits an order, that business process may depend on one or more external web services, which must finish running for your application to actually create the order. If these external web services cannot scale to your expected volumes, consider alternative solutions, such as using a callback instead of waiting for the callouts to complete.

To use a callback, you still make the synchronous callout; the following steps then complete automatically after that (see the sketch after this list).
The external Web service immediately returns an acknowledgement, saying that it received your request.
After the external Web service processing completes, it calls an Apex Web service to update the appropriate data.
The Streaming API publishes that updated data.
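A minimal sketch of step 2, the Apex web service that the external system calls back into; the resource path, custom object, and field names are all hypothetical:

@RestResource(urlMapping='/orderCallback/*')
global with sharing class OrderCallbackService {
    // Called by the external web service once its processing completes
    @HttpPost
    global static void handleCallback(String orderId, String status) {
        Order_Request__c rec = [SELECT Id, Status__c FROM Order_Request__c WHERE Id = :orderId LIMIT 1];
        rec.Status__c = status;
        update rec; // the Streaming API can then publish this change to subscribers
    }
}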


SOQL

The performance of your queries and DML operations is another big contributor to long-running requests. As your data grows, inefficient SOQL affects Visualforce pages, detail pages, list views and reports. If you’re querying for large amounts of data, you’ll incur additional processing time both when querying and rendering the data.

Refer to the Force.com Query Optimizer webinar for more information.
Data Skew

Data skew can also contribute to concurrent request limit errors. Consider the following scenario. You have a parent object with 10,000 or more child objects. Ordinarily, when you insert or change an owner of a child object, the Force.com platform automatically locks the parent for a certain amount of time. However, because you have data skew, the platform holds this lock even longer while determining the appropriate record access. The wait time for your lock is included in your total request time, and it causes your request to run for more than 5 seconds.

To avoid data skew so that you can also avoid ending up in this situation, read Reducing Lock Contention by Avoiding Data Skew.
Visualforce

With the ActionPoller component, you can poll by calling an Apex class. Unfortunately, you can’t dynamically change the polling interval or condition. This can result in a large number of unneeded requests. If the polling operation is expensive and takes longer than 5 seconds, you’ll quickly hit the limit. For more scalable requests, use the Streaming API, not polling, to subscribe to near real-time notifications of changes to data.
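For reference, subscribing via the Streaming API starts with a PushTopic. A minimal sketch (the topic name and query are illustrative):

// Insert once (for example, via Anonymous Apex); clients then subscribe
// to /topic/CaseStatusChanges instead of polling with ActionPoller
PushTopic topic = new PushTopic();
topic.Name = 'CaseStatusChanges';
topic.Query = 'SELECT Id, Status FROM Case';
topic.ApiVersion = 45.0;
topic.NotifyForOperationUpdate = true;
topic.NotifyForFields = 'Referenced';
insert topic;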

The <apex:actionFunction> component provides support for invoking controller action methods directly from JavaScript code using an AJAX request. JavaScript remoting extends this capability and allows you to create pages with complex, dynamic behavior that isn’t possible with the standard Visualforce AJAX components. However, from an Apex perspective, both of these components are still synchronous requests that are similar to standard Visualforce requests, and they count toward your concurrency limit.
Summary

The concurrent request limit attempts to ensure a positive user experience by limiting the number of synchronous Apex requests running for more than 5 seconds. Once the limit is reached, new synchronous Apex requests are denied. This behavior can be disruptive to your work. Therefore, it is easier to avoid this limit when designing new applications than it is when tuning live applications. You can accomplish this goal by ensuring that your users’ workflows do not include synchronous requests that take longer than 5 seconds.

Some useful tips:
Convert synchronous processes to asynchronous processes; Batch Apex or Queueable Apex might be a viable alternative (see the sketch after this list). Limit synchronous web service callouts.
Use the Streaming API instead of polling
Tune SOQL and DML operations. Make sure that your queries are selective. Limit the number of records in your list views. Avoid data skew.
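For instance, a long-running piece of work can be moved out of the synchronous request with Queueable Apex. A minimal sketch (the class name and logic are illustrative):

// Enqueued work runs asynchronously, so the user's synchronous request
// returns quickly instead of counting toward the long-running limit
public with sharing class OrderProcessingJob implements Queueable, Database.AllowsCallouts {
    private List<Id> orderIds;
    public OrderProcessingJob(List<Id> orderIds) {
        this.orderIds = orderIds;
    }
    public void execute(QueueableContext ctx) {
        // long-running processing and callouts go here
    }
}

// From the synchronous request:
// System.enqueueJob(new OrderProcessingJob(orderIds));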

Tuesday, 26 March 2019

Understand usage of fields in an object using field trip

In a large organization with many custom and standard objects, where fields are created as requirements come in, there is a good chance you end up with multiple fields that are never used as intended, or you may even hit the Salesforce limit on field creation per object (1,000) and have to work out which fields can be deleted to make room for new ones. Eliminating unnecessary fields helps with user adoption and efficiency and keeps your org clean. So how can you find these underutilized fields?

There is an application called Field Trip that does this job for us. Install the managed package and the app gives you a very clear picture of the percentage utilization of fields across all objects, whether standard or custom. The app can be downloaded from the AppExchange for free!

Field Trip lets you analyze the fields of any object, giving you instant insight into what percentage of your records (or a subset) have a specific field populated. Run reports on the standard and custom fields you have in Salesforce for a better understanding of which fields are important to your organization.

With a simple install, an intuitive user interface, and an easy-to-export report, Field Trip has made analyzing your fields quick and painless.

Once installed, simply name your trip, select an object (e.g. Accounts) and, optionally, add a filter (for record subsets). You will then receive a report detailing field usage (or lack thereof), available for simple export.

For more details visit : https://appexchange.salesforce.com/listingDetail?listingId=a0N30000003HSXEEA4

Using this link, you can download the app for your org and check the utilization of fields in an object.

Saturday, 23 March 2019

Understand your code for a new org

Examining your code base can seem daunting, especially if you’re new to an org or if your org has lots of code. But understanding how the different pieces of your org relate to one another is an essential part of identifying how to begin managing your org in more precise, meaningful units.

Triggers
  What to look for: 1. Trigger patterns 2. Trigger logic
  Questions to ask: Does your org have one trigger per object? Is there business or application logic written directly in a trigger? Do triggers “hand off” logic or functionality to other classes (aka trigger handlers; see the sketch after this table)?

Apex Classes
  What to look for: 1. Naming conventions 2. Comments 3. API version
  Questions to ask: Do Apex classes use common prefixes or even namespaces to group units of code? Do classes have similar names, based on functionality? Is the purpose and authorship of code documented in comments? Do classes have comments that help clarify function? What API versions do classes use?

Apex Tests
  What to look for: 1. Test patterns/units 2. Code coverage 3. Test data handling
  Questions to ask: How do tests relate to other code? Does each class have its own test? Are your tests organized into functional groups? Are there parts of your code base not covered by tests? Do your tests depend on common data factories or static resources? Do any of your tests use the 'seeAllData=True' annotation, or run on an API version earlier than 24?

Lightning Components and Events
  What to look for: 1. Naming conventions 2. Comments 3. Apex controllers 4. API version
  Questions to ask: Do components use common prefixes or even namespaces to create groups? Do components have clear names, related to functionality? Are Lightning events scoped to be application events or component events? Are the purpose and authorship of components and events clearly documented in comments or Aura documentation files? Do components use Apex controllers? What API versions do components and events use?

Visualforce
  What to look for: 1. Naming conventions 2. Comments 3. Apex controllers 4. API version
  Questions to ask: Do Visualforce pages and components use common prefixes or even namespaces to create groups? Do pages have clear names, related to functionality? Do pages use Apex controllers? What API versions do pages use? Are pages used with any email templates?
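A minimal sketch of the one-trigger-per-object handler pattern the Triggers row asks about (the object choice and class name are illustrative):

// AccountTrigger.trigger: no logic in the trigger body itself
trigger AccountTrigger on Account (before insert, before update) {
    AccountTriggerHandler.handle(Trigger.new, Trigger.oldMap);
}

// AccountTriggerHandler.cls: business and application logic live here
public with sharing class AccountTriggerHandler {
    public static void handle(List<Account> newRecords, Map<Id, Account> oldMap) {
        // logic goes here, where it can be unit tested and reused
    }
}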


These pieces help you identify patterns within the code in your org. But these techniques may not help you understand every piece of your code base. Or, if your code base doesn’t seem to be consistently organized, you may need to try other ways to discover how your org’s code is connected.

This is where the new Dependency API can help.
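For example, the Dependency API exposes dependencies as the MetadataComponentDependency object, queryable through the Tooling API. A sketch (the WHERE filter is illustrative):

SELECT MetadataComponentName, MetadataComponentType,
       RefMetadataComponentName, RefMetadataComponentType
FROM MetadataComponentDependency
WHERE RefMetadataComponentType = 'CustomField'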

Friday, 1 February 2019

Understand configuration of your new org/Salesforce Optimizer

When you are assigned to a new project or org, it is not easy to understand all of its use cases.

How can you start looking at things you’ve built with clicks and not with code?

One way to get started is to use the Salesforce Optimizer.

This tool can recommend ways to improve some of the features in your Salesforce implementation. After you’ve looked at your Optimizer report, you can look more deeply into your org’s processes and declarative customizations.

So what should you look for?

Process Builder
  What to look for: 1. Object-related patterns 2. Active/inactive versions 3. Process logic
  Questions to ask: How many processes exist per object? Are processes clearly named? How many inactive versions exist per process? Do decision nodes have clear logic? Are commonly used actions grouped into invocable processes?

Workflow Rules
  What to look for: 1. Object-related patterns 2. Active/inactive rules 3. Action logic
  Questions to ask: How many workflow rules do objects have? Are some objects busier? Are rules clearly named, with descriptions? How many active and inactive rules exist on objects? What kinds of actions do rules execute? Do rules carry out any cross-object field updates?

Flow/Visual Flow
  What to look for: 1. Naming conventions 2. Object-related patterns 3. Active/inactive versions 4. Flow logic 5. Flow screens
  Questions to ask: Do flows use prefixes or similar names to create groups? Do flows have names clearly related to functionality? Do flows have clear, up-to-date descriptions? What object(s) does a flow interact with? What is the relationship between inactive flows or flow versions and active flows? Do flows put common functionality into subflows, invocable actions, or quick actions? If flows have screens, are they based on Lightning components? Do screens depend on certain objects and fields?

Objects and Fields
  What to look for: 1. Naming conventions 2. Record types 3. Page layouts 4. Permissions 5. Action overrides
  Questions to ask: Were custom objects created that duplicate standard object behavior? Do multiple business units use the same objects or fields? Are business logic and validations differentiated by record types? Do objects and fields have clear, up-to-date descriptions?
You want to create a clear sense of how well organized your processes and declarative customizations have been to date. If you find that your org isn’t as organized as you’d like, that’s OK. Now is the time to identify places where your team can work to increase quality and develop some standards that can help you build a healthier org moving forward. You may also identify projects that you want to tackle first, to clean up pieces of your org.